Re: Oh so familiar
Me too....but I use paypal with them, so I can control the back end source of funds.
JANET, CompuServe, Demon, then Force9 (later to become Plusnet), for me. I have since BT'd, but I'm better now, thanks (A&A if someone else is paying, but Plusnet still used, and yes, I know they are BT owned... but they are not BT. You can ring them, for a start). With A&A, or more specifically AK, I installed an early-ish consumer-level VoIP system - Network Alchemy, I think - back in the days when the Rev came out to install the equipment on the cabling I had pre-installed. I had a Nokia 9000 at the time and AK had just got a brand new 9110 - so it would be 1998. I was gutted when Demon went under. I first hosted my own website in the Demon days... and it's still running now, but hosted under very different circumstances.
AOL was always one to be avoided. The volume, and quality, of the advertising was the clue.
Gnome is a nice, simple, corporate-style window manager. It looks familiar to the end user, and does not impede productivity (as long as the machine has the resources to run it properly). It has nice things like online account integration, and Dash to Dock, that add to ease of use. The user sees a consistent interface; I can optimise the underlying system to perform specific workloads. I can deploy any of the major distributions (for reasons based on role), whether that be CentOS, Debian, Ubuntu, or Fedora.
I, personally, use Peppermint OS (for the ease of integrating with my web services, as well as the lower resource footprint) on my laptop. I also use Gnome (I'm currently a Debian house, but that has been and may be again Ubuntu - just not until Unity has been purged. Yuk - in the sense that it has never worked) with Dash to Dock on my multi-screen powerhouse workstations.
There is a big difference between the individual user's preferences on a development machine, and a large-scale deployment* and training. For the latter, I'd pick Gnome.
*My large-scale deployments are only 3-10 users these days, but the consistent interface means that I can write user-level training materials that are equally applicable whatever the underlying machine, role, and OS.
"Whenever I don't feel like my mobile operator knowing where my phone is, it's understood I can just switch off my phone."
Can you? I'm curious, and have been for a while. I know that you can press the power button, and the screen goes black, but in these days of enforced non-removable batteries I'd be very surprised if your phone is ever really "off". What you mean is, you don't know what it's doing, because nobody has told you.
I know mine is on, because it senses the usb cable being attached, and displays the level of battery charge.
I loved my old first-gen Nokia Communicator... but when considering any device such as this, I apply two criteria:
How does it compare on price and function to my phone, my Think Outside Bluetooth keyboard, and my circa £200 Acer S3? And under what circumstances would I carry it in preference to those?
I've not bought any of the various options I've considered, yet.
I'd love something that compared with the Acer S3 at that sort of price point, but suspect I'll have to buy at least one new battery before that comes to pass.
"So, out of curiosity - what non-phone device do you keep or do *really* important stuff on? Because whatever it is, you would have to be an idiot to think it is in any sense secure."
No, quite. When I said *really* important, I meant important to me, not "really" important in the grander scheme of things. I accept that in order to have a degree of convenience in interacting with the modern world I will have to engage with devices and systems that are largely out of my control and understanding. My communications profile does not merit a network of embedded agents passing memorised verbal coded messages across the globe. I am sceptical about devices like phones that suddenly have to have permanently installed batteries, and may or may not be doing all manner of things for which I have not given informed consent. There is much circumstantial evidence of us being violated (from a data perspective - see the Facebook "listening in" controversy/conspiracy theory, for example - and no, apart from trying it initially when it was released outside colleges, I do not log in to Facebook).
When I do online banking, bill paying, or anything involving confidential information or registrations, I tend to use Qubes and perform these more sensitive tasks in a disposable VM. At least that way I know that I'm not getting persistence and crosstalk/data leakage beyond the duration of the activity. Or at least I think I do... I've not checked every line of code. I am cautious, but I am not important enough to be comprehensively locked down and verifiably secure. It's a balance. I haven't tried routing these particular activities over Tor, for example. For a start it tends to be unusably slow at shifting data about, and secondly my inherent cynicism makes me assume that it's just a construct to highlight to the security services that you think you might have something to hide, so they should scrutinise you more (which might help explain why it's so bloody slow). I'm not doing anything wrong, and I'm really not looking to make my life more difficult than it need be. So moderate, sensible precautions are what I'm aiming for.
I won't be doing these sorts of things on any phone, any time soon, I don't expect.
@ Charlie Clark
Meanwhile? Over two years ago.
Smart Lock (On-body detection, Trusted Devices, Trusted Places, Trusted Face, Trusted Voice) can be configured simply to your preference in Android. That tends to be the advantage of Android: it can be configured to your preference.
The advantage of the iPhone (where the i is them, not you) is that you are assimilated by the Borg: resistance is futile, you will do as you are told for the greater good of the collective. With a bit of Shiny! as a distraction.
That this is the notable talking point about the upcoming round of a new Apple phone release tells you all you need to know. It's more overpriced shiny, with little or no bang-per-buck advantage over anything else.
Let's be honest, there are relatively few people for whom something like a Moto G5 wouldn't be perfectly adequate. This isn't primarily about technology any more. It's about fashion... and the design and pricing are predominantly based on market positioning, not cost.
It's worse than no more actual innovation, it's things getting worse.
I'd still like a micro SD card slot, and a replaceable battery, please. Replaceable battery particularly. I know a number of people with phones that they are perfectly happy with, but that they are getting rid of because of a failing battery. The more slimline and over-glued the damn things are, the more eye-wateringly expensive repair becomes, and the more likely DIY replacement is impossible. Unibodies efficiently transfer shock into the internal components, and are normally hard as hell to get into to repair. Also, I find, innovation like wireless charging cooks the battery and the phone internals and shortens life.

At a battery failure every two years, the cost of ownership of a device priced at a grand is five hundred notes a year. That's bloody outrageous.

Give me a nice flexible plastic click-to-open body, and I'll coat it in a shock-absorbing case of my choice, and pop a tempered glass screen protector on so I only have to replace that, rather than my phone screen. I will still be able to get this into my pocket, or bag, so it's fine. It will also be big enough for me to be able to see, and not so slippy that I drop it or knock it off things every other time I try to pick it up. Note also, no curved backs to the phone for this reason.
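The back-of-the-envelope sums in that complaint are easy to check; a rough Python sketch (the £1,000 price and two-year battery life are the figures from the post above, while the £20 replacement battery is a made-up illustrative number, and the swappable case deliberately ignores the handset's own eventual lifespan):

```python
# Rough cost-of-ownership comparison: a sealed phone binned when the
# battery fails, versus keeping the handset and swapping batteries.

def annual_cost(purchase_price, battery_life_years, battery_cost=0):
    """Cost per year as the battery dies on a fixed cycle.
    battery_cost=0 models 'bin it and buy again'; a non-zero value
    models keeping the handset and only replacing the battery."""
    if battery_cost:
        return battery_cost / battery_life_years
    return purchase_price / battery_life_years

sealed = annual_cost(1000, 2)                       # £1,000 phone, binned at 2 years
swappable = annual_cost(1000, 2, battery_cost=20)   # hypothetical £20 battery

print(f"sealed unit: £{sealed:.0f}/year")     # the 'five hundred notes a year'
print(f"swappable:   £{swappable:.0f}/year")
```

Even with generous assumptions about the battery price, the gap is an order of magnitude or two, which is the whole argument in one division.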
My phone is fine, my camera is fine, unlocking it with my Mi Band 2 [insert your preferred trusted Bluetooth device here] is fine. I don't keep, or do, anything *really* important on my phone, because you would have to be an idiot to think that any of them are in any sense secure. It runs CyanogenMod/LineageOS, so is more up to date and less cluttered than most OEM "enhanced" phones.
My phone is really easy to tear down and reassemble. That's the sort of innovation that I'd like, please. I'd also like that conventional HD-shaped screen; I don't want some sort of idiot tall thin widescreen so that I can strap it to my head and pretend that I'm not in the real world. There's far too much of that sort of thing going on in general, never mind for a communications device.
If I have to hold a screen that close to my face for facial recognition to work, I won't be able to read the screen. So, no thanks.
Perhaps it's just me.
Budgie desktop, but the flavour of Ubuntu that I now favour is Debian.
I'm tired of all the various not-quite-joined-up bits I run into on Ubuntu, and there is uncertainty on the desktop given the demise of Unity. Stretch is pretty up to date, at least for now (and there's always Sid for development). Gnome on Debian is surprisingly quick. My multi-screen desktop with an ATI graphics card has never been more stable. I have a Gnome desktop, which can be duplicated (in terms of look and feel) on pretty much every distribution.
So I'm finding I use CentOS for supportable non-bleeding-edge applications, and Debian for development, and everything I do has never worked better.
So for me it's Gnome first (as a desktop), and whatever platform suits the application. The ubiquity of Gnome helps when it comes to moving platform.
Almost every distro defaults to Gnome these days, and despite its reputation it's not a bad choice.
In over twenty years working for an SME, doing everything from network and infrastructure, through support, to actual functional work sometimes (end-of-year accounts support and suchlike), I must have had thousands of such events. I have sympathy with the prof, because issuing training material, or even having new applications or operating system upgrades deployed with training, is nowhere near as common as it should be, or even always possible.
It's galling when director X calls with a problem trying to make application Y do something useful (that it can't), when they were the one who insisted that we *had to have* it in the first place. I would sit down with my users, solve any problem, and then just tell them that computers work when I touch them, and it's not their fault (while gently suggesting a working methodology that avoids the problem in future). On a systems level, just make sure all the data is stored securely and daily backups are made that users can't get their hands on (including email). Most of all, make sure that nobody in your organisation thinks that saving things on a single desktop computer is anything like acceptable, or supported by IT.
A computer expert can often cause problems, but many unskilled users make light work of truly screwing things up. Don't let them near anything important. Have central data, with historical backups. Be able to revert desktop environments to fresh installations that are known and tested.
My heart goes out to those whose job it is to navigate users through the ribbons, rather than wrestle the bits at infrastructure level.
...which is that the Windows environment is predicated on being a monetised platform, with paid-for software. If you want to generate cost points at every level of IT (from software sales, through customisation, and out to support), and therefore pay staff and build a business, then Windows is the smart choice (as a supplier; the customer pays).
If you want to set up something for yourself, at minimum expenditure, with the ability to source free tools and stitch them together to solve real tasks, then GNU/Linux has an awful lot to offer. Particularly with its superior non-crippled network and security options, and its more resource-efficient footprint on servers.
I've constructed, deployed, maintained, and supported both Windows and *nix servers and networks. Neither does all the things that the other does, and there are pros and cons... but anyone who seriously thinks an open-source-based network (servers and clients) is more expensive to run than a broadly equivalent Windows network has been at the funny stuff.
Lots of small, understandable tools that can be deployed as a consistent whole to achieve specific tasks, easily customised and efficiently deployed (resources, cycles, memory footprint, networking), beat the hell out of any monolithic, labyrinthine, dependency-riddled Windows application that I've ever had the displeasure of trying to fix.
The point here is that the NHS is a large organisation, with a wide range of clinicians and other staff, and sod all chance of getting competent systems managers to every point of presence. It has a scale and budget that should allow it to benefit from central provisioning and maintenance. The customised, cut-down, remotely maintained client-server model should be the best bet here (if with some local proxy/caching).
Way easier to achieve something good in this paradigm with GNU/Linux than Windows, in my view. Open source tools should be as functional, more transparent, and less costly - up front, in development, and in support. The difference is that you need to put together a team to do it, because there are not the layers of profit that there are in the proprietary sector, which pay for all the "supplier knows best" that you are allowed to purchase at a substantial markup, and then be locked into.
I didn't like Unity. Like others have said, it never really felt finished: it was a resource hog, threw me errors, didn't look great, and was awkward to use. It's had some resource thrown at it, and frankly it's not good enough for what it is and how long we've been waiting for it to work.
Look at Ubuntu Budgie: it's in its first incarnation, and though with less ambitious scope, it already feels like it hangs together better, and has a better look and feel.
Failing that, any desktop that runs Plank will do.
There are lots of perfectly fine desktops, without having to reinvent the wheel with corners. Most people who moan about the desktop and prefer Windows actually mean themes. The Linux community could do itself a favour by putting together some slick, well-designed themes, and worrying less about the actual desktop machinery.
Seriously. Nicely theme anything, whether it be Cinnamon, MATE, Budgie, KDE, Gnome or whatever, market it properly, and it will do as well as a reworking of the desktop idiom... just make it look nice, and have understandable system tools.
On a personal level I find getting things working with Ubuntu a bit easier than with other distributions. It offers a good balance of bleeding edge, community support, stability, and a manageable upgrade cycle. For virtualisation development, ease of use for things like OpenStack, and general development, it's not perfect, but better than most.
Unity I don't like, but the availability of so many options on the same base and repositories means there are ways around it. I like the look of Budgie, and it will be available as Ubuntu Budgie from 17.04. It's already workable, and likely my direction of travel in the short term.
I'd argue that it's government that's not competent.
BT, to my certain knowledge, were working on fibre in the local loop at Martlesham Heath in the late eighties and early nineties. I myself did a project in conjunction with them on fault finding in the local loop (TPON) in 1991.
Why did it never get deployed?
The Conservative government were keen to sell off cable franchises at maximum price. So they prevented BT competing on services (legislatively), and did not require cable companies to achieve full or substantial coverage (they could meet their targets just by focusing on high-density housing, and in many cases the former council estates that were the major market for cable TV services at the time).
The UK broadband infrastructure has never recovered from this cynical politically motivated carve up, seemingly purely for profit (tax cuts == buying votes).
Have you heard of Chromebooks?
Google's recent moves around its 'Linux' have put it very much in the enterprise market, along with Google Apps. They've put clear water between what they do for Google architecture and what they do for the Linux ecosystem.
Microsoft is perhaps trying to carve out a position via a "my enemy's enemy" strategy.
I can see your argument for a wearable, perhaps through display glasses, with eye and voice (mind!) control. I can see sensors located variously, with possibly a wrist device for notifications and alerts. I really cannot see the argument for a wrist-located general computing device, and the limitations that imposes.
Current tech, and best for most use cases, is a tricorder/handheld device supplemented with accessories (sensors, display, alerts/alarms, headsets). Something that allows the user to determine an appropriate setup.
It's not possible to make a general-purpose-computer smartwatch display that my old eyes can see, and that I'd want stuck on my wrist. Not doable.
I find smart watches (/dumb consumers) really fascinating from a marketing/brand perspective.
There seems to be a level of social acceptance/visibility that is required to make something a consumer product. It appears to bear no relationship to how good an idea something is, or how well thought through the implementation is.
In this case: Apple releases smartwatch > smartwatches are a here and now tech ready to be used > vast numbers of consumers realise they've actually been sold a pup and shuffle away quietly.
I've long had a soft spot for 'smart' watches... being a sweet-spot age for the development of digital watches, and remembering the first LED watches that were like a brick on the arm and needed a button press to reveal the magical red robot numbers. Anyone who seriously thought that a computer with a tiny screen stuck on the end of your arm, that needs buttons pressing, or worse an accurate screen tap, could do anything truly useful, deserves all that they get. Anyone who then imagines that a charging routine that demands attention every day/night, and limitations on environmental factors (taking it off for a shower, to swim, etc.), are acceptable is truly living in la-la land.
I still have an original Casio Pro Trek titanium here. It's outlasted its usefulness without ever needing charging or winding. It doesn't move; it just sits on the windowsill, collecting sunlight, and refusing to stop working. If I was ever stranded on a desert island, that's the watch I'd want with me. I won't have it though, because it's way too awkward to wear these days. I have an original Pebble, and there was a lot about that which was good. Recharging every three to four days was a bit of a bind... but bonus marks for on-wrist charging, even if the weak magnets have a tendency to pull off when hammering at the keyboard while arguing on the internet. It did pretty much all I needed, particularly with PlexFit connected to Google Fit (I favour Google Fit because it's the only really non-proprietary solution in terms of collecting from multi-vendor fitness wearables, and I don't much care if they know how lazy I really am). Except - screen tearing. Pretty much every Pebble I've seen/tested/heard about develops some level of screen tear. In my case it's terminal and renders the damn thing unusable.
I've had a Mi Band Pulse for notifications and steps. Fit and forget, charge once a month. Very easy to wear. Unfortunately no screen, so no time, and it can be difficult to determine which vibrate patterns mean what. So the Mi Band 2 was worth a punt at £25. It notifies call, text, or "app" (which is programmable), doesn't do much in the way of extra function, but tells the time to the latest minute, and records steps continuously, with heart rate on demand from the band. Battery recharge every 14+ days at least. Survives football (I am in the habit of wrapping a neoprene wrist support around a wearable while playing, and won't wear an unbreakable strap for reasons of loss of wrist) and a shower.
I'd like seconds and a stopwatch/timer option (on wrist rather than via phone), and to be able to change "app" to a three letter code for the notifying application, but apart from that it pretty much meets my current best case usage scenario.
Certainly better than any "smartwatch" would.
"Makes me wonder if one could make a full system image (backup or third-party), upgrade and activate 10. Then restore the Windows 7 image (why not revert from 10?)"
My experience is that a machine with OEM SLIC activation gives you a bit of leeway. I have a fair stack of Dell T5500 and T3500 workstations that I have been upgrading. This is for clients who still run Windows 7-specific software, but wanted to bank the digital entitlement for later use.
Cloning the hard drives (using dd under *nix) and then running the update on the cloned drive activated fine. As did a drive cloned from one machine and used for digital activation in another machine - all using a Dell OEM Windows 7 activation. This meant I could clone in the first machine I upgraded, transfer the cloned drive to a new machine for upgrade, and gradually expand this across a number of machines.
Once the digital entitlement is active for a machine, you can re-install a clean Windows 7 (format the upgraded drive) and the machine activates with the correct Windows 10 version (Pro, in this case). These old Dell workstations are quite nice (great rock-solid boxes), because you can turn a drive on or off in the BIOS, so each machine is back running with the original Windows 7 setup, but with a clean, activated Windows 10 installation ready to be enabled via the BIOS. I've also done this process of cloning the drive with laptops and other machines, and it seems to work appropriately for qualifying versions of the previous Windows OS, for both Windows 10 Home and Pro.
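For anyone wondering what the dd stage amounts to, it's nothing cleverer than a raw block copy. Here's a minimal Python equivalent as a sketch, demonstrated on throwaway files rather than real /dev devices (the block size, filenames, and checksum verification are my illustrative choices, not anything Dell- or Windows-specific):

```python
import hashlib
import os
import tempfile

def clone(src_path, dst_path, block_size=4 * 1024 * 1024):
    """dd-style raw block copy: read fixed-size blocks from src, write
    them unchanged to dst, and return a checksum of everything copied."""
    digest = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            digest.update(block)
            dst.write(block)
    return digest.hexdigest()

def checksum(path, block_size=4 * 1024 * 1024):
    """Checksum a file in the same block-at-a-time fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Demo on a throwaway 1 MiB "disk" image instead of /dev/sdX:
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "source.img")
dst = os.path.join(tmp, "clone.img")
with open(src, "wb") as f:
    f.write(bytes(range(256)) * 4096)

copied = clone(src, dst)
assert copied == checksum(dst), "clone does not match source"
print("clone verified, sha256 prefix:", copied[:12])
```

The real thing is just something like `dd if=/dev/sdX of=/dev/sdY bs=4M` plus a verify pass; the point is that a clone is a bit-for-bit copy, which is why the upgraded install travels intact between drives.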
It would seem that it's a relatively flexible process, with Microsoft happier to have you in the fold, rather than deny you a license.
As for using Windows versus Linux - I started in the CP/M era. It constantly amazes me how intellectual property law has been used to take a slew of innovations that were previously shared for the greater good, and protect them so that they could be mined. Instead of the hardware being the product, the software now is. Microsoft are at the centre of monetising the software, and they want to be at the centre of monetising services. They are nothing like the best at any of it, but they have the most enterprise support, because they build in the most chargeable layers.
If you install clean, set all the customised settings to "off", and don't let Cortana search for anything... then Windows 10 works okay. It takes an age to boot and it's slow as hell at stuff, though. If you don't believe that, then put something like Cub Linux on the same machine instead.
I owe my working life to the crippling of users' capability. I'm amazed that some people appear to have only just noticed. This isn't a new Windows 10 thing.
DAB has, essentially, already failed. It is broadcast at too low a bit rate to appeal to hi-fi enthusiasts, and is too expensive (because essentially the UK is on its own in using it...). As a format it has little room to grow, and looks like a white elephant.
Someone should put it out of its misery and pilot the UK towards adopting a newer, high-compression format that the rest of the world actually uses. Something that can adequately replace FM, rather than being less compelling than it (from a quality audio perspective).
Strength and flexibility are not synonymous.
If something of the same material is more flexible, it is likely to be less strong. Different materials may have different characteristics, but are likely to have different drawbacks. For example, ceramic is very strong, but very brittle, whereas polythene is very flexible, but not very strong.
A flexible thing may bend and distort components underneath, causing damage. In comparison a harder thing may transfer shock into the case chassis, rather than through the components.
Or not, it depends on the design.
It may be that by design the iPad is indeed more resilient to damage, but that is not borne out by the contention of the article, which is misleading at best.
Thinner thing made of stuff, bends less than thicker thing made of stuff, is pretty scant material on which to base an article.
If you know that you want different partitions, you almost certainly know how to do it. If you want different partitions, it's entirely possible that you might want them spread over different types of drive (solid state or ramdisk, or RAID - mirrored, striped, or a combination - for example), different filesystems (an NTFS partition mounted in both Windows and Linux for shared data, perhaps), or for a range of other reasons (operating system testing, virtual machine storage... etc.).
The author should get over himself and his parochial self interest, and realise that the least complex setup is almost certainly the correct choice for a default installation for an inexperienced user.
Stick everything where it is easy to find and learn from. Those of us with special needs can sort ourselves out, thanks.
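For illustration only, the sort of mixed layout being described might end up as an `/etc/fstab` along these lines (every device name, mount point, and size here is hypothetical, not a recommendation):

```
# / on an SSD, /home on a RAID1 mirror, a shared NTFS data partition
# visible to both Windows and Linux, and a RAM-backed scratch area.
/dev/nvme0n1p2   /         ext4     defaults             0  1
/dev/md0         /home     ext4     defaults             0  2
/dev/sda5        /data     ntfs-3g  uid=1000,gid=1000    0  0
tmpfs            /scratch  tmpfs    size=2G,mode=1777    0  0
```

Which rather proves the point: anyone who needs a table like this already knows how to write it, and a default installer shouldn't make a novice confront it.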
Windows Phone 7 - really, who cares?
There was a time when HTC Windows Mobile phones were the power user's smartphone of choice. Highly configurable, with loads of software, to achieve oodles of high-end enterprise integration. Unfortunately the phone bits were never very good (in fact the Diamonds that I bought for our company were possibly the worst professional decision I've ever made in my career).
Windows Mobile has been left trailing in the dust by the iPhone and Android in the consumer space, by the better power management and resilience of the Symbian communicators (E72 etc.), and by the integrated enterprise goodness of the BlackBerrys - despite nicking all the best hardware early from HTC.
So they've decided to make Mobile 7 pretty, like Apple (we all know how they are likely to fare on that battleground), and to reduce the high levels of configurability and customisation of the old Windows Mobile (about the best thing the platform had going for it).
For what? Where will it sit in the marketplace? More expensive than Android, but less flexible. A pretty competitor to the iPhone, but without the market share and fanbois.
...if it had worked properly and been more flexible. As a unified method of dealing with interactive collaboration it is quite spectacular. Being able to combine different media, and to record the development of a wave and replay it, has a number of applications for which a solution does not currently exist in the real world.
Imagine a bulletin/message board with waves for threads/topics. Subsequent contributors can add/amend/insert material in the relevant place, without having to quote previous posts and append it at the end. It could be mixed media... and anyone joining the discussion could play the thread to catch up. Of course, to be able to do that it needed to be able to be hosted on one's own servers/hosting, to be integrated into a wrapper that is site-specific, and to allow a diverse range of enhanced admin options (for example, privileges for read/write/edit of existing waves, and control over who can publish new waves).
...but now think that you can use the same mechanism for private messages, or public messages to people unregistered at the site; you can also use it as a collaboration environment for writing white papers, or as a project management tool. Think of it as RTF for the internet age.
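That play-the-thread-to-catch-up property falls out naturally if a wave is stored as an append-only log of edit operations rather than as finished posts. A toy Python sketch of the idea (the names and the naive position-based inserts are mine; the real Wave protocol used operational transformation to reconcile concurrent edits, which this deliberately ignores):

```python
from dataclasses import dataclass, field

@dataclass
class Op:
    author: str
    position: int   # character offset at which to insert
    text: str

@dataclass
class Wave:
    """A wave as an append-only log of insert operations.

    Replaying the log from the start reproduces how the discussion
    evolved (the catch-up feature), while applying every op yields
    the current state of the document."""
    ops: list = field(default_factory=list)

    def insert(self, author, position, text):
        self.ops.append(Op(author, position, text))

    def render(self, upto=None):
        doc = ""
        for op in self.ops[:upto]:
            doc = doc[:op.position] + op.text + doc[op.position:]
        return doc

w = Wave()
w.insert("alice", 0, "Agenda: budget.")
w.insert("bob", 8, "venue, ")   # amended in place, no quoting needed

print(w.render(upto=1))   # a late joiner replaying step 1: "Agenda: budget."
print(w.render())         # current state: "Agenda: venue, budget."
```

Storing operations rather than snapshots is what makes insert-anywhere editing, replay, and per-op permissions all come out of the same mechanism.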
Google indicated that this would be possible, and tried to get developers on board to work towards such integration. Why didn't it happen? Well, it partly did; it's just that the technology was immature, the feature set outside the Google-hosted wave servers was substandard, and the Google-hosted waves lacked the community (of either developers or users) to gain momentum.
Wave isn't a fail, though. Get the technology right, integrate it into specific applications, and distribute it freely in a truly non-proprietary way, and I still believe it's a game changer.
The question is, who pays?
It's an Apple Newton moment. Absolutely brilliant idea, but fundamentally unusable in its current form. Look forward ten years, and the showcased concepts behind it will be ubiquitous.
I think the view expressed in this article is interesting, but then I would, because in practice I reached a similar conclusion.
For my MBA dissertation it was my intention to do a case study on the implementation of ERP within my organisation. This plan was somewhat compromised when, in the process of that implementation, and after a change in emphasis within the various business units that comprise our organisation, a decision was made to use specific dedicated tools that communicated across the functions, rather than integrating them into a whole.
From an implementation point of view, particularly from an IT perspective, it is 'easier' to deploy one comprehensive system. However, finding one system that is a good fit for all practices and operations within a diverse organisation is challenging. It often means compromise, customisation, and alteration of existing processes. This can undermine the advantages of an integrated system, making it cumbersome, inflexible, and hard to maintain. It is also likely that the overall cost of purchase and deployment will be high.
Choosing small, dedicated tools for specific processes means that a better fit for individual work processes can be found. This allows more 'out of the box' solutions, making maintenance and upgrades easier, and means no single failure renders all business functions inoperable. It also allows more readily for change within autonomous business units or processes. The additional effort required is in getting the diverse tools to pass the necessary information across processes.
From an IT perspective it requires a more complex infrastructure; nevertheless, I think it can benefit the performance of the business.
It might not have been the intended focus of my dissertation, but you may be relieved to know (as was I) that this conclusion at least allowed me to satisfactorily pass my degree.
Clearly it isn't the correct approach for everyone, but it certainly should be considered. The managers in our business units are happier, and have bought into the process more - quite simply because the business software they use is of their choice rather than an imposition from elsewhere. Unfortunately it's more onerous for me, but you don't run a business to please the IT function any more than you should rely on accountants for innovation.
I've been looking for a small, quiet PC that I can mount behind a monitor to use as a portable computer/media centre for display at various locations around my automated home. Initially it seemed that the lack of 1080p would be the only significant stumbling block for this machine. It seems, however, that the LinuxMCE (my solution of choice for integrating media services and home automation) people have it running 1080p comfortably.
If I could buy this cheaper, without XP or a hard drive, it would be perfect.
Biting the hand that feeds IT © 1998–2020