* Posts by Trevor_Pott

6991 publicly visible posts • joined 31 May 2010

The moment a computer crash nearly caused my car crash

Trevor_Pott Gold badge

@AC

According to the sticker they gave me, I was supposed to bring her in for servicing at 56,500 miles. My odo read 56,556 when I got it to the dealership. The trip from my house to the dealership is ~30 miles.

Is that "dangerously over?" If so, can you please give me a source for that?

As to "save a few $ on servicing," I don't understand where you would draw such an inference. I have never – ever – had my car serviced in any fashion where money was an issue. I hand my vehicle over to my mechanic with the explicit instructions “do what needs to be done, and don’t call for permission unless it goes over $5000.”

I have never had cause to believe that he would screw me for so much as a cent and thus no reason not to give him carte blanche to do whatever servicing he feels is necessary. He has 30+ years' experience with this make of car. I am absolutely and categorically not going to question his judgement regarding my vehicle.

He is the expert. He has the superior knowledge. My ability to Wikipedia up some information does not put me in a position to endanger myself or others by refusing the recommendations of the individual who is quite probably the best qualified person for 400km in any direction to tell me what needs to be done to my car.

So if you feel that adhering to the mileage he puts on the “visit me again in” sticker +/- about 50 miles is “dangerously over” then please list sources. I will discuss them at length with my mechanic.

Trevor_Pott Gold badge

@Vic

You know Vic, you don’t exactly have standing to be complaining about the rules of formal debate. You have several demonstrable flaws in your method, and a live-moderated forum is hardly the appropriate venue for the kind of semantic arguing that a truly proper formal debate gives rise to.

Your first logical fallacy is that, taken in aggregate, your posts consistently forward an appeal to authority; with yourself as the authority. In establishing yourself as an authority over – quite frankly – multiple areas of expertise, you offer neither primary sources nor credentials. Instead you offer whimsical anecdotes that are supposed to be “evidence” of your vast personal experience.

A few issues pop up with this. Experience does not equate to expertise. Oh, you can make it sound as though you *obviously* have the expertise because of tangentially related experience, but that can just as easily be verbal sleight-of-hand. Let’s use an example.

I have 25,000+ hours as a systems administrator for photography companies and photographic labs. Another 5,000+ hours supporting the image/video rendering industry. In that time, I have picked up enough knowledge to talk the talk, but I cannot walk the walk.

I could certainly sound like I knew a great deal about photography. Enough even to convince some experienced professional photographers. The truth of the matter is I neither know nor care about anything related to photography. I know about the bits of photography that relate to computers. The cameras and everything related to them might as well be voodoo.

Now, I can of course put together a line of bull and convince someone online I’m an expert in photography. My experience combined with ready access to Google and Wikipedia could make me photographer trollpants of the year. But it does not make me an expert in photography. It makes me (potentially) an expert in digital imagery processing, at best.

So your appeal to yourself as an authority on embedded systems, vehicles (and the galaxy of related topics in between) that you have oh-so-subtly claimed is denied. I don’t buy it, and your use of this argumentative strategy has lowered my opinion of you quickly and dramatically.

The second issue I have with your argumentation is your approach to statistics. You have evidenced issues that fall under the common heading of the ecological inference fallacy. You have made judgments about me based not upon examination of the individual, but examination of the group to which I belong. (People who drive across a set of tracks in a car that’s acting up.)

Now admittedly, your trespass into this area was not as deep as that of others. (Seriously, “trying to avoid a tow charge?” You people pay for towing? Don’t you have auto clubs?) But you still made (wrong) assumptions based upon the behaviours and actions of other individuals who are members of the same group, instead of examining the motivations and behaviour of the individual in question. Entirely apart from the reasons that this is a fallacy on its own (and I’ll leave you to Google that), it can – and often does – lead to another logical fallacy: the hasty generalisation.

The last one that I want to bring to your attention – and I am cutting this off at three rather than iterate the entire list due to comment length, not due to running out of violations – is the Ludic Fallacy. (This is a relatively new term, but Wikipedia does actually have a decent article on it: https://en.wikipedia.org/wiki/Ludic_fallacy.)

The heart of it is “using games to model RL.” While I have my disagreements about the exact representation and caveats typically assigned to the Ludic Fallacy, when extended to apply to both “using game theory to model RL” and “using statistics to model RL” it becomes somewhat more feature complete.

In short: statistics can inform us about the majority of cases. Game theory can do the same. But not all cases are identical, and – statistically speaking – there will always be a certain number of events within a given set that fall outside your 3 (or 5) sigma threshold.
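To put rough numbers on that tail claim, here is a quick sketch (stdlib Python, assuming a normal distribution, which real-world driving risk almost certainly does not follow exactly):

```python
# Back-of-the-envelope: what fraction of a normal distribution
# falls outside the +/- k-sigma band? (Pure stdlib; no SciPy.)
import math

def fraction_outside(k: float) -> float:
    # Two-tailed probability beyond k standard deviations
    return math.erfc(k / math.sqrt(2))

for k in (3, 5):
    print(f"outside {k} sigma: {fraction_outside(k):.2e}")
# outside 3 sigma: 2.70e-03  (~1 in 370 events)
# outside 5 sigma: 5.73e-07  (~1 in 1.7 million)
```

Rare, yes. Impossible, no. Somebody is always in the tail.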

I will not disagree with you for a fraction of a second if you say that “in most cases (or even in nearly all cases) when one is experiencing car troubles, one should not risk taking a car across any set of train tracks.” I would accept that judgment as correct based upon empirical data that you and I both accept as true.

This was empirical data I had in hand when I made the decision I made. I believed – based on circumstances as well as additional information about driving patterns, light timings, observations of neighbouring vehicles, etc. – that this was a situation falling outside at least 3 sigma, and possibly 5.

In other words, I did not take a statistically absolutist approach to risk management and analysed the situation itself rather than believing that the top of the bell curve can speak for all data points.

The particularities of this individual situation meant that it was actually *less risk* to try for the tracks than it was to sit there and get killed. There is zero – understand me literally zero – doubt in my mind that had I not made the choice I made, I would in fact currently be dead.

This is where the appeal to authority comes in. I have in fact talked to individuals I consider to be subject matter experts: friends and family who drive that road every day, including my insurance broker, two police officers and a professor of mathematics at the local university. Every one of them except one agrees wholeheartedly with my decision. (And he believes that I was “categorically insane to drive that road in the light piece of tinfoil [I] call a car in the first place.”)

I accept them as authorities in the matter, whereas I don’t accept you as same. You may be in possession of some statistical knowledge. You may even have programmed an embedded system or two in your time.

That does not make you a subject matter expert, nor does it make “the statistical norm” apply to every situation. (Which you should damn well know if you do program embedded systems. Otherwise, what the hell are you doing in charge of people’s lives?)

So sir, I find fault with your logic. I find fault with your style of argumentation and I find fault with your so-called “evidence.” I called you on it. If you choose to take that as an ad hominem, feel free.

But to be perfectly clear, the comment was directed at the coherence of your arguments. Arguments that were absolutist, positing assertions as fact and – given the number of your posts in this thread alone – amounting to an attempt at “accuracy by volume.” (And quite possibly “proof by intimidation,” given your abrasive tone.)

If you want to argue this further, please feel free to return to the article and select “email the author.” My free time is spent debating virtually every topic in existence on the Ars Technica forums with some of the best trolls on the planet. I would be entirely pleased to go 100 rounds with you about this, but there is zero reason to subject other commenters (and the poor moderator) to that much back and forth.

Trevor_Pott Gold badge

@AC

Going to have to agree to disagree here, mate. I will trust the judgement of others who live in my city and drive that road every day over that of random commenters on the Internet. The whole thing has been thrashed out at length locally amongst folks whose judgement I trust, and shockingly they do in fact disagree with armchair opinions from across the internets.

So I guess we’ll have to leave it at that.

Trevor_Pott Gold badge

Actually, when it lit back up, I had already put her in neutral, undone my seatbelt and had my hand on the door. You have to understand that pushing her across the tracks would have been simple. She's a light car, I can do it from the driver's side doorframe.

More to the point, the tracks are only 15m-ish worth of space. The other side is a slight downward sloping hill. I knew before I made the turn that I could have easily pushed her out of the way before the next train. Conversely, if I had tried to stop anywhere on that southbound road, I'd have been run over.

There was little (if any) real possibility of getting hit by the train before I could get the car out of there; it had just passed, another wouldn't be along for 8 minutes. But damned if you don't start freaking out when you realise you are stuck on train tracks.

All those perfectly logical cold calculations pale before the overwhelming power of adrenaline.

No, my issue here is that the computer should have let me know that it was taking over. It would have made a difference in how I reacted. Frankly, had I known it was the computer, I’d have turned it off, put it in neutral and pushed as soon as it died, rather than leaving it idle. The computer has control of the brakes. A potentially squirrely tranny doesn’t.

The only thing that mattered, really, was getting across the tracks to safety. Half a block to home free. And 8 minutes to get there.

Trevor_Pott Gold badge

The front wheel certainly could have spun out of control for a brief period while accelerating from a stop at a light. The car didn't provide any feedback to indicate this was happening. I am certain I would not have heard it; the heater was on full, and there were two enormous pimped out rig pig trucks with fart cannons on either side trying to race to the next red light.

It is a stupidly dangerous, frustrating road filled almost entirely with full-tonne trucks, SUVs and cars on large risers. The civic design is so poor that something like 70%+ of people speed dangerously to try to beat the endless series of poorly timed lights, often whipping through reds. There is an LRT going down the middle of the road, and the whole thing is the feeder to the freeway serving a pair of huge bedroom communities. Oh, and it was the first 20 minutes of rush hour.

So no, unless the car provided some form of tactile feedback, or an audible tone in the car, I’d never have known that the wheel had spun.

As to the “completely stopped responding” bit, that is an interesting one. See, the car was working just fine at 20kph for several blocks (as I *desperately* looked for a way to escape the death trap road I was on, which had big cement walls and no turn-out lane).

It wasn’t until I completed the left-hand turn (incidentally taking me across the tracks) that the thing went dead. But only for an instant…then the speedo freaked out for a second (again) and the car went right back to 20kph.

There is no exaggeration or hyperbole there. Not even about the terrifying, terrifying road. (I only ever even drive that road anymore because the other links are all under construction.) My issue indeed is emphatically not with the computer itself. I think it did the job it is supposed to do.

My issue is with the lack of feedback. That computer should have told me that it was overriding my inputs. Loudly. And I take serious issue with any designer or engineer who allowed the creation of such a system without informing the driver that this sort of thing was even possible. (It is not how the system normally behaves.)

Let’s take ABS as a great example of a well-done system. In every car I have ever driven, there is a universal signal (the pedal “pumping”) informing the driver that the computer is overriding user inputs. This is good. This is proper engineering. It is a universal feedback system and – critically – it informs the driver when it is in use.

The trac computer in this instance declined to do anything of the sort. Thus my philosophising. Are we – as a society – comfortable with this? Is the burden of knowing when a computer is overriding a human – as opposed to mechanical or other error – on the human? What happened to the principles of good design that did things like provide a feedback mechanism for ABS?

As to make/model of car, I decline to answer. The last thing the Internet needs is another holy war about car manufacturers.

Trevor_Pott Gold badge

@Intractable Potsherd

Ah, but you forget that Vic - as with many internet commenters - is armed with the invincible knowledge provided by ardent conviction in spite of things like "facts" or "on-scene assessment". It is no different, really, than the sort of armchair warfighting self-important twats get up to when reading about another soldier killed in Afghanistan.

Well *obviously* they should have *simply* done Y! There is simply no excuse for it! As someone who uses the Internet, they unquestionably know better than the soldiers on scene!

Same self-important logic, different scenario.

It is a simple case of binary thinking. Life is *obviously* either one or zero. There are no states in between. And they wonder when I believe that nerds should have their work reviewed. Nobody is immune to bias, snap judgements, or simply ignoring inconvenient truths because it makes the pre-existing logic of our prior prejudices and beliefs work out.

When that gets translated into code there exists the potential for problems. Anyone so utterly intractable in their thinking as to ardently and vociferously assess such a complicated situation with so very many variables in an absolutist and binary manner is making my point for me.

They will however never understand that.

Trevor_Pott Gold badge

@Martin Usher

Probably true. Though this is not the type of thing that goes through your mind in the moment.

Trevor_Pott Gold badge

@Vic

Well, the issue at hand is one of getting off the road without getting killed by the twits in the Escalades. Walking away from the car alone is probably riskier. Remember: the car was at this point working just fine at 20kph. There was no reason to believe that it would suddenly stop responding to input. (I believed it to be stuck in 1st; that is pretty much how it was behaving.)

20kph is certainly more than enough to get through an intersection, and frankly it was less risk in pretty much every possible way than trying to exit the vehicle at that intersection.

It’s really easy to pass judgement across the internet. Especially when you don’t have all the details of circumstance. It is another thing entirely to know all the details and have to make those choices on the fly.

At the end of the day, she made it where she needed to go, and I didn’t get massacred by the idiots screaming through the lights at 100kph. Again, there was no place to “leave the car.” Not without getting killed. So my choices here were “almost certainly get killed by either staying in the car with the blinkers on, or attempting to exit the car at that point” or “try to make it 15m across the tracks at 20kph.”

I stand by my choice, even in the face of judgement from random people on the Internet.

Trevor_Pott Gold badge

The mechanic

The mechanic in question is probably the best trained and qualified for this make/model of car in the city. He has over 30 years' experience as a mechanic with this make of car and is highly sought after. (He gets someone trying to headhunt him on a very regular basis.)

The chances that he has misdiagnosed this - and trust me, he looked at the sensors as well as ran his own tests - are so slim as to be nonexistent.

The other issue here is that no, the computer isn't really designed incorrectly at all. That was never the point of the article. The computer sensed something bizarre and did exactly what it should have (throttle things down) to prevent further damage etc. The design issues are about reporting what the computer is doing – and why – to the driver.

The computer did its job. The driver (me) had no idea that the computer was even involved here, let alone why it was doing what it was doing. There is the issue. The question is not one of “make the computer capable of handling every possible situation.” That isn’t possible. The question is “where does the burden lie in ensuring that ‘what is occurring and why’ is properly communicated to the user?”

Should the communication of that information be a required aspect of the design of the computer system? Should this be buried in a user manual or EULA and we simply wash our hands of it, telling the user it is their problem? Should the user have to know every possible operating and failure mode of the computer in their car? Should we offer training, or make training mandatory? Who pays for that training?

Those are important questions relating to design (and real engineering!) that are only going to become more important – not less – as our society becomes more computerised.

Trevor_Pott Gold badge

Aye

At the end of the day, I really wish my car had an error tone – and separate error lamp – for computer-related codes. Whether the code is mechanical, electrical or computer based, my vehicle has one lamp: maintenance required. A lamp that was already on because she needed an oil change.

Not only that, but I wish I knew my car’s computer could “fail” in this mode. I mean, I know how the thing works, but typically it will kick in for 1-2 seconds with a distinctive beeping. Apparently, it only does the beeping thing when it detects a certain kind of traction loss condition. (Beeping occurs when it applies brakes, not when it is “only” throttling down.)

Should I have known that? Hell yes! Why didn’t I? Nobody ever told me, and it wasn’t laid out in the manual. (At least not remotely so plainly.) Honestly, I wish they covered these issues in driver’s ed. I would gladly pay the money to take a course covering my model of car’s little quirks.

*shrug*

The entire thing was a learning experience. In hindsight, there is lots I should have done differently, many things I’d wish I’d known. But I really do think that there needs to be some serious consideration of “how much knowledge will the people using your device have? Where do they get that knowledge? How critical is that knowledge to the proper functioning of your design?”

And I’d like to meet the civil engineer who designed that intersection. I have questions about how he ever thought it was safe for someone whose car just went splork.

Additionally, for all the folks who are convinced that they'll "just know" when they have a wheel on a (likely very small) patch of black ice: get over yourselves. You won't. Ask any Canadian. The point of it being "black ice" is that you simply cannot see it or detect it beforehand, no matter how superman you think you are. If you have some bit of this stuff under a wheel at a red light, there is zero possibility you know that it is there.

That wheel spins out while the other doesn't? On my car, it doesn't make a different sound or provide any different feedback than "regular acceleration from stop." Indeed, it didn't "sit there for a while, then go." Nope, I pushed the pedal, she went forward. Didn't give me grief until the part of acceleration from stop where I should have surpassed 20kph.

So from a user feedback standpoint there simply was no way to know a fault had occurred other than erratic behaviour. It really isn’t a cut-and-dried situation. Thus the question: “who is to blame?” Some blame has to be mine…but how much? What should I have known, and when? Where should I have gone to learn what I needed to know? Where do those resources exist? Why didn’t I have a handy little paper detailing how I could access these vital information resources when I bought my car?

I don’t have solutions, or even a sure place to heap “blame.” Just a lot of questions, and some philosophising.

Oh, and a copy of that computer on order from an online retailer. So I can go over it in the lab with a bloody micrometer.

Trevor_Pott Gold badge

Fair question!

I normally drive the speed limit +/- 5%. -30% on a snowy day. But this is Edmonton. Our roads are covered in snow/ice 6 months of the year. Sometimes, a water main can break, and the roads become sheets of black ice that you can't see. Even at 40kph, if you are going down the hill to the underpass at 97th St and Yellowhead, you can (and almost every other week someone does) slam into the concrete pillars.

Unlike most folks, I am driving a fairly light, not-going-to-win-a-fight-with-a-concrete-pillar kind of car.

At 40kph, I'd still be dead.

That computer has prevented a few crashes, especially during my early years.

Trevor_Pott Gold badge

Rebooting only works

if you know it's a computer problem to begin with. :)

Trevor_Pott Gold badge

Aye

Thus the pulling in to the parking lot. Lots of snap judgements in the thread about what I should/should not have done. Fair enough; hindsight is 20/20. But the design of the road I was on meant that there *was* no place to "pull over and wait for a tow truck" excepting across those rail lines and in that parking lot.

For all intents and purposes, my car picked the worst and most deadly possible road in the city to try this. So yeah; tow truck was front of mind, primarily because I thought the tranny was done for. When I tried for the intersection with the tracks on it, it was with the number of collisions at that specific location in mind.

I couldn't see a train, and figured that even if I had to drop her in neutral and push her across the line, it was probably safer than putting the blinkers on and hoping I didn't get turned into confetti by an Escalade.

What isn’t in the article is that I called for a tow truck to meet me at the parking lot. I was asked to move my car to another stall as the one I was in was reserved. When I lit her up to move, she behaved fine. I cautiously drove around a bit more, determined she was handling okay, and cancelled the tow truck. The few remaining blocks to the mechanic weren’t OMG DEATH, so if she died on me, I could just use the blinkers.

Calculated risk.

Trevor_Pott Gold badge

Roadside assistance

They were summoned; but the only remotely safe place to stop was across those tracks. It's hard to describe without giving a thesis on the horrible design of this road/intersection, but suffice it to say that blinkers on in that intersection is more likely to get you killed than the train.

Now, stupid civic design, that's a whole other rant...

Trevor_Pott Gold badge

@Dick Pountain

To be fair, there is a great big "TRAC OFF" button that kills the computer. I seriously hope every car with a computer has one...

Trevor_Pott Gold badge

Agree 100%

Could we also throw in a course: "new cars, what they can do, how they can fail, things that are different than what you learned on?" A 2005 [make/model redacted] is a heck of a lot different than the 1986 Crown Vic Police Interceptor I learned on and had for a first car! :D

Even brilliant sysadmins need help plugging holes

Trevor_Pott Gold badge

@Vic

The free audits in question were from the ICO in the UK. My understanding is that these were paid for essentially by the taxpayer and were not "free audits" from some security company.

I believe there is a significant difference there. The ICO has some stake in ensuring that companies are indeed compliant. There isn't a conflict of interest to be worried about there; the more companies can prove compliance, the less work the ICO has to do, and the more likely consumers are to be protected. Win/win/win. Unless the business has something (shoddy security/accounting/whatever practices?) that they don't particularly want examined.

Virtualised storage: the perfect space-saving solution

Trevor_Pott Gold badge
Megaphone

RAID 5?

WHAT? Please read this: http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162

Then please, PLEASE, never use RAID 5 ever again. RAID 10 or RAID 6, minimum. Using RAID 5 is almost as unpardonable as using “RAID” 0.
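If you don't want to read the whole article, the gist is arithmetic you can check yourself. A back-of-the-envelope sketch in Python, assuming the commonly quoted consumer-drive rate of one unrecoverable read error (URE) per 10^14 bits read; real drives and array sizes vary:

```python
# Rough sketch of the arithmetic behind the "RAID 5 stops working"
# argument: during a rebuild, every remaining sector must be read.
URE_RATE = 1e-14          # unrecoverable errors per bit read (assumed)
DISK_TB = 2               # capacity of each surviving disk
DISKS_SURVIVING = 6       # e.g. a 7-disk RAID 5 after one failure

bits_to_read = DISKS_SURVIVING * DISK_TB * 1e12 * 8
p_clean_rebuild = (1 - URE_RATE) ** bits_to_read
print(f"chance of a rebuild with no URE: {p_clean_rebuild:.1%}")
# ~38% with these numbers: most rebuilds hit an unreadable sector,
# and with RAID 5 there is no redundancy left to recover it.
```

RAID 6 survives exactly that case, which is why it (or RAID 10) is the sane floor.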

I've seen RAID 50, 51, 60 and 61 becoming far more common for larger arrays, but this is really controller dependent.

Far more importantly, I wouldn’t trust a centralised filer that didn’t do synchronous block-level replication to a twinned partner device. In my environments I’ve moved entirely to “twinned” RAID 10 for high-demand data, and twinned RAID 6 for bulk storage. (Effectively RAID 110 and RAID 61.)

All of that needs to be borne in mind alongside the idea that RAID IS NOT BACKUP. You must always plan for array failure. Always.

Additionally, please bear in mind that a single filer can run multiple arrays. The failure of a single array on a filer does not mean the loss of all LUNs on that filer. Merely the loss of the LUNs on that array. (And frankly, if you are set up right, it should automatically fail over to the “twin” system at that point anyways.)

I agree that storage virtualisation isn’t a magic wand. You don’t solve all your problems simply by buying a filer from EMC or IBM.

Storage virtualisation is a good way to drive up utilisation, reduce disk usage and solve a host of many and varied problems. Overall, it is a more efficient – and probably quite a bit safer – way to do storage. But it does absolutely require careful planning and execution; otherwise it can indeed be quite the disaster.

Just like anything else in IT.

story gone

Trevor_Pott Gold badge
Pint

I, for one, welcome our bar-clicking overlords.

Usenet: Home of the cyclical - but eternal - "top post versus bottom post" wars. Amongst others. *shudder*

But as said above; all inputs need to be taken with a grain of salt. Commenters obviously have a voice, so do people who e-mail the author/editor. But clicking the ratings bar is another form of feedback as well, simply one that takes less time. All methods are open to abuse and bias, but they do provide feedback that can help mould writing style (amongst other things) to better suit the target audience.

As to "commenttards being sharper than bar clickers," I don't think there's any way to gauge that. Just as the many readers who are functionally voiceless (excepting by their hit counts) can't really be judged for any level of "sharpness" because of the low level of interactivity.

I am however going to go out on a limb here and say that the people who read El Reg are – on the whole – brighter bulbs than would be found in the average pack. I welcome any form of interaction with readers. Bar clicking may not take as much effort as commenting, it’s true. But rather than presume that bar clickers are somehow not quite up to commenttard snuff, I prefer to presume that they are simply too busy to engage in that sort of faffery.

They might just have real work to do. Much like I should be...

Trevor_Pott Gold badge
Pint

Titles: now optional!

The rating system is essentially "the voice of the readers." Combined with hit stats, it is how information gets back to the editors about who they should continue to employ. Someone vehemently hated, but who keeps the page views up is obviously worthwhile. As is someone beloved who does the same thing.

But what about that fuzzy middle? Does El Reg get its money's worth by keeping those of us who aren't the superstars around? Which articles can be pointed at as failures, which as successes? How do we as writers get feedback on what we’re doing right and what we’re doing wrong?

Ratings – should the readership become more generally aware of them – have the potential to be a useful tool. It takes the general reader a lot less time to “click the ratings button” than to wade through comments. (Thus hopefully more people will make use of it.)

The comments are a useful tool in getting some feedback, but the commenttard community is a fractional representation of the total readership. Beyond that, it takes a certain type of person to take time out of their day to comment once, let alone come back and check the thread for updates.

As with commenttards, those who click the ratings buttons are quite probably not a proper representative sample (from a statistical standpoint) of the general readership. It’s a self-selecting subset, and thus both feedback mechanisms have to be taken with a large bag of NaCl.

Despite that, anything that gets more feedback about the preferences and opinions of the broader readership is a fantastic idea. As a commenttard, I like the ability to poke a button and let the author know “good job, man” without having to comment. It is also a nice additional tool to help me learn to become a better writer.

Who’d have thought that I’d ever actually /like/ a web-2.0 anything? Your opinion of “feedback buttons” of all sorts changes dramatically when you start creating the content and not just consuming it…

The Register Guide on how to stay anonymous (part 3)

Trevor_Pott Gold badge

@Destroy All Monsters

IE9 + Active Directory makes for a beautifully manageable browser. I love it to bits.

But the world doesn't use grandpa computers for everything anymore. We've moved into a world in which heterogeneous computing is no longer something for closeted Linux nerds and the aforementioned "coloured pencil department."

So yes, IE9 + AD? Grand. But that doesn’t help me with Android, iOS, OS X, CentOS...

Thus the only path for the foreseeable future is multiple management tools. And that really, really sucks.

Trevor_Pott Gold badge
Linux

@Microphage

By far, the majority of active exploits on Windows 7 systems are browser plug-in based. Very few exploit holes in the operating system or browser itself. I think you are clinging to an outdated viewpoint here.

I am no fan of Microsoft's traditionally lax approaches to security...but credit where credit is due. Windows 7 is a good operating system. It has its flaws, but then again, so do all the competitors. OSX can be pwned by trojans, and gods know Linux sure can.

But all three operating systems suffer from the same two attack vectors: social engineering the user into doing something stupid...or browser plugins running amok. I am certain there /are/ operating-system vulnerabilities for each. There always are. But the point here is that a fully up-to-date Windows can still be made a very safe place to play.

I prefer the heightened awareness that a decade of Microsoft faceplanting has brought to security on PCs. People are /wary/ of things when they use Windows. They expect that behind every link is a bogeyman, that every attachment will nom their system.

It's better than the false sense of security you get from Linux or Mac. Hell, the Mac Sandbox is a trap! http://arstechnica.com/apple/news/2011/11/researchers-discover-mac-os-x-has-its-own-sandbox-security-hole.ars

I’m not trying to big up Microsoft here. I use CentOS most of the time, because MS are greedy bastards whose VDI licensing is absurd. I would not be surprised to learn that each line of Microsoft’s VDI licensing documents is written with the blood of kittens.

But honest credit where credit is due. Windows 7 is not Windows XP. And IE9 is not IE6. IE and Windows have come a long way. They aren’t quite “as secure” as Macs or Linux in every possible way…but they have an entire industry devoted to helping increase that security, and they don’t pass along a false sense of reassurance that gets their users pwned either.

As far as I can see, it's really six of one, half a dozen of the other. Application availability, compatibility and endpoint management are far more significant concerns to me than the theoretical vulnerability of an operating system or browser based on unproven assumptions and outdated prejudices.

And now...back to trying to build a CentOS install disc that uses XFCE as the default instead of Gnome...

Trevor_Pott Gold badge
Pint

Coloured pencil departments

I actually laughed until I cried. Thank you, sir. Thank you.

I owe you a pint of your favourite.

Trevor_Pott Gold badge

That's what I said!

And then I took a month to do some really in depth research for this article. And the hell of it is...

...Internet Explorer 8 actually /is/ a really secure browser. IE9 is more so. IE10 even more. Now, in the default out-of-the-box configuration, IE might as well be trying to protect you from rabid dogs by covering you in rancid meat.

But if you take the time to properly configure the thing, you find that there is a crazy number of important settings which can in fact make the browser very secure whilst still being actually usable for the end user.

It has come a /very/ long way since the days of IE6. Colour me impressed…and that’s hard to do. Especially with Microsoft. > 2 decades of futzing with their software had me more than a little jaded. But I was pleasantly surprised at how far IE has really come.

Ended up making a whole string of GPO changes in the organisations I manage as a result. Learn something new every day!
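To give a flavour of what those changes touch: IE's zone behaviour lives in per-zone registry keys that GPO manages for you. A minimal sketch (Windows + Python's stdlib winreg, current-user key only; in a domain you'd push this via GPO, and a real hardening pass covers far more than one value):

```python
# One of the per-zone IE settings normally managed via GPO.
# Action 1400 is "Active scripting"; zone 3 is the Internet zone;
# data 0 = allow, 1 = prompt, 3 = disallow.
import winreg

ZONE_INTERNET = (r"Software\Microsoft\Windows\CurrentVersion"
                 r"\Internet Settings\Zones\3")

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONE_INTERNET, 0,
                    winreg.KEY_SET_VALUE) as key:
    # Set Active scripting in the Internet zone to "prompt"
    winreg.SetValueEx(key, "1400", 0, winreg.REG_DWORD, 1)
```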

Trevor_Pott Gold badge

"force all outbound traffic through a proxy"

What happy fuzzy unicorn-filled love world do you live in where all corporate internet traffic occurs behind the perimeter firewall?

I want to live there.

Virtualisation: just a lot of extra software licences?

Trevor_Pott Gold badge

@Anonymous Coward

It's about $2,500 Canadian, for about $5,000 a server. Or about 20 physical servers. i.e. not that many boxes. Assuming you're talking Windows Server only, and not System Center, Exchange, etc...

Trevor_Pott Gold badge

@Eddy Ito

I sense the spirit of Simon in you.

Trevor_Pott Gold badge

@John Sanders

Both excellent points. Two additional cons to virtualisation are network and memory bandwidth saturation.

Trevor_Pott Gold badge

Patching and rebooting

Patching is a centralised operation. It's significantly easier in a virtualised environment because I have one application per operating system. I have to test whether that patch on that OS affects that application. I have no weird interactions between all these different applications to debug. One app per OS. Test and release centrally. That part is easy as pie.

Now, rebooting is again made easier by "one app per OS." Rebooting the OS reboots the infrastructure under ONE application. Just one! I don't tank the whole business with a single reboot, and I don't have to schedule reboots around 15 different departments. I call up the people who use the application in question and go "hey guys, I need to reboot the server for updates, mind if I do that tonight at 7:00pm?"

I get a yay/nay and move forward.

I can schedule and co-ordinate each application independently of the others, and that is a bloody GODSEND. You see, I work in a business where IT doesn't have the almighty word of God. We don't dictate when computers will be available. We work with the affected business units to ensure the best possible quality of service with the fewest possible interruptions.

That means worrying about things like downtime. It also has to bear in mind the real world, where we have telecommuting workers in the systems 24/7.

I cannot even conceive what it would take to coordinate a shutdown of the entire corporate infrastructure at any of the companies I oversee. A miracle, perhaps. Or 6 months' worth of proactive planning.

Virtualised and containerised environments make patching/rebooting EASIER. Yes there are more widgets to reboot, but you can do it without nearly as much angst or worry.

As to tracking and monitoring and securing a fleet of Windows servers, have you tried combinations of some or all of the following:

Active Directory

Novell Zenworks

Windows Server Update Services

Windows Intune

Microsoft System Center Suite

Nagios

Spiceworks

Puppet

++squillions of others

If managing a fleet of servers - physical, virtual or otherwise - to know "are they up, are they patched, are they infected" is a difficult chore for you, then you are doing it wrong. It's easy to do...and there are programs that let you do it for free.
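To illustrate how low the bar is, here is a trivial sketch of just the "are they up?" third in plain Python. Hostnames and ports are hypothetical, and the tools above do this (plus patch and malware state, alerting, history) properly:

```python
# Trivial "are they up?" fleet check: TCP reachability of a known
# service port per host. A placeholder for what Nagios et al. do well.
import socket

SERVERS = {"fileserver": 445, "web01": 80, "db01": 1433}  # hypothetical

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in SERVERS.items():
    print(f"{host}: {'UP' if is_up(host, port) else 'DOWN'}")
```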

Managing computers is EASY. Managing people (and budgets) is hard.

Trevor_Pott Gold badge
Linux

Amen.

And don't forget the byzantine restrictions on the use of Office in a VDI environment! Microsoft's approach to VDI is the best advertisement for Linux I've ever seen. I have lost count of the number of companies that have decided that VDI is necessary to make their business more flexible, but won't pay what Microsoft is asking.

Especially the "you have to have both Office and Windows licences for each endpoint that connects to the virtual desktop. Considering these folks could be connecting to a single desktop from a dozen or a hundred different devices (personal computers, laptops, mobiles, tablets, hotel systems, friends’ houses, etc.) this is just insane.

The kind of money they would have spent on that crap, they spend instead on retraining their staff for Libre Office with Zimbra or Gmail on CentOS/Mac. (And porting their documents. Some spreadsheets are a pig to port.) And they get rid of that god-awful ribbon bar! They’ve never been happier.

It’s funny, you know…10 years ago I was running Linux servers and Microsoft desktops everywhere. Today, I am running fleets of Windows servers front-ended by Linux, Mac and Android endpoints.

Bizarre how it all works out, eh?

Trevor_Pott Gold badge
Linux

Cost of Microsoft Virtualisation

I hate to break this to you, but if you have more than 10 Windows Server licences' worth of servers to virtualise, Microsoft's Hyper-V virtualisation (and associated management tools) provides the lowest virtualisation cost on the market. (Unless you are prepared to go KVM/Xen with no management tools but your own shell scripts.)

RHEL's Enterprise KVM is the best virtualisation/management cost item if you are not hosting a large number of Windows Server licences, and you don’t need more than the basic functionality. (Which, let’s face it, most of us don’t need anything more than is found in KVM.)

But if you are in fact hosting a pile of Windows Server-based VMs, then Windows Server Datacenter is in fact far cheaper than any other alternative.
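The maths is easy to sketch. These figures are hypothetical placeholders, not quotes; the point is the shape of the crossover, not the exact dollars:

```python
# Back-of-envelope licence maths with hypothetical list prices.
# A Datacenter-style licence covers unlimited Windows guests per host;
# per-VM licensing pays for every guest separately.
DATACENTER_PER_HOST = 5000   # hypothetical: covers all guests on a host
STANDARD_PER_VM = 900        # hypothetical: one licence per guest

for vms_per_host in (4, 8, 16, 32):
    per_vm_total = STANDARD_PER_VM * vms_per_host
    print(f"{vms_per_host:2d} VMs/host: per-VM ${per_vm_total:>6}"
          f" vs Datacenter ${DATACENTER_PER_HOST}")
# Past the crossover (~6 VMs/host with these numbers), the
# unlimited-guest licence wins -- which is why dense
# Windows-on-Windows hosts favour Datacenter.
```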

I say this as someone who has done the research on behalf of a number of companies that desperately need virtualisation, yet can’t even /dream/ of paying the kind of money VMWare asks.

Every saved dollar is critical, but you have to look at the total cost. The cost of the virtualisation platform, the cost of the licences you need, and the cost of nerd-hours to keep the lights on. Microsoft are actually quite competitive in the server virtualisation racket.

VDI is another matter entirely, and I have nothing but seething, bubbling rage for Microsoft regarding desktop (and office) virtual licensing. It’s why my VDI is CentOS + XRDP + Libre Office. And the devil take the first user who cries “Windows.”

The Register Guide on how to stay anonymous (part 2)

Trevor_Pott Gold badge
Pint

You are correct.

I was wrong. That is a tidbit of information I should know, but which has passed out of my brain as 95% of the websites I go to are either whitelisted, or don't run scripts.

I apologise.

Can we get a beer here?

Trevor_Pott Gold badge
Happy

But Miiiiiiiiinecraft!

Wait, you use in-browser Java to play Minecraft? Why not just download it? It's safer. That way, if an update trundles along that breaks everything (because that NEVER happens in Minecraft...) you can play the old, downloaded version until Notch caves and fixes whatever new features made everything go boom.

*Block*

*BlockBlockBlockBlockBlockBlockBlockBlockBlockBlockBlockBlockBlockBlock*

*Block*

Trevor_Pott Gold badge

@Destroy All Monsters

Is it, really? I thought the point of browser-side Java was to provide me an applet that performed a service. Either it was a game, or it was a file browser, or some other useful widget. The browser-side Java something-or-other should be obvious. Something visible that provides the user a benefit.

What it shouldn’t be is a 1px dot somewhere under a div, hidden away, whose sole purpose is to plant a cookie, read files off your PC, or drop malware via an exploit.

Browser-side Java can be a good thing. In the real world, however, the bad guys are using it a heck of a lot more than the good guys. What’s worse, when the good guys do use it, they are typically slow to update, requiring older versions of Java (with known exploits!) if you want to use that one critical Java app on that one website.

Any browser plug-in, be it Java, Flash or Silverlight, should be obvious when in use. It should ask the user “do you want to use this third-party software that may screw up your computer, kick your dog and end the world as we know it?” It should warn you each and every time it kicks in if your version of the plug-in is out of date.

Browsers – by and large – are secure. Yes, there are exploits discovered for each of the main browsers every year. But far more are discovered – and regularly exploited – for these “common browser extensions” than for the underlying browsers themselves.

Browser extensions should never be activated without your knowledge. Especially risky ones like Java that have a great deal of access to your PC. The idea of browser-side java is to provide the end-user with a useful applet that does something the user wants it to do.

Not to operate – ever – without the user’s knowledge.

Trevor_Pott Gold badge

Download a new android keyboard now!

Re: previous post: through = thorough. Autocorrect is my nemesis.

As a follow-up to the previous discussion: it seems like this behaviour goes away if you disable Flash's ability to tinker with mic/webcam via the Flash settings page. Under some circumstances. If that setting isn't properly secured, then the physical mute button seems to be something Flash can block no matter the active window context.

With the mic/webcam setting properly secured, Flash still seems able to block the physical mute button, but only when that specific window is active. Whether or not the tab in question has to be the active tab seems to depend on the individual browser’s sandboxing capabilities.

Later versions of Firefox for example can be set up to launch a new sandbox every few tabs. So the behaviour seems weird and inconsistent, but there is an underlying logic to the whole thing.

Assuming, of course, that there was ever any logic to Flash being able to prevent you from using the physical mute (or volume up/down) buttons in the first place.

Note: tested only under Windows XP and 7. I have not tested under Windows 8, Android or Linux.

Trevor_Pott Gold badge

Dave 126

I can independently verify this. Worrying, to say the least. More through lab time is required to figure out which combinations of browser/Flash/OS are affected.

Trevor_Pott Gold badge
Pint

Vulture Icon

Could be a lot of reasons for that. My account was originally created way back before I ever started writing for El Reg...could be that somewhere in the CMS it's flagged as "old-fashioned commenttard" instead of "user group that has access to El Reg icon." It could be they reserve it for some subset of writers that I don't belong to? (I am a freelancer, not a staffer.) It could just as easily be as simple as "I've never asked."

In short: I have no idea. It’s all good though; I’m a commenttard first, and a writer second. Why should I have a different icon than all my other commenttard brethren?

Regarding CCleaner: don't lose hope! The folks behind CCleaner do a great job of trying to keep up with the times. They have made significant efforts specifically relating to the evercookie before, and I suspect that they will come through for us in the future. It takes time, research and effort to keep up with the kind of scum who use evercookies. The kind of effort that sometimes seems like a legitimate parallel to malware research.

Maybe we should be asking Microsoft/AVG/Kaspersky/Symantec/etc. to step up and add it to their antimalware products.

Trevor_Pott Gold badge

Another tidbit about Java

It can store files anywhere its running user context can store them. So how exactly do you build an evercookie killer when the Java storage component can be anywhere? (Each site could tuck a file away in a new place.) Not good.
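To see why that breaks cleaners, consider what a generic sweep would have to look like. A minimal sketch, with hypothetical everything; a real tool needs allowlists and per-application knowledge, not a blanket modified-time scan:

```python
# Why "can be anywhere" is a problem: the only generic countermeasure
# is sweeping everything the browsing user can write to for files
# touched during a session -- which flags vast amounts of legit data.
import os, time
from pathlib import Path

PROFILE = Path.home()                  # everything the user can write to
SESSION_START = time.time() - 3600     # e.g. the last hour's browsing

suspects = []
for root, _dirs, files in os.walk(PROFILE):
    for name in files:
        p = Path(root) / name
        try:
            if p.stat().st_mtime >= SESSION_START:
                suspects.append(p)
        except OSError:
            pass  # files vanish or deny access mid-walk; skip them

print(f"{len(suspects)} files written this session -- any could be the cookie")
```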

Trevor_Pott Gold badge

CCleaner

I spent 4 hours trying to get a straight answer myself. Short version: nope, CCleaner doesn't kill them. Apparently, in addition to the methods talked about in this article - which I wrote about a month ago - a couple new HTML5 methods have come into play which CCleaner doesn't kill.

Unless browser makers seriously change their lax attitude towards the issue, HTML5 will be the death of individual privacy on the Internet.

Trevor_Pott Gold badge

Java

A fully up-to-date Java probably isn't a much bigger threat than an up-to-date Flash. But the problem is that Java is often not updated! It is usually less up-to-date than Flash, in many cases due to compatibility with critical applications.

During 2010, Java exploits skyrocketed. Many researchers claimed they had in fact surpassed Flash's astounding list of vulnerabilities. Sandboxed (in theory) or not, Java has become THE way to use an exploit to drop malware on a Windows PC.

If there is enough interest, I'd be happy to write a quick summary article talking about the research. Suffice it to say that no, I'm not a paid shill for anyone here - why would Microsoft, aspiring cloudmonger extraordinaire, hire a sysadmin? They are busy spending billions trying to ensure my kind are completely unnecessary!

I am willing to bet that Silverlight, Flash and Java have a roughly equal number of bugs with roughly equal severity per line of code. The issues that lead to their risk level are a combination of distribution and patching.

As discussed above, Java is the one least updated amongst the bunch, and who really cares about malware on the PCs of both Silverlight users? As near as I can tell, Silverlight is used for Microsoft properties, a small handful of other websites displaying video, and storing evercookies.

So no, I’m not promoting or demoting one technology over another here. I am saying that allowing Flash/Java/Silverlight (or anything remotely similar) to simply launch on any website they feel like launching is bad. Not only is it a privacy issue, but it is a security hole by which fun things can take over your PC.

Use a plug-in to force them to ask for each and every website whether or not they have permission to execute.

If you don’t believe me about Java, then there is a safe, simple experiment you can run. Set the Java console to enabled at startup. (http://download.oracle.com/javase/1.5.0/docs/guide/deployment/deployment-guide/console.html) Every time Java is launched from a webpage, a popup will appear that shows you “hey, something is activating Java…and this is what it is doing.”
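If you'd rather flip the switch in a config file than dig through the control panel, the deployment guide linked above documents a deployment.properties setting (file location varies by OS and JRE version; this is the per-user file):

```
# deployment.properties -- per-user Java deployment configuration
# (e.g. ~/.java/deployment/ on Linux; under the Sun\Java\Deployment
# folder in the user profile on Windows)
# SHOW = console opens whenever the browser JRE starts
deployment.console.startup.mode=SHOW
```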

Browse around the web for a few months with it on. You’ll be quite surprised how many websites use Java that you didn’t know about, and how many are trying to exploit vulnerabilities. The largest problem behind Java isn't that it is vulnerable, it is that nobody seems to realise how broken it really is.

We all defend against Flash. It's time to do the same against Java.

The Register Guide on how to stay anonymous (part 1)

Trevor_Pott Gold badge

@Irish Donkey

Sadly, that is largely wishful thinking. Keep an eye out for Part 2. That article will cover why this is so.

An ode to rent-a-nerds and cable monkeys

Trevor_Pott Gold badge

Childhood?

Not nearly so interesting as the cataclysm that was my parents' divorce!

My upbringing did however leave me a largely introspective person. If nothing else, I feel I am somewhat rare amongst IT folks because I often take time out to look at myself, my behaviours, and how I am communicating with others, and try to find ways to improve. My parents were big believers that any and every personality issue could be managed and overcome with adequate willpower, patience and education.

I question the real-world applicability of that, excepting that said introspection periodically spawns an article such as this one. (Or that Anonymous feature of mine. That was entirely borne out of a bit of introspection and a desire to understand how various internet nerds tick.) If nothing else, they attract a reasonable number of comments…

Trevor_Pott Gold badge
Pint

Derogatory affection.

Oh, commenttards...

Trevor_Pott Gold badge

@OzBob

Nail --> head.

Communication between various tentacles of IT is often terrible. Gods know, I'm no saint; I shoulder my share of blame here too. Indeed; my CEO and I discuss it regularly. It's really interesting to me how we can all get "the wrong impression" of someone very easily.

There are people in this world who think that I exist only to prevent them from doing what they want. It's not true - I'd love to help them, because helping people gives me the warm fuzzies - but I often have to do that very mean thing and ask them to clarify their request.

"It's broken" does not tell me what I need to know to start solving a problem.

“I didn’t do anything!” is yet another common statement that is entirely unhelpful.

By and large however, if you come to me and are willing to talk about your issue – rather than make sweeping demands without being willing to listen to potential niggles in your Grand Plan – I will do my damnedest to help you. Even to the point of working 16-18 hours a day, 7 days a week.

The problem is that when someone comes to me with “I need A, B and C” and I return to them with “this causes issues with X, Y and Z: for me to move ahead we have to have a discussion with the stakeholders for those areas before I can do anything,” it can all go badly. I am sure some of it is presentation – I am working on improving soft skills! – but some of it is the individual involved as well.

I find that business/accounting types are able to sit down with me and discuss the issues surrounding their request. They have a problem; in order to effect a resolution, you require these resources and/or these other problems solved first. They seem able to deal with this pretty transparently.

IT nerds however – and for some reason sales/marketing people – don’t seem as universally accepting. For reasons I fail to comprehend, a significant percentage of those professions take any roadblocks to solving their issues personally.

I have actually been accused of trying to sabotage someone because I refused to install a pirated copy of Photoshop on their computer, insisting instead that we get funding clearance from the beancounters. (I don’t have a software budget of my own; I cannot dip into any funds of any kind to “make this happen.” Software that expensive absolutely must be cleared by the brass.)

Communication is huge. Huger than huge. It is the number one reason behind project (and business) failure. The hell of it is we all seem to think that we’re not the problem, that we have nothing more to learn in the “how to communicate” department.

Makes for interesting articles, though!

Trevor_Pott Gold badge

Guys, give me a little credit here, please. I am not talking about getting push back from developers in instances where you approach them with vague requests for features that don’t have a usage case. I am not talking about asking for features that are in the application already. (Step 1: peruse the helpfile/manual. Step 2: Ask on the forums. Step 3: e-mail dev and ask if feature exists.)

Let’s put some concrete examples on the table here.

Example 1) So I have a point-of-sales system that is trying hard to evolve into a “manage all the things” kind of application. CRM, ERP, inventory management, accounting, etc. Our company has been using it for ~20 years, and is one of the largest deployments. We have paid for features in the past, driving functionality changes that were long overdue.

We pay our bills on time, and we pay extra when we have a feature request. We have a dedicated on-site body just to deal with this application; he knows the ins and outs, he knows the developers, and he knows enough about databases, programming and systems administration to properly convey “what we want to do” into the appropriate dialect of nerd.

For business reasons, we changed our payment provider. The extant payment provider was costing us a great deal of money, and wouldn’t allow for many advanced features like online payment integration, etc. This was a long process that took the company months and tens of thousands of dollars.

A year later, we approached the developer about the possibility of integrating the point-of-sales system with that payment provider. We had never done payments directly from within the software before, but were hoping to begin such. We offered a Big Bag Of Money, provided a gigantic pile of API links, and even put them in touch with the geeks from the payment provider who were eager and willing to work with the POS devs to get this done.

The POS devs flatly refused. “We integrate with [the payment provider we binned a year ago.] If you want to use a payment provider from inside of our software, use them. We don’t care why you don’t like them, change your business around because we have zero intention of ever providing support for another payment provider.”

I was *floored*. I brought it back to my CEO who then escalated this within the dev’s company. No dice: the response was “adapt to us, we will not add this feature for you.”

So the Big Bag Of Money is going into a pot that will hopefully soon allow us to ditch this application, and move our POS/accounting/etc. package to one provided by a better developer. (And no, the developer didn’t come back to us with “give me more money.” Money wasn’t the issue. They just weren’t going to do it, no matter what.)

Example 2) An industry specific application was developed with a very narrow usage case in mind. Essentially: it was coded for the desktop environment of the developer in question, with the idea that end users would always be single entities. The database and its attendant front end were single-user, the netcode so bad that nothing you could ever do would cause the thing to read files over the network faster than ~2Mbit.

Almost immediately, people with more than one body that had to use this program ran into problems. Multiple people needed to be able to access the database from multiple PCs. Businesses larger than “the one man shop” needed to be able to make use of division of labour such that one body could be dedicated to order tracking, one to reordering, one to database input, etc.

The developer was offered a Big Bag Of Money. The dev was given concrete, specific requirements, including links to APIs and relevant libraries (and the licensing requirements of each); they were given specific usage cases and everything that I, as a developer myself, could ever have asked for to “do the job right.”

The developer’s response – and I kid you not – was “change your business practices.” The developer said that he just wasn’t going to support that kind of environment. If you had more work than one person could handle, simply divide it in two. Each person with a completely isolated, dedicated instance on a dedicated box.

Individual A would be responsible for records on system A, and individual B would be responsible for records on system B. If you wanted to get hold of information contained in A, then you would have to talk to the individual responsible. They would do absolutely everything regarding records in that database.

Absolutely, incomprehensibly, world-endingly INSANE.

That sort of ethos wasn’t acceptable in the 90s, let alone in today’s massively interconnected world! But, it is quite literally the only piece of software that does anything remotely like it, so the developer gets to do whatever they want.

What did I – the dirty network admin – do? I set up a virtual server with a dozen VMs, put an instance inside each of them, then created a middleware website that tracked which VM contained information on which record. I then ensured that anyone in my client’s organization could RDP into those VMs in order to look up/change info. It wasn’t pretty, but the end result was a multi-user environment.
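
For the curious, the middleware was roughly this shape. Here is a minimal sketch in Python/Flask – the hostnames, record-ID ranges and the single lookup route are all invented for illustration, and the real site also handled authentication and record edits, which I’ve left out:

```python
# Hypothetical sketch of the record-to-VM lookup middleware described above.
# Hostnames and record-ID ranges are invented; the real thing also handled
# authentication and record updates.
from flask import Flask, jsonify

app = Flask(__name__)

# Which VM owns which range of record IDs. In production this table would
# live in a small database rather than being hard-coded.
VM_MAP = [
    (0,     9999,  "vm-pos-01.internal.example"),
    (10000, 19999, "vm-pos-02.internal.example"),
    (20000, 29999, "vm-pos-03.internal.example"),
]

@app.route("/records/<int:record_id>")
def locate(record_id):
    """Return the VM (i.e. the RDP target) that holds this record."""
    for low, high, host in VM_MAP:
        if low <= record_id <= high:
            return jsonify({"record": record_id, "rdp_target": host})
    return jsonify({"error": "no VM owns this record"}), 404

if __name__ == "__main__":
    app.run(port=8080)
```

Staff hit the lookup page, got told which VM to RDP into, and carried on from there.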

The developer was *NOT AMUSED*. Again, this – like most instances – wasn’t a matter of “not offering enough money.” The developers simply won’t budge, no matter what is offered. In this case, there were multiple companies willing to pool money and resources to Make This Happen. Money WOULD BE FOUND if that were the only issue: without this software, the job simply couldn’t be done.

In the end, this group of companies put their money towards hiring a pile of their own developers, and they have not only written a new application that does everything $stubborn_developer’s app did, but have very nearly met all the feature requests of the group of sponsoring companies.

So stepping back from the hypotheticals and the “well, users and their specifications are never good” and all the developer horror stories, here are a pair of concrete examples of why devs have gotten under my skin for my entire career. I have more. Many more.

I have done development involving scope creep. I have gotten requests for features that already exist. What I don’t get is a developer denying a feature request that has a solid usage case and is legitimately backed by a Big Bag Of Money. Developers who say “alter your business practices to suit our software” are, IMHO, the worst of the worst that IT has to offer.

They are right up there with a network admin who flat out states “it cannot be the network” and doesn’t even check.

There are good devs, I am sure of it. The guys at Plex are a beautiful example of developers with the perfect attitude. But they are, in my personal experience, rare. I recognize that my personal experience does not necessarily reflect the real world – I haven’t experienced all of it, have I? – which is why I am trying to overcome my prejudices rather than caving in to them.

I am sure you all have horror stories of your own that have caused you to become prejudiced in one way or another. Have you never found it hard to look past those frustrations? Am I alone in having to work to overcome the sum of my past bad experiences with $class_of_IT_geek? The often divisive nature of inter-disciplinary rivalries in IT would seem to suggest otherwise…

Trevor_Pott Gold badge

@David Haig

I have done development myself. Indeed, I am still lead developer on three different projects. The difference between a developer I can work with and one I can't is all in the attitude.

If I approach a developer and say "I have here a Big Bag Of Money, and I need the following features/changes made within X time frame" the developer should say "yes I can do that" or "no I need more time."

What the developer should most emphatically *NOT* say is "why would you need that? Obviously you are doing everything wrong because you don't understand the workflow we are trying to accomplish here!" I understand just fine. Lack of developer clue here is stemming from the inability to grasp that "one size does not fit all."

So as much as I am trying to get over my prejudice towards developers, I keep running up against developer obstinacy at every turn. I am sick of having to code custom middleware because some developer is too damned proud to accept that their design doesn’t meet everyone’s needs.

This is made infinitely worse when dealing with developers who are sitting on top of /the only piece of industry-specific software that does anything close to the job required./ Then they are in the position to really tell everyone to bugger off: there simply isn’t an alternative to their offering on the market.

So you build hacks. And kludges. And stacks and stacks of custom middleware.

AND THEN…the developer realises that you violated their pride by going ahead and creating a tool to do the thing they explicitly refused to do. So they change how their software works, and you have to go back and rewrite some chunk of your middleware. (Especially bad in this day of hosted SaaSy apps and automatic/forced updates keyed to some “call home” online DRM.)

So you end up in some vicious battle, endlessly altering your application to deal with some stubborn dev, when – quite frankly – the entire thing should have been settled by an appropriately large bag of money right at the outset.

The worst of the worst slime are the devs who get themselves into a niche, then try to build a “stack” of software on top of it. Some critical application that your entire business – hell, your entire industry – couldn’t live without is created, and it’s reasonably good. The catch is that it won’t import/export data, or that feature requests are made into entirely separate products that tie into the core product.

So you end up with some Redmondian super-stack of software costing you *just* a little bit less per year than it would cost to hire enough devs to completely rewrite the software on your own, zero input into the development cycle, zero hope of ever breaking free of the lock-in, and zero trust that the vendor won’t turn around and Oracle the prices on you.

But I am trying not to be bitter. I don’t have time for prejudice. I just have to get the work done, and provide my clients with computer systems that do what they want them to do.

No matter how many square pegs need to be bodged into round holes to do it.

Trevor_Pott Gold badge

WE fix things. THEY break things. Period.

I don't know about you, but if I am being honest, then I break things just as often as any other user. Most especially when I am trying to implement $project for $manager or trying to fix $bad_programming_bug in $ancient_program.

Testbeds are fine and good, but when you put things into production they go boom, almost without fail. Sysadmins are users too.

If you want an "us versus them," I am still trying very hard (every single day) to overcome my burning hatred and inbuilt prejudice against programmers. You see, my job is all about providing my clients (and their users) with CHOICE. Freedom to do things in a new and innovative way. The flexibility to use a different business process and workflow to accomplish a task.

This leads to efficiencies that give that business an edge over competitors. They make money. I get paid.

Programmers however are a completely different breed. They write a program with a very narrow usage case in mind. It is to work this way, on this kind of system, with these inputs and those outputs and never shall anything else ever be considered AMEN.

So 99.99999999% of my job involves trying to invent a widget or a thingamabob that converts “how people actually work” into “how the programmer demands that you work,” and then doing the same to the outputs – a sketch of one such shim is below. Contacting programmers with feature requests is utterly bloody pointless, because they have no intention of helping you “kill their children” by approaching the task with a different workflow.

(I ask you: if you and your competition all use the same programs in the exact same way, what is there to differentiate you from them? Why should a potential client choose you over them? What edge do you bring if everyone is doing everything exactly the same way? http://dilbert.com/strips/comic/2008-09-03/)
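
To make the shim idea concrete: the pattern is almost always “translate what the humans produce into what the program demands, then translate the outputs back.” Here’s a toy version of the input side in Python – the column names, the vendor’s field rules and the drop-directory mechanism are all invented for illustration:

```python
# Toy input-side shim: staff work in a plain CSV; the (hypothetical) vendor
# app will only ingest records in its own rigid format. All field names and
# formatting rules here are invented for illustration.
import csv
import json
import os

def staff_csv_to_vendor(rows):
    """Map the columns staff actually use onto the vendor's required keys."""
    for row in rows:
        yield {
            "CUST_NAME": row["customer"].upper(),      # vendor demands uppercase
            "ORDER_QTY": int(row["qty"]),
            "SKU_CODE":  row["sku"].replace("-", ""),  # vendor forbids dashes
        }

with open("orders.csv", newline="") as f:
    vendor_records = list(staff_csv_to_vendor(csv.DictReader(f)))

# Pretend the vendor app polls a drop directory for JSON batches.
os.makedirs("vendor_inbox", exist_ok=True)
with open(os.path.join("vendor_inbox", "batch.json"), "w") as f:
    json.dump(vendor_records, f, indent=2)
```

Multiply that by every input and every output the application has, and you have a fair picture of my working week.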

If I am being honest, there is far more animosity in my life (that I am putting real effort into overcoming) towards developers than there ever will be towards users. Users break things, sure. But in my experience only 5-10% of them are so stubborn that they don’t learn from the experience.

Conversely, only 5-10% of developers I have ever interacted with have been anything excepting “stubborn to the point of aggravating uselessness.”

But I am working to grow beyond that prejudice. Working. Very. Hard.

Trevor_Pott Gold badge
Pint

Trololololololololo....

If people pay you just to type things onto internet fora/do some copypasta, then I should be able to retire off of what Ars Technica owes me! Oh, to be paid per word for forum posts...

Pano's virtual desktops go from zero to hero

Trevor_Pott Gold badge
Pint

It's been so long since I have worked with either.

When I last worked on either of those technologies, it was late 80s, early 90s. I was somewhere in the 10-15 years old range, building robots, sensor packages, thermostats and displays. (Oh, and lasers. A lot of work with lasers back then.)

FPGAs weren't really a THING then. They were JUST coming into the mainstream about the time I got out of Deep Hardware and moved on to networking. Back then, what we would call an FPGA today was essentially a primitive PLC. (Well, not quite: FPGAs are more akin to a roll-your-own logic-gate factory in a box, whereas PLCs were just firmware computers that could use – but didn't require – things like RAM.)
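
If a software analogy helps: an FPGA cell is basically a lookup table, and the configuration you load decides what gate it behaves as. This little Python toy is purely illustrative – real FPGA work is done in an HDL, not Python – but it shows the idea:

```python
# Crude software picture of why an FPGA is a "logic-gate factory in a box":
# each cell is just a lookup table (LUT), and the configuration you load
# decides what gate it becomes. Purely illustrative, not real HDL.

def make_lut(truth_table):
    """Return a 2-input 'gate' whose behaviour is the loaded truth table."""
    def gate(a, b):
        return truth_table[(a << 1) | b]
    return gate

# "Configuring the fabric": the same cell becomes whatever we program it as.
AND = make_lut([0, 0, 0, 1])
OR  = make_lut([0, 1, 1, 1])
XOR = make_lut([0, 1, 1, 0])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```

A PLC, by contrast, is a fixed computer running firmware: you change the program, not the logic fabric itself.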

I guess my foot-in-mouth here should serve as a great example of what happens when the tech moves on but your experience with it doesn’t. I just ordered a bunch of sample FPGA kits, and I am going to build me some robots in the lab, just so that I actually learn something from this incident. If you're going to embarrass yourself on the internet, at least take the time afterwards to learn more about the thing you got wrong.

Trevor_Pott Gold badge

Aye

An FPGA is more accurate, you are correct. The widget I am describing is indeed an FPGA. (I do make mistakes!) I apologise.

The whole of the Pano device, however, is a bit hard to define without thinking of PLCs: it is a dedicated device that takes a fixed input type and delivers a fixed output type. It has very near real-time requirements on that, but no real extensibility beyond it.

Indeed, the Pano devices remind me a lot of the mid-80s PLCs, with the exception that instead of controlling a robot arm or some such, it’s taking a packet from the NIC and doing something slightly more interesting with it. (Remember EEPROM?)

I remember putting a lot of those out where the configs were tested, loaded and then locked down so that they could never, ever be changed. Many of the things are probably still in service, converting the inputs they receive into the appropriate outputs.

Modern PLCs of course can be – and often are – hugely more complex. But the old ones – from the 80s – are the closest analogy I can build.

You can’t reprogram a Pano device to speak a different protocol. You can’t patch or update them. It’s not necessary. All they are designed to do is take the information they receive from the NIC, convert it for display, then take input from mouse/keyboard and send it back. It’s elegant, simple.

Just like those old EEPROM logic boards of yore.