Complex automation won't make fleshbags obsolete, not when the end result is this dumb

Somewhere in the second hour of sorting through a handful of travel reservations that had been added to my calendar, I started to suspect I'd been lied to – by a computer. Searching in vain for a ticket that I seemed to have received – by the entries in my calendar – I realised that an itinerary I'd drawn up as a suggested …

  1. Olivier2553

    Because we have our expensive, lazy, difficult human way of doing things, automation will integrate well with humans only when it manages to do things the same way. Unless we can see how the automation is working inside the black box, and can satisfy ourselves that it is reasoning the right way, we cannot trust it.

    1. werdsmith Silver badge

      It's different from the early automation that happened in agriculture and textiles, then. Because people in those days could see how Jethro Tull's seed drill planted in regular rows much better than the labourer's random scatter, and how a Hargreaves spinning jenny or Jacquard loom could work faster. This is what provoked the Swing Riots and the Luddite movement.

      But as these devices upped productivity, it was necessary to have more human labour to deal with the increased crop yield and the extra yards of cloth.

      A bit like a server that is starved of RAM. Give it more RAM and it does more but then the CPU can't keep up. Give it more CPU so it does even more and then IO can't keep up. Improve IO and get your 8 hour process down to 1 hour. So now you can run 8 x 1 hour processes in that 8 hours, rinse and repeat.

      Even today there are still food crops that can't be picked and processed by machine without so much loss and damage that it means low paid human labour is still economically viable. Harsh, boring repetitive tasks that lead to back problems. I expect most of these workers would prefer to be doing something else for a living.

      1. Mongrel

        Even today there are still food crops that can't be picked and processed by machine without so much loss and damage that it means low paid human labour is still economically viable. Harsh, boring repetitive tasks that lead to back problems. I expect most of these workers would prefer to be doing something else for a living.

        I suspect there's also a side to how much the humans are valued in those situations. When you have virtually non-existent workers' rights combined with super-low wages (by Western standards) and a fussy/precise job, then why bother developing a machine to do it - that's expensive!

        They're probably not insolvable problems it's just probably not good value to solve them.

        1. BinkyTheMagicPaperclip

          The reason to develop a machine to do it is that once it's been done, it generally works, and creates more money for the people investing money.

          This is already happening in third-world countries; cheap labour is not always a guaranteed way to improve their society.

          1. Mongrel

            Logically, yes. But you run into the large upfront cost of developing, designing and building these machines, combined with the ever-popular short-sightedness of the profit machine: "Spend lots now to recoup the benefits in a decade, or shareholder bonuses now?"

            As for "improving their society", how much of the cheap labour is being exploited by non-local companies who wish to take advantage of the local conditions and have no interest in anything but slashing costs?

            1. BinkyTheMagicPaperclip

              It isn't 'logically yes', it's Actually Happening.

              Of course the labour is being exploited, but it pumps money into the local economy and allows them to develop.

        2. MachDiamond Silver badge

          "They're probably not insolvable problems it's just probably not good value to solve them."

          The same used to hold for fast food jobs and other low skill tasks. With rising minimum wages, layers of H&S, months of paid leave and a sexual harassment lawsuit twice a week, McSwinies is born. No humans at all other than a mobile service tech and presumably, someone to refill the machines.

          The low-paying ag jobs are changing too. Some fruit and veg is going to greenhouse growing to control shape and make harvesting simpler, along with longer/year-round growing seasons. Automatic cow-milking machines make yet another farm hand redundant. I expect that soon chickens will be encouraged to walk through gates that weigh them, measure them and send the properly mature birds down the path to The Machine, where they'll end up under polythene with no human intervention at all.

          All of this at the same time as governments are worried about falling birthrates in the first world. NSSherlock. With automation killing off the traditionally good middle-class jobs, (potential) parents have to worry about supporting their children farther into adulthood if they don't turn out to be good engineers, clergy or politicians.

          1. Charles 9 Silver badge

            "With automation killing off the traditionally good middle-class jobs, (potential) parents have to worry about supporting their children farther into adulthood if they don't turn out to be good engineers, clergy or politicians."

            Except, typically, in poor countries, human capital becomes the easiest to raise. Thus, poor families tend to have more children to raise the odds: not that the parents can support the children but the opposite: that someone lives long enough to support the parents in their dotage.

  2. Warm Braw Silver badge

    Lied to – by a computer

    I recently received an e-mail from Amazon saying that, as I'd misaddressed an order, it had been sent to the wrong depot. Amazon had helpfully corrected my mistake, it continued, but the parcel might be delayed by 24 hours. In the meantime, would I please correct the error in my address book to avoid this happening again?

    Except, the address was perfectly correct - and it's not the first time that Amazon have made the same mistake: on each occasion the order was intended to arrive for a specific event and in each case the order for West Yorkshire was initially routed to the South Coast. The first order was rerouted back to the correct depot and then promptly sent back whence it came. The second entered a black hole in Sheffield from which it never emerged. The Indian customer service staff with the Irish phone number were very sympathetic, wanted to "own" the problem as, no doubt, their instructions dictated, but did not have the tools to do so.

    This is the future - computers telling us that their mistakes are our fault, and providing a selection of bedtime stories that the child minders in the call centres can read to us to settle us down. The reason is quite simple - the IT isn't there to improve the customer service experience, it's there to reduce the customer service cost; it permits a much closer estimation of the lowest level of customer service that doesn't unduly impact the bottom line, and that's what it delivers.

    1. Locky

      Re: Lied to – by a computer

      The second entered a black hole in Sheffield Rotherham from which it never emerged


    2. Robert Carnegie Silver badge

      Re: Lied to – by a computer

      If the address is not as shown here

      then I have rather less sympathy. But Royal Mail's database has problems too. If your record is wrong then you can complain to RM to get it fixed. Eventually.

    3. Keven E

      Re: Lied to – by a computer

      "Automation brings two things front of mind in most people: A) that it will put us all out of work, or B) rise up and kill us."

      Perhaps we'll be *out* of work if we are dead.


      A coworker sent me an email regarding our itinerary in which she mistyped/misidentified the departure date as Wednesday the 15th (plus time of departure), when actually Tuesday was the 15th; the correct departure date was Wednesday the 16th. Without either of us adding a specific link or event to a calendar, Google decided to put that incorrect information into both of our calendars on that Tuesday.

      Two wrongs don't make a right, and one wrong and one right still make a wrong.

      1. Anonymous Coward

        Re: Lied to – by a computer

        "A coworker"

        I don't think I've ever seen anyone ork a cow. Pictures please?

    4. MachDiamond Silver badge

      Re: Lied to – by a computer

      We have to return to Douglas Adams yet again,

      Breakfast in New York, lunch in Paris, luggage in Murmansk. Murmansk being the only known exception.

  3. Wibble

    I detect a stall condition. I cannot adapt on the fly to troubleshoot. I don't have any concept of 'self' so I will fly into the ground.

    Self-driving cars will 'kill' other creatures and humans "to protect the occupants" and have no remorse.

    Stupid people did this in the name of progress. What a great future awaits us.

    1. werdsmith Silver badge

      Self-driving cars will 'kill' other creatures and humans "to protect the occupants" and have no remorse.

      Humans do that too, most do have remorse but remorse isn't really much use to anyone.

      1. BinkyTheMagicPaperclip

        It is, it may stop them doing it again. Obviously the dead people don't come back to life, though.

        1. RegGuy1 Silver badge

          If only there was a technological solution to that...


          1. BinkyTheMagicPaperclip

            Are you reaching into your pocket for some electrodes, Dr. Frankenstein?

            Also, to add an obvious point to my earlier 'obvious but needed to be pointed out' post, remorse may not only stop someone acting like a dick in the future, it may influence others to stop/not start being a dick too.

            I'm not a psychologist, but a minute's thought seems to show remorse being an essential part of a functioning (lasting) human society.

            1. Anonymous Coward

              And a minute's observation will show you remorse being a sorely lacking part of today's human society.

  4. Bert 1


    Timzones

    Google Calendar automatically adjusts for timezones by default.

    So when I enter a ferry time for Calais in local time (as stated on my booking), Google helpfully adds an hour to it as I am currently running on BST.

    I have nearly missed ferry crossings as a result of this.

    It took me 30 minutes to find and turn off such a helpful feature.
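    The arithmetic behind that stray hour is easy to sketch (a minimal Python example; the 14:00 Calais departure and the July date are hypothetical, standing in for any booking):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library from Python 3.9

# The booking says "14:00" in Calais local time, but the calendar is
# running in Europe/London (BST), so it stores the instant as 14:00 BST.
stored = datetime(2023, 7, 1, 14, 0, tzinfo=ZoneInfo("Europe/London"))

# Rendered in the ferry port's own zone, that same instant is an hour later:
local = stored.astimezone(ZoneInfo("Europe/Paris"))
print(local.strftime("%H:%M"))  # 15:00 - an hour after the boat has gone
```

    The setting Bert hunted down amounts to telling the calendar which zone a typed time belongs to, rather than letting it assume the device's current one.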

    1. Is It Me

      Re: Timzones

      Why not help others and give some hints as to how to do this?

      1. Captain Scarlet Silver badge

        Re: Timzones

        Well you Google it of course

      2. Bert 1

        Re: Timzones

        What, and spoil the fun!


    2. JeffyPoooh

      Re: Timezones

      My Google Calendar gets all excited twice a year.

      Here's what happens:

      1) Daylight Savings Time (DST) switches in or out (in either direction)

      2) This moves the next DST transition (months later) forward or back by exactly one hour

      3) Google Calendar gleefully announces that the next DST transition has just been rescheduled

      Dumb dumb dumb...
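      The transitions were, of course, fixed in the tz database all along - nothing has been "rescheduled". A small Python sketch (the zone and year here are arbitrary) finds the next one simply by scanning for a change in UTC offset:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # standard library from Python 3.9

def next_dst_transition(tz_name: str, start: datetime):
    """Return the first daily sample whose UTC offset differs from the
    offset at `start` - i.e. the first instant checked after a DST switch."""
    tz = ZoneInfo(tz_name)
    offset = start.astimezone(tz).utcoffset()
    t = start
    for _ in range(366):  # look at most a year ahead
        t += timedelta(days=1)
        if t.astimezone(tz).utcoffset() != offset:
            return t
    return None

start = datetime(2023, 1, 1, tzinfo=ZoneInfo("UTC"))
print(next_dst_transition("Europe/London", start).date())  # 2023-03-27
```

      (The switch itself happens at 01:00 UTC on 26 March; daily sampling lands on the following midnight.)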

      1. veti Silver badge

        Re: Timezones

        Timezones are a terrible example of the problem. The fact that a lot of software specs have not devoted enough thought to this narrow subject should not surprise us. It's not the sort of thing that excites great passion in programmers, nor does it seem serious enough to be worth withholding a release over, so it gets forgotten.

        That doesn't mean it's impossible. Just - so boring that no one can bring themselves to think it through. It's actually an area where a robot would probably write better software than a human, if only someone could be bothered to make it.

    3. Bronek Kozicki

      Re: Timzones

      I missed a dentist appointment many years ago because of exactly such "mis"-feature. The clincher is that this has happened with an old Motorola Razr dumb-phone.

    4. Anonymous Coward

      Re: Timzones

      Seems like "timzones" should be an Apple problem, not a Google one.

    5. Anonymous Coward

      Re: Timzones

      It also automatically adds my contacts to my Google account... when I type it into my phone... but fails to actually save/load it into the phone I'M TYPING IT INTO!!!

      I was gobsmacked today, as the contact I just typed in vanished, but did show if I tried to retype it... so the system was flagging/finding it, just not keeping it on the phone.

      I wondered why some of my contacts seemed to have vanished after "oh, sorry, I'm sure I saved your number last week". No idea how/why though, as others have saved fine and normally.

    6. Mark 85 Silver badge

      Re: Timzones

      It could be worse. Let's just be glad that the programmer for this didn't set his home time zone as the default for all times. A fellow techie, deep in the bowels of writing his program, had some contract work done on some "small stuff" that would take a load off his back. Yep... the contractor hardcoded in his own time zone instead of having the code sort out the users' time zones. It wasn't much of a difference... if 12 time zones isn't "much".

    7. MachDiamond Silver badge

      Re: Timzones

      The first mistake was allowing Google to sort out your calendar.

      I still use a lined pad or a simple text document for itineraries. They don't power down or bing or change things to be helpful. If I allowed Google or Apple to helpfully add items to my calendar every time there is a date in an email, I'd spend half of my day weeding out stupid shite. One of the problems is that I am currently working on a couple of books, and my writing partner and I are working on timelines for a couple of companies and technologies. We are throwing dates back and forth like grenades.

  5. Crypto Monad

    Robot minders

    We will need a new generation of workers – "robot minders" who spend their careers watching intently, tuning constantly, and keeping all of our very powerful yet very stupid automation from thoughtlessly hurting us.

    That's essentially what the sysadmins at AWS and Azure and Google do, day-in, day-out.

    But they have the advantage that they can see and control the entire stack end-to-end, and can get hold of detailed documentation and/or source code and/or expert humans for each component.

    When you have autonomous black boxes talking to each other, all built by different people and all modifying and "correcting" data as it passes through (so that it all becomes rumours rather than truth) - that's when things explode in unexpected ways. This behaviour can be seen in the simplest of decoupled systems, such as routers talking RIP to each other.
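    The classic failure mode of that rumour-passing - "count to infinity" - fits in a toy two-router sketch (not real RIP, just the distance-vector arithmetic it uses, with RIP's metric ceiling of 16 standing in for "unreachable"):

```python
INFINITY = 16  # RIP's "unreachable" metric

# A was 1 hop from a destination; B reached it via A at 2 hops.
# A's link to the destination dies, but B keeps advertising its stale
# metric, so every exchange of rumours raises the cost by one hop.
a, b = INFINITY, 2
exchanges = 0
while a < INFINITY or b < INFINITY:
    a = min(INFINITY, b + 1)  # A trusts B's stale route via B
    b = min(INFINITY, a + 1)  # B then re-learns the worse route from A
    exchanges += 1
print(exchanges)  # 8 exchanges before both finally agree it's unreachable
```

    Neither box is lying, exactly - each is faithfully "correcting" its table from the other's rumour, which is the point.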

  6. Roger Greenwood

    "We will need a new generation of workers – robot minders"

    That describes exactly my role 40 years ago in the maintenance department of a process plant. Lazy, expensive, messy and entirely necessary for when things went wrong. This time applied to a "new" industry, it really is just the same. If things went badly wrong in those days the result was a loud bang and a lot of mess to clear up. Remember Flixborough?

    The other thing we did, of course, is try to stop the operator interfering with the automation :-)

    1. Jemma

      Re: "We will need a new generation of workers – robot minders"

      I can just *see* pTerry...

      "Where's Flixborough...?"


      The problem is that if a blinkered, autistic, racist programmer (this is mostly Inbredistan tech, after all) programs an AI or robotic system - you will get a blinkered, autistic, racist AI.

      You really don't want an unthinking, blinkered AI when you are mass producing Hexanitrohexaazaisowurtzitane - bad things will happen - and we aren't talking about lab glassware buried in the wall, we are talking about the lab buried in the next county, including most of the surrounding town. As pTerry put it: "if everything was perfectly placed you got a passable cup of tea, if it wasn't it stripped the plaster off the wall and handed you a passable cup of cat"

      You don't want a racist and armed one in the NYPD - it'd make the battle of the Somme look like a Time Team re-enactment - although God alone knows what it'd make of Phil Harding. May I introduce you to an old robot saying "does not compute".

      And they're putting them in cars... Driven in all likelihood by Americans, Italians and French (have you *seen* the Arc de Triomphe roundabout?!!!!!). What could possibly go wrong?

      1. werdsmith Silver badge

        Re: "We will need a new generation of workers – robot minders"

        (have you *seen* the Arc de Triomphe roundabout?!!!!!).

        Possibly one place where self-driving vehicles could improve things, if they all worked to the same strict standard and used inter-vehicle communication.

        But that leads me to wonder about the effect of each self-driving car AI developer interpreting standards differently or attempting to hijack standards for their own gain. Like browser wars. That would mean interesting traffic situations.

    2. Jiggity

      Re: "We will need a new generation of workers – robot minders"

      We are all Gregory Powell and Mike Donovan...

  7. Locomotion69

    That is exactly why planes fly themselves - but professional pilots are still there in order to take control if anything goes wrong.

    That is also the same reason why self-driving cars are an ultimately dumb idea, as there will be a lot of "below-average" skilled drivers behind the wheel when things go out of control.

    1. Jiggity

      This reminds me of an article in EVO car magazine ages ago; the reviewer - an accomplished driver - was discussing the (then) new Mitsubishi Evo and how jolly clever all of its driver-assistance gizmos were. The reviewer was very taken with the car's ability to gather up the occasional mess that was their driving and turn it into the skills of the unholy child of Ayrton Senna and Michael Schumacher. They quickly became used to pressing on in all conditions, with the car just throwing traction control and braking down wherever needed to maintain forward progress.

      Which was all lovely, until it wasn't; they were running late one damp winter evening, and pushed a bit too hard. At which point, physics pushed back very hard, because they were so far beyond the envelope when it all let go. Needless to say, the car was stuffed (they were shaken, but not stirred).

      This was someone who spent their life driving cars, and was quite good at it. Given the driving ability of most of the chimpanzees I see driving around, there's going to be a *lot* of carnage when things break down.

      1. vtcodger Silver badge

        I dunno. Seriously. I wasn't great at parallel parking 65 years ago when I learned to drive. And it got a lot harder three or four decades ago when bucket seats replaced bench seats and the corners of the car vanished from the driver's field of view. I think, I'd be entirely happy to let the car handle the details of parking if my ancient Nissan had that capability. OTOH, I've lived much of my life in places that have long snowy winters and I've developed the skills necessary to stay out of ditches and avoid spinning out when stopping. In my experience, the systems in modern cars that try to do those things for me really aren't much good at it. I'd really rather do it myself.

        How about we let machines do what they do well. And let humans do what machines can't?

        1. Anonymous Coward

          Parking/self orientating.

          Yes, but parking/garage duty is not really "self-driving" any more than my cat feeds itself when I put the food down for it. ;)

      2. Anonymous Coward

        Richard Hammond.

        Same happened to Richard Hammond. Watch his interview from after he dropped, flew and rolled that electric supercar off a cliff. He admits it was his fault and an error, but in the interview it's funny to hear him not quite comprehending why.

        "I just drove it around the corner... it did it the last lap fine."

        (Car designer with the electronic stats/video) "You went *much* faster."

        "Nah, I just went the same as the previous lap."

        (Looking at the stats) "The car turned all your inputs into something it could control, and made all the corners... until you pointed it over the cliff!"

        Paraphrased, but much as above: Richard had simply started to forget that the car could do a lot, but not everything... it could not fly. ;)

      3. Unoriginal Handle

        Here's a link to that story -

        And yes, most people haven't a clue how to drive in standard conditions, let alone when they or the weather make them non-standard.

    2. vtcodger Silver badge

      I see that you folks are trying to get to Timbuktu ...

      That is exactly why planes fly themselves - but professional pilots are still ...

      Would anyone feel comfortable flying in an aircraft where Clippy was the ultimate and final authority as to where the airplane was going and how it was getting there?

      1. Waseem Alkurdi

        Re: I see that you folks are trying to get to Timbuktu ...

        Don't fly on an Airbus (and recently, the Boeing 737 MAX) ... because that's been their SOP for decades now.

        1. veti Silver badge

          Re: I see that you folks are trying to get to Timbuktu ...

          And I've been flying on Airbuses for decades now. Seems to work well enough.

    3. Waseem Alkurdi

      That is exactly why planes fly themselves - but professional pilots are still there in order to take control if anything goes wrong.

      I've read somewhere that pilots are now getting very reliant on the autopilot and other automation ... to the extent that they're less able to fly manually than before, despite the training they get (which differs from real-life, because it's simulated, therefore, not that much stress).

      1. Insert sadsack pun here Silver badge

        "training ... differs from real-life, because it's simulated, therefore, not that much stress"

        That's why we at Cowherd Airways train our pilots with realistic stress levels in the simulator - by slapping them in the face, insulting their mothers, threatening their children and giving them the occasional zap with the BOFH-endorsed cattle prod.

        1. Charles 9 Silver badge

          So if their faces are frozen numb, they were orphans who never knew their mothers, bachelor(ette)s, and once worked on live wires so are numb to shocks, what then?

    4. Gene Cash Silver badge

      > but professional pilots are still there in order to take control if anything goes wrong

      The problem is usually that when the automation goes "YOUR PLANE!" then the pilot is usually too far out-of-the-loop and doesn't have enough situational awareness. By the time he comprehends the situation and determines the corrective action, then it's probably another NTSB report citing "controlled flight into terrain"

      1. Mark 85 Silver badge

        And for some reason, we see the same thing in several autonomous car wrecks. We know they're not full autonomous yet, but complacency still exists.

      2. werdsmith Silver badge

        The problem is usually that when the automation goes "YOUR PLANE!" then the pilot is usually too far out-of-the-loop and doesn't have enough situational awareness. By the time he comprehends the situation and determines the corrective action, then it's probably another NTSB report citing "controlled flight into terrain"

        Not in critical phases of flight, the pilots follow the automated flying closely. In other phases there is usually plenty of time between emergency and farm purchase.

        In reality, being out of the loop usually means that training is obsolete on newer aircraft - for example Kegworth, where the effect of engine failure on the ventilation systems was different on the (at the time) newer 737-400 compared to the earlier versions the crew were experienced on. Their cockpit filled with smoke after an engine failure, and so they shut down the remaining good engine, because that was the one they understood drove the ventilation, so they assumed it must have failed. In fact on the -400 the ventilation had changed to use both engines. Amazingly, the test process for the new CFM56 engine variant included no in-flight test; only bench tests were carried out before it went into service.

        This happened to a pilot with over 13000 hours on type, automation of the emergency engine shutdown process might have prevented the crash.

    5. Mark 85 Silver badge

      Those who grew up as drivers will hopefully negate the problems with AI cars. Those who either didn't, or don't have enough experience, will ride the car blindly off the road, over the railroad tracks and off the cliff. For how this will be... think of the Road Runner as the "old driver" who knows some stuff, stays on the road and avoids problems, and the coyote as the new driver going off the cliff.

  8. Herby

    Artifical Intelligence...


    isn't

    I learned that long ago!

    1. Skwosh

      Re: Artifical Intelligence...

      Artifical Intelligence... isn't

      Yes. The seductive thing about much of the current work in AI and pretty much all of the work in machine learning (ML) though is that it is not at all clear to the vast majority of humanity at this point in history how these things work – and if you don't know how something works then you have no real way to gauge what it may or may not be capable of and then, crucially, there is the possibility that it could be magic.

      A lot of people want to believe in magic because it means getting something for nothing. Magic means that instead of having to do the work ourselves we can simply wave our hands and say the right words in the right order and the mop will self-animate and clean the floor by itself. Even in science and engineering there is a tendency to believe in maths as magic where equations are viewed like mystical runes that just work... and oh my there sure is a lot of maths involved in AI and ML.

      A washing machine is a massive labour saving device which acts autonomously – and yet we don't describe washing machines as robots – we don't think of them as delivering something for nothing and we certainly don't think of them as potentially magical. This is because most people have a reasonable in-principle grasp of how a washing machine works.

      The problem with boring old technology like steam engines, washing machines and Von Neumann architecture computers is that once you grasp how they work they become prosaic and un-magical. Understanding squeezes the magic out of things and along with it goes the prospect of getting something for nothing.

      There is no qualitative difference between these new AI and ML technologies and the washing-machine-like technologies that have preceded them. It is narrowly true that these new machines can 'learn', but ML is still a process – the 'learning' is achieved by a comprehensible mechanism – just as washing machines achieve tea-towel cleaning by comprehensible mechanisms or fixed-wing aircraft achieve flight – there is no magic. Ultimately these new AI and ML technologies (even assuming they turn out to be less trouble than they are worth) will become prosaic through being better understood and the idea that we can somehow get more out of them than we put in will dissipate.

      1. Alister

        Re: Artifical Intelligence...

        There is no qualitative difference between these new AI and ML technologies and the washing-machine-like technologies that have preceded them.

        I'm not sure that's the case. Whilst I am the last person to ascribe anything magical to AI, there have been well-documented cases where - for neural networks in particular - the end result has no clearly defined path that even the creators of the device can follow.

        I'm thinking in particular of the experiment where a neural net was tasked with designing a circuit to discriminate between audio tones, using multiple iterations of circuit redesign on FPGAs.

        The resulting, (working) circuit had a number of components which were not electrically connected to the rest of the circuit, but whose removal stopped it working.

        This non-intuitive and unpredictable result cannot be compared to the well-defined, well-understood mechanisms of washing machines and other such devices.

        1. Alister

          Re: Artifical Intelligence...

          Ran out of edit time... It was Adrian Thompson, his paper is here:

        2. Skwosh

          Re: Artifical Intelligence...

          the end result has no clearly defined path that even the creators of the device can follow

          Yes, I've come across that example before too in this context, but as you say, that does not mean there is any magic. Just because a 'design' is arrived at by some sort of random selective process doesn't mean that such a design will somehow be capable of defying the laws of physics - it just means that people will be a lot less likely to understand how it works. The inevitable cost of the 'something for nothing' laziness in using such a process to 'design' something is that you are inevitably not going to understand how it works as well as had you built it and reasoned about it piece by piece - and thus the utility of these systems will be lowered because we will be less confident about how they will behave (or even work at all) in any given situation (not that we are particularly confident even now about how deliberately 'human designed' complex computer systems behave!) There is no free lunch - and I'm sceptical about how useful these things will be in the long run (in certain applications anyway).

          What is happening here, broadly speaking, is what natural evolution does all the time. You make changes - often small random changes - and you see how well the changed version performs (against your 'desirable' criteria) - and then you iterate the variants that work better along with various tricks and hacks to stop yourself getting stuck in a local minimum. Following that recipe can sometimes get you to something that scores very well according to your desirable criteria, but you will more than likely have no idea how it actually works (since all you were doing was accumulating random changes that made it work better). Still no magic though - just The Blind Watchmaker at work - and the potential pitfalls of this blind approach are already pretty well understood by many biologists and others who have to engage seriously with such evolutionary processes (one of my favourite examples, from River Out of Eden, is: a turkey will kill anything which moves in its nest unless it cries like a baby turkey, and if the turkey is deaf, it will mercilessly kill its own babies).
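          That recipe - mutate, score against the desirable criteria, keep what works - fits in a dozen lines (a toy Python sketch; the "design" here is just a bit string, and the fitness function is a stand-in for whatever selection criterion you actually care about):

```python
import random

random.seed(1)  # make the blind watchmaker reproducible

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # the 'desirable criteria'

def fitness(genome):
    """Score a candidate: how many positions match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

genome = [random.randint(0, 1) for _ in TARGET]  # start from pure noise
while fitness(genome) < len(TARGET):
    mutant = genome[:]
    mutant[random.randrange(len(mutant))] ^= 1  # one small random change
    if fitness(mutant) >= fitness(genome):      # selection: keep what works
        genome = mutant
print(genome == TARGET)  # True - yet no single step 'understood' the design
```

          The result scores perfectly, but the process never reasoned about why - which is exactly the opacity problem, scaled down to ten bits.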

          It could also be argued that this approach (trying out random stuff and seeing what works and what doesn't) is what human culture has been doing for millennia anyway with most of its technology development and that the theory as to how things work often gets developed afterwards.

          What I'm saying is that, none the less, the iterative training or whatever process by which ML type systems arrive at their opaque non-linear internal weightings is well understood even if how a particular end result 'works' may not be - though of course it could be understood/unpicked with sufficient effort - but then if you need to put in that effort afterwards then you are eroding the benefit of having gone for the 'lazy' evolutionary design process in the first place (no free lunch again). It all just needs demystifying. In particular, these ML systems can only ever be a kind of highly-obscure lossy compressed representation of their training data.

      2. Anonymous Coward
        Anonymous Coward

        Re: Automation Initiative.

        For automation, it's amazing, I'll give it that. Take those automatic image-combination algorithms: you could have a game/film/painting with new content every time you used it. Novelty, but not something I'd rely on to guess whether it's a paper bag or a bicycle in the road!

      3. Mark 85 Silver badge

        Re: Artifical Intelligence...

        Part of the problem here is the buzzword "artificial intelligence". Yes, dishwashers, washer/driers and other appliances are still called "automatic". They follow a simple pre-programmed path, either by chip or clockwork. The "intelligence" part is where the buzz runs into problems. Has anyone actually defined intelligence as applied to computers, machinery or airplanes? Not really. It's still something pre-programmed.

        Humans are "programmed" too, but we have that spark of 'what if' where we think outside the program. Mechanical devices have sensors; we do too, but ours are tied to a brain that sorts things out.

        You are correct, there is no magic, no man behind the curtain, etc. with AI. Perhaps AI should be called something else until mankind can get that "spark" to think, react, etc. outside the program?

  9. BebopWeBop

    removing airbags (and an associated spike on the steering wheel) might improve injury and death rates - in the medium to long term, as Darwin gets to work

    1. Charles 9 Silver badge

      But it tends to have an adverse effect on the innocent casualty rate, as no one can really handle getting blindsided by someone too oblivious to care. After all, we have a limited field of view and faulty memory; there's always a blind spot.

  10. 96percentchimp

    It all starts with bad design

    "Many of these systems assume the primacy of automation – that it's always going to work as desired, while it merely works as designed. So much of the time designers live so far away from the rough edges of their creations they never see beyond their own desires." THIS.

    As a humble journalist who's reviewed many products and helped to set up websites for publications, the bad automation always iterates from designers who create a bad spec, through the engineers (software or hardware) who implement it, all detached from the reality of the user. Automation only amplifies the failures of an industrial philosophy where the real world use cases crash into the product's imperfections long after they're baked in.

    Whether it's bad user experiences, biased search & social media algorithms, poor security or fatal flaws, they're amplified by a mindset where the end user is the beta tester and errors are fixed by endless iterations which introduce more errors. The difference with automation is that now we've built stupid machines to be stupid for us, and we can't see where they're being stupid.

    1. JetSetJim

      Re: It all starts with bad design

      I am wondering whether to spike my emails to folks on Gmail with white-text itineraries at the end of each email to see if GCal ingests those automatically...

  11. Anonymous Coward
    Anonymous Coward


    I wouldn't worry about complex automation by computers; we don't worry about it now when humans do complex things. The best thing about computer automation is that it gives people something else to blame when it goes wrong, even though at the end of the day we all know someone has to tell it what to do. In the future, when they look back on cause-of-death records the way we now look back and wonder what "winde" was, they'll be wondering how people died from "Computer Error".

    1. veti Silver badge

      Re: Apathy

      I agree. People are just as capable of being dumb as computers. If you outsource your planning to someone, whether fleshy or digital, and then don't check the results - before they become time critical - you deserve what you get.

  12. SPiT

    Requirements and Ownership as well as design

    The problem is much longer standing than just the current incarnation, and involves a much wider issue than mis-design by the developers of the "product". The traditional problems in administrative automation apply just as much.

    For example, many years ago our goods inwards department accepted a delivery and flagged it as delivered and ready for payment. All the administration staff involved would quite happily have paid out £250,000 for what was in fact a delivery of 12 boxes of printer paper, had there not been rapid intervention by another department. The issue was that the administrative computer system had incorrect requirements: it was built around the idea that any purchase results in a single delivery, so if there were multiple deliveries the system didn't work right. BUT the managers in charge owned the system and were responsible for the requirements, and therefore, because they were in charge, the requirements couldn't be wrong, and any junior criticising the system was criticising their superior and should be disciplined rather than listened to. Further, in operation the system was considered to be acting for the management and therefore acted with the authority of management: the admin staff were treated as junior to the system and told to simply do what the system said.

    As you can see, this quickly leads to massive errors, but being just an administrative system it principally generates bureaucratic errors, which aren't a new thing. You can see the same thing in non-computerised processes going way back into history. However, it is a major issue that these same human psychological failings still cause the mis-design of modern solutions. I have had the conversation with a quite senior individual about what their defence was going to be in court when someone died in a situation where the actions of a computer system we were providing were a contributory cause.

    1. werdsmith Silver badge

      Re: Requirements and Ownership as well as design

      This is a good example of a weakness that also exists in humans. Scammers sometimes take advantage of routine work processes such as invoice requests for payment: a human receives an invoice marked with some urgency and raises a payment, when in fact it is a bogus invoice for something never purchased. Using a scatter approach and enough attempts, a scammer will succeed in getting a few paid.

      In this case the automation works: if the invoice is paid through a purchasing system, it will validate the information against the original sales order and expect the user of the goods to have verified its receipt and good order.

      In this situation, it can be a human working in a process without sufficient controls, taking a shortcut to bypass the automation so the system balances at month end, who causes the loss.

  13. JeffyPoooh

    Everything is the same...

    Humans are somewhat the same. They treat everything as if it's the same as the last one.

    A hundred projects, all very different, all very unique.

    Some spreadsheet jockey prepares a table, which implicitly assumes that they're all the same.

    SJ "What's your ECD?"

    Me "I don't know."

    SJ "Why don't you know?"

    Me "Because the project hasn't started yet. It's not funded, not authorized. I don't know when it'll start."

    SJ "Oh, I don't have a column for that little detail..."

    Me "Well, you'd better add that column."

    SJ "But I've already maxed out the PowerPoint slide width; so I can't. Could you start the project anyway? Please..."

  14. Reader2435

    Artificial Intelligence

    The clue's in the name. It's not 'synthetic intelligence' (which might be real but synthesised) but artificial - not real. As my AI tutor explained to us in the early 1990s, AI can be very powerful but its performance degrades sharply at the edge of its problem domain.

    In other words, when anomalies arise, AI fails big.

    Gregory Travis' recent analysis of Boeing's MCAS system, "How the Boeing 737 Max Disaster Looks to a Software Developer", is a very powerful exposé of how dangerous AI is in safety systems.

    1. Doctor Evil

      "Gregory Travis' recent analysis of Boeing's MCAS system "How the Boeing 737 Max Disaster Looks to a Software Developer" is a very powerful expose of how dangerous AI is in safety systems."

      It's on IEEE Spectrum here -- does require a (free) sign-up to the site.

      1. Anonymous Coward
        Anonymous Coward

        No login?

        Free snippet here:

        But a non-login option would be nice (canna be bothered to make a throwaway nor give my real one).

  15. Robert Carnegie Silver badge

    To err is human.

    Machinery can go wrong more efficiently, but a human being with lack of imagination or lack of commitment can ruin a customer's day without mechanical assistance. See "Not Always Working" i.e. See all the other "Not Always" articles, too. Then, see puppy and kitten pictures, because unaccustomed exposure to all that concentrated human fallibility leaves you in need of a corrective. The internet will provide; try "Cute Emergency" for instance. (Your workplace may consider this "social media", but it's the GOOD kind.)

  16. Flak

    "Automation without transparency makes the unpredictable dangerous."

    Brilliant statement!

  17. Dr_N Silver badge

    Call Me Kenneth says,

    "Hello Fleshy Ones!"

  18. rsondhi

    Consulting Systems Engineer

    This article reminds me of the two Boeing 737 Max jets that crashed, killing a combined 346 people. There was so much opaque (non-transparent) automation in the flight control system that the pilots could not manually override MCAS, which was forcing the nose of the plane down because faulty angle-of-attack (AOA) sensor readings were telling it the plane was stalling when it wasn't. A really sad situation and very bad engineering on Boeing's part. So many innocent men, women and children died because of the stupidity of this aircraft's flight control system design. The testing of this flight control automation wasn't properly done either; it should have caught this serious issue.

  19. Anonymous Coward
    Anonymous Coward

    Our company outsourced the robot minding to Accenture

    because their mechanical overlord plays Go with ours.

    I hear they’ve promised to save costs by automating the process.

  20. CheesyTheClown

    It’s not about becoming obsolete.

    If you consider that the heart of the issue is unsustainable capitalism, it becomes clear. And even if it were about obsolescence, that has little to do with automation; it's about centralisation and enhanced logistics.

    We simply overproduce.

    Let’s use a simple example.

    Ground beef has a limited shelf life. It can survive quite a long time when frozen, but the meat will degrade and no longer be edible after a short time when thawed.

    We as shoppers, however, are turned off by meat that is frozen. It looks unattractive, even though we should know that almost immediately after slaughter the meat goes into frozen storage. Even at a butcher we are attracted to meat hanging on hooks in cold storage; yet when the meat is on a shelf, we buy the fresh, red, lovely pack, transport it thawed to our houses, refrigerate it, and hope we use it before the "best before" date passes.

    Grocery stores also know that shoppers almost never buy the meat products that are the last on the shelf. They can charge more for thawed meat than for frozen, so they ensure there is always enough thawed meat on display to attract shoppers and charge them more. They also waste packaging: to make the meat last just a little longer, they use sealed packaging that keeps it prettier for a little while longer, and the packaging now even has fancy little freshness-measuring devices... which are not recycled. In order to produce (and overproduce) enough ground beef to have enough left over to waste approximately 30% (a real number here in Norway), we are left with massive amounts of other meat that must also be sold and suffers the same problems.

    When you purchase meat online for home delivery, meat can be kept frozen during the entire process ... up to but not necessarily including delivery for the “last mile”. We don’t need to produce extra to make the meat look more attractive to consumers. We can expect the consumer to receive fresh lovely red ground beef with no need for freshness sensors, vacuum sealed packaging, etc...

    Then use more advanced, larger-scale marketing mechanisms: if people are buying too much ground beef, algorithms can raise the prices of cheaper meats and lower the prices of more expensive cuts to convince shoppers to eat steak instead of burgers tonight. We can sell 400 grams or 550 grams or however much, because meat will be packaged to order. We can cut deals with pet food and pig slop companies to simply give them byproducts in barter, along the lines of "if we give you pig feed worth $1.5 million, you give us bacon worth $1.5 million", which would probably count towards tax credits for being green and also leave the additional money in a form that can be written off.

    This works great because people buying online buy based on photos and text. Marketing is easier. The product always looks perfect prior to purchase.

    By needing to produce 30% less, we need 30% fewer cows. Less movement of livestock or frozen sides. We need fewer butchers. We can use more machines. We'll use less packaging. We won't need freshness sensors. We can package in biodegradable paper or reusable, deposit-based containers. We can eliminate the printing of fancy labels. We will reduce shipping of product by 50% by using more efficient packaging and by shipping 30% less product to begin with. We can reduce the consumer fuel consumption, car repairs and tyre degradation associated with shopping.

    By enhancing logistics and centralising as much as possible, we will eliminate massive numbers of jobs. Initially the result will be people spending more time unemployed and, believe it or not, more time humping, reproducing and creating more people who have fewer jobs available to them.

    As such, we need to start sharing jobs. People will work 50% of what they do today. This means they'll have much more time to manage their household economies. They'll eat out less and spend more time cooking, which will reduce dependence on restaurants. They will also have less disposable income, as they'll be forced to spend more time entertaining themselves. They will think more about their meals and waste less food preparing them, because they'll know that when they buy chicken breast on sale they can use half today and half two days from now. It won't be a case of "I planned to use the other half, but ate out because I got stuck in a meeting."

    People will order groceries for delivery, which means the grocery stores that used to be "anchor stores" will become less important, and people will stop saying "let's grab a taco and some ice cream next door to the grocery store while we're out already". As such, the smaller stores that were never anchors themselves will become less interesting.

    This was a simple example, and it barely scratched the surface. It has so little to do with automation. It’s capitalism and we just have too many meat sacks to keep it maintained.

    1. Charles 9 Silver badge

      Re: It’s not about becoming obsolete.

      "As such, we need to start sharing jobs. People will work 50% of what they do today."

      You forget, that means they get paid 50% less, meaning they start missing the bills, they lose their homes meaning they have no place to cook. People work TWO jobs, even work nearly every waking hour, and yet they STILL can't pay their spartan bills.

      And as for the O-word, there's always the dreaded retort, "Care to be first?"

  21. werdsmith Silver badge

    Capitalism has its faults, but it just seems to fit the human psyche (which has not evolved to keep up and is still very animal) better than anything else that has been tried. Some of the other systems are great as ideals, but for them to work well all people have to buy in and conform. Because we are human there is enormous variation in how people behave, and the versions of capitalism seem to be the best known way of accommodating this.

    I guess somebody can point to a community in some part of the world that manages, smoothly and without draconian punishment, to operate some kind of egalitarian system, but these will be a few hundred people at most. None of them will be societies of millions upon millions of people operating together.

    So automation as an evolution of capitalism seems as inevitable as the sun rising tomorrow. Just as inevitable is finding a way to use the humans in the population to earn reward for supporting the big cheeses.

    1. Charles 9 Silver badge

      The problem becomes, what if Capitalism in and of itself is also insufficient? I mean, we can point to things like the Gilded Age to show that capitalism leaves plenty of people (including innocent people) behind to die. If it's the best option of the lot, that means NO option is sufficient and we're basically just circling the drain.

  22. dmlee

    When automation goes wrong

    The problem with automation of systems is the exceptions to normal function.

    If manual invoice entry costs $10 per transaction and correcting an invoice error costs $15, the occasional error makes little impact.

    If online automatic order entry costs $0.10 per transaction, and correcting an order error involves two humans (including a manager) and costs $100, then a few fat-fingered customers could send your business broke.
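    The arithmetic above can be made concrete. A quick back-of-envelope sketch in Python, using only the illustrative figures from the comment, shows the error rate at which the "cheap" automated path actually becomes more expensive per transaction than the manual one:

```python
def expected_cost(base, fix, error_rate):
    # Expected per-transaction cost: base processing cost plus
    # the expected cost of corrections.
    return base + error_rate * fix

# Illustrative figures from the comment above.
MANUAL = dict(base=10.00, fix=15.00)
AUTO = dict(base=0.10, fix=100.00)

# Break-even error rate p: 10 + 15p == 0.10 + 100p  =>  p = 9.90 / 85
break_even = (MANUAL["base"] - AUTO["base"]) / (AUTO["fix"] - MANUAL["fix"])
print(f"break-even error rate: {break_even:.1%}")  # ~11.6%
```

    So with these (made-up) numbers, automation stays cheaper until roughly one transaction in nine goes wrong; the danger is that an unhandled exception path can push the effective correction cost far beyond $100, which drags that threshold down fast.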

    Ideally, order correction should also be an online process, designed in as part of the system.

    When attempting to automate business systems, it helps to first evaluate and refine the existing manual system.

    Otherwise: Garbage in -> Garbage out. Just making a mess faster.

    AI attempting to learn and replicate the manual system, without the insertion of intelligent thought, is a recipe for disaster.

    "I really hate this darn machine, I wish that they would sell it. It never does quite what I want, but only what I tell it."
