Why you'll never make really big money as an AI dev

Among the stupider things I said in the 1980s was a comment about Artificial Intelligence, including neural nets - or perceptrons as we called them back then - saying we needed "maybe a processor that worked at a hundred megahertz and literally gigabytes of storage". I also believed that following our success using Fuzzy Logic …

  1. aenikata

    There are plenty who haven't made good money. However, the primary assertion is that the technical architects make all the money. They tend to get good pay, but there are plenty of developer jobs around that pay well above the average salary without requiring you to be an architect.

    The downside for Machine Learning is trying to communicate why what you're trying to do is hard, and making people understand what you can do (as well as what you can't). Given the limited understanding of statistics in general, it's a challenge to get people to move from the operational business information systems that provide basic dashboards and KPIs on to decision support systems that can help with future planning and higher-level decision making. Both are valuable, but the former is much more commonly used than the latter.

    All that is focused on business operations, though. It ignores that the last 15 years have seen huge improvements in the state of the art for driver aids (and partial autonomy), speech and image recognition and many other areas, to the point where we assume that Facebook will recognise our friends in photos better than we do, and that we can ask Alexa to play some obscure band and it will (generally) understand what we asked for. In a noisy room, even. These are the areas where people can see the potential, not business planning.

    Those are also the areas where the architects who don't understand AI can't help much. They can't really help turn your also-ran system into one that has an edge over the competition. There are some pretty serious knowledge requirements for the architecture planning, too - scaling to handle streams of data that you can't reasonably store.

    The point is, though, that if you look in the right places and work hard there are opportunities in many niches, and well paid ones in all areas of IT. And if the rates aren't good enough for you, then maybe you need to work at getting on the BBC highly paid list as a presenter instead. But the real money, the big money, goes to those who have an idea (and there are many untapped uses for AI) and who go out and build and sell that idea for themselves, not someone else. They're the ones that make the rich lists, not the architects.

  2. Anonymous Coward
    Anonymous Coward

    "Backward chaining is appearing as a “new” technique"

    Er, are you confusing backward chaining with backpropagation? Confusing perceptrons and ANNs is understandable - they at least share a common root, but the only thing backpropagation and backward chaining have in common is the word "back".
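
    For anyone who hasn't met the older technique, here's a minimal backward-chaining sketch in Python (toy rules and facts of my own, purely illustrative) - note there isn't a weight or a gradient in sight:

      # Toy backward chainer: goal-driven rule inference, nothing to do with backprop.
      # Each conclusion maps to the lists of premises that would establish it.
      RULES = {
          "is_bird": [["has_feathers", "lays_eggs"]],
          "can_fly": [["is_bird", "not_penguin"]],
      }

      FACTS = {"has_feathers", "lays_eggs", "not_penguin"}

      def prove(goal, facts=FACTS, rules=RULES):
          """Work backwards from the goal towards known facts."""
          if goal in facts:
              return True
          # Try every rule that could conclude the goal; all its premises must be provable.
          return any(all(prove(p, facts, rules) for p in premises)
                     for premises in rules.get(goal, []))

      print(prove("can_fly"))  # True: chained back through is_bird to the base facts

    Backpropagation, by contrast, is numerical optimisation of network weights - a different universe entirely.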

    1. Destroy All Monsters Silver badge
      Holmes

      Re: "Backward chaining is appearing as a “new” technique"

      Yes, that struck me too.

      Plus backward chaining and forward chaining are now merging in LPS.

      It's about time.

      Yes, I know, a very primitive version of this existed in PLANNER in the '60s, as Carl "I invented everything, but earlier" Hewitt keeps saying.

      Also, "Blackboard Architectures". These are VERY 80s. Basically a device useful when you don't really know how to attack a problem and the solution is "throw specialized agents at it that shall communicate via a common database know as a blackboard". How exactly the inter-agent communication should be formalized was never really spelled out. I still have the Addison-Wesley Fat Book "Blackboard Systems", maybe I should have it scanned and put online. I would imagine that today problems would be more often solved by additional fat injections of appropriate mathematics in search and optimization so that one knows what one is computing and can even get performance stats.

      1. Dominic Connor, Quant Headhunter who wrote this article, really, honest

        Re: "Backward chaining is appearing as a “new” technique"

        Blackboard systems are very *1970s*, like my taste in Music.

  3. John H Woods Silver badge

    Really big money ...

    ... cannot be made on salary.

    1. Anonymous Coward
      Anonymous Coward

      Re: Really big money ...

      You made really big money, but the bourgeoisie stole it.

    2. macjules

      Re: Really big money ...

      Tell that to the CEO of Barclays, certain senior traders at JP Morgan, the chairman of BAT Industries ... the list is endless.

  4. Peter2 Silver badge

    Having programmed primitive AIs, I still believe that I am not in any real danger of seeing a truly intelligent AI in my lifetime.

    The definition most people use to decide an AI is intelligent seems to be "it did something I programmed it to do, but didn't expect it to do", which could apply equally to a chess program from 25 years ago or an overly complex macro in Excel.

    1. Dylan Byford

      "A macro in Excel ... did something I programmed it to do, but didn't expect it to do"

      Hurray! Can I claim to have been coding AI most of my working career then?

  5. jMcPhee

    Not always...

    Look at all the SAP consultants who make piles of money selling a "Knowledge Management" system which ends up being little more than a large shoebox full of electronic index cards.

    Perhaps a better term would be "AI Shaman"?

  6. amanfromMars 1 Silver badge

    A Long Tried and Well Beta Tested Route and Proven Successful Root ...

    .... to Obscene Fortunes of your Own Choosing

    Weaponise AI simply, DC, and you will be showered with riches beyond your wildest dreams. And nowadays does such easily start to be capitalised in the trillions of dollars ....... given the unbelievable destruction AI can certainly deliver to key strategically vital systems and virus-ridden networks ..... with hardly any being so exposed and weakened and more liable every day to flash crash attack as the paper tiger backed dollar itself.

    And that crazily dependent upon the acceptance of myth and legend system realises full well the catastrophic weakness of its own contrived and false market resultant, ponzi position and fears the emergence of truth which will destroy ....... well, a fabulous surreality is an apt APT and Advanced Cyber Treat which can survive and prosper, but only if they acknowledge the current system of things to be so and that they can also be under sustained attack from anonymous superior forces with as yet unknown sources of almighty power and fantastic energy, if they choose to ignore the clear information revealing NEUKlearer HyperRadioProActive IT and IntelAIgents ..... Spookery which suffers not the Fool nor their Tools and the Gook, nor the Geek or the Freak.

  7. GidaBrasti
    Pint

    Prolog IPA

    Have a pint from me, sir, just for mentioning my beloved Prolog

    1. Destroy All Monsters Silver badge
      Thumb Up

      Re: Prolog IPA

      PROLOG LIVES MATTER!

      May I interest the Gentleman in Mercury (Haskell as Prolog), Answer Set Programming, Lambda Prolog (see also: Uniform Proofs as a Foundation for Logic Programming), or even the experimental Bedwyr?

  8. Mark Honman

    Good to have Dominic back

    "Neural Networks were a joke in the 1980s. I built one, for a given value of "built" since it never ever did anything useful or even go wrong in a particularly interesting way."

    Some Transputer-using friends got a bit further than Dominic, then... they trained their neural net with photos of team members, using a wheelie bin as a control. Despite their best efforts, the net never managed to distinguish the bin from the team's rugby-playing member.

  9. Calimero

    We will construct the brain and it will write poetry

    Are you telling us that if we build 50 beellion artificial neurons they will not be able to write poetry or compose a symphony? You've got to be kidding! Computers (deep learning, to be fair) are already producing music - you may call it cacophony and, ouch, I will have to agree!

  10. TheElder

    AI does not exist

    Re: Having programmed primitive AIs, I still believe that I am not in any real danger of seeing a truly intelligent AI in my lifetime.

    True.

    Quote: "I learned that social scientists have known this for decades and have experimentally shown that people will claim a factor was important in their decision despite only being told it after they’d made the choice."

    I am doing brain mapping at the local university. I have a fair bit of medical knowledge as well as experience in medical engineering. I can tell you the various lobes of the brain and why and what they do, as far as we know at this time. The work I do is all about how we make decisions. The professor I work with gives a nice little talk titled "Why we do the Dumb Things we do." It is all about how we so often make mistakes that are highly influenced by our own internal biases. What we think is right is frequently dead wrong.

    The human brain is not a binary computer. It is a massively parallel neurochemical analogue computer. It does exhibit spikes, when the various proaxonal inputs finally add up high enough to cause a very sudden phase transition. When that happens the entire neural structure of the brain can suddenly synchronize from anterior to posterior in less than 2 milliseconds. This includes the superior frontal cortex all the way back to the posterior parietal and occipital lobes. That is a decision made.

    The idea that it is somehow binary is nonsense. There are over 1400 neurochemicals and proteins that may affect just one synapse. That does not include things like the slight differences in genetic structure that everyone has. Just the ability to talk and listen depends on Broca's area as well as Wernicke's area, along with a dense network of axonal fibres. They often fire randomly, and that creates a lot of background noise. Just blinking or moving the eyes can create signals that are a thousand times stronger.

    There is a lot more I could say but it can be very complicated. The one thing I can say is that the amount of actual electrical power required to simulate a full brain using current technology would be megawatts if it was only one nanowatt per synapse. The brain only uses about 20 watts per hour.
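
    For the curious, the back-of-the-envelope version, using the usual textbook estimate of $10^{14}$ to $10^{15}$ synapses (my round numbers, not data from our lab):

      $10^{14} \times 10^{-9}\,\mathrm{W} = 10^{5}\,\mathrm{W} = 100\,\mathrm{kW}$, and $10^{15} \times 10^{-9}\,\mathrm{W} = 10^{6}\,\mathrm{W} = 1\,\mathrm{MW}$

    which is why even a generous nanowatt-per-synapse budget lands you in the megawatt range.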

    1. Anonymous Coward
      Anonymous Coward

      Re: AI does not exist

      "The brain only uses about 20 watts per hour"

      I don't think that is what you meant to say.

      1. TheElder

        Re: AI does not exist

        Look up watt hours.

        1. Richard 12 Silver badge

          Re: AI does not exist

          Watts per hour is (joules/second)/hours - a rate of change of power, i.e. how quickly the rate of energy use is itself changing.

          Watt-hours is (joules/second) * hours. It's a measure of total energy used.

          Neither of those makes any sense in the context of your post.

          Units matter; the basic difference between energy and power will save your thesis some day.
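
          Spelled out, taking the intended claim to be a 20 W power draw (my reading of it, not your wording):

            $P = 20\,\mathrm{W} = 20\,\mathrm{J/s}$, so over an hour $E = P \times t = 20\,\mathrm{W} \times 1\,\mathrm{h} = 20\,\mathrm{Wh} = 72\,\mathrm{kJ}$

          "About 20 watts", full stop, is the sensible statement; "20 watts per hour" would be something else entirely.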

  11. hnwombat
    Coat

    "We will have a true AI in 50 years" is a time-invariant statement. And yes, I've done AI stuff as well, and am now a social scientist, so I mix the two freely. We are at least five orders of magnitude from having the same computing power density as the human brain.

    Actually, arguably, we are infinite orders of magnitude away from the same power. As the previous poster pointed out, the human brain is actually an exceedingly complex analog computer; thus it has infinite possible states, and since our computing power is not currently infinite, QED.

    Also, the brain is not deterministic. So not only is it not digital, it's also a non-deterministic analog computer. It's going to be a long, long, long time before we get anywhere within shouting distance of its actual power. Unless P=NP, which is seeming less and less likely (and it never was very likely) as time goes on.

    1. Richard 12 Silver badge

      Nonsense

      The brain is not infinitely complex, and it must be possible to replicate, because unskilled labour is able to do so and there are over 7 billion examples of this within a few thousand km.

      A better argument is whether it is possible for a human brain to understand a human brain. That may well be impossible.

    2. Dominic Connor, Quant Headhunter who wrote this article, really, honest

      What is "deterministic", who cares anyway

      It's a trivial result in CompSci that the set of outcomes you can achieve with non-deterministic computers is exactly the same as with deterministic ones; Monte Carlo and other ND techniques are merely easier to program.
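
      To make that concrete, here is a deterministic Python sketch that simply enumerates every branch a non-deterministic machine would "guess" in one step (subset-sum is my choice of example, not anything from the article):

        from itertools import product

        def subset_sum_nd(numbers, target):
            """A non-deterministic machine would 'guess' which numbers to include.
            Deterministically, we just walk every possible guess in turn."""
            for choices in product([0, 1], repeat=len(numbers)):  # 2**n branches
                picked = [n for n, keep in zip(numbers, choices) if keep]
                if sum(picked) == target:
                    return picked
            return None

        print(subset_sum_nd([3, 34, 4, 12, 5, 2], 9))  # [4, 5]

      Exponentially slower, of course, but the set of answers is identical - which is the point.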

      Also, of course, nothing has infinite states, not even the quantum vacuum 'nothing'. We live in a quantised universe; the numbers are large, of course, but not infinite. I've even written some really bad SF on the consequences of that.

      Even if "infinite" orders of magnitude meant anything, we're not that many from human complexity and even with just the faux monentum left after Moore's law we will hit that in the 2020s even without any theoretical breakthroughs.

      As others have shared, you do need some maths upgrades.

      Firstly "infinite orders of magnitude" is *exactly* the same as "inifite integers", "infinite prime numbers" or "infinite multiples of 42". It's call computability, a decent universities that's first year CompSci, even ar Reading "University" some students have heard of this.

      Second "watts per hour" is a perflectly reasonable physical unit. *FOR ACCELERATION* A computer that ran at 20 watts per hour would within a year be the same temperature of as the core of a nuclear reactor. At 24*365 = 8760 hours per year, * 20 = 175,200 watts. Assume you live for 80 years you'd be over 14 megawatts. You would literally go blind if you looked at any 80 year old within 100 metres, and if you had sex with an octogenarian you'd be getting fatal radiation as well as being cooked.

      If you look at the simplified section in Wikipedia https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law you will see that it quickly moves on to the topic of "Temperature of the Sun", which is apt, because that's about where you'd be, given a 0.2 square metre surface area for the brain.
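
      The arithmetic, for anyone who wants to check it (the 14 MW and 0.2 m² are the figures from my post above; $\sigma$ is the Stefan-Boltzmann constant):

        $\frac{14 \times 10^{6}\,\mathrm{W}}{0.2\,\mathrm{m^2}} = 7 \times 10^{7}\,\mathrm{W/m^2} = \sigma T^{4} \;\Rightarrow\; T = \left(\frac{7 \times 10^{7}}{5.67 \times 10^{-8}}\right)^{1/4} \approx 5900\,\mathrm{K}$

      which is, near enough, the surface temperature of the Sun (about 5800 K).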

      1. Naselus

        Re: What is "deterministic", who cares anyway

        "Firstly "infinite orders of magnitude" is *exactly* the same as "inifite integers", "infinite prime numbers" or "infinite multiples of 42"."

        Georg Cantor would disagree with you on that.

        1. Dominic Connor, Quant Headhunter who wrote this article, really, honest

          Re: What is "deterministic", who cares anyway

          Note I was talking about countable sets like integers, primes etc. As you say they aren't *that* big compared to the set of Reals (or the set of sets of Reals, etc).

          However, he was clearly talking of "infinite orders of magnitude", and multiplying by 10, however many times you do it, gets you merely a countably infinite set.

          1. hnwombat
            Facepalm

            Re: What is "deterministic", who cares anyway

            Okay, fair point, I was a little sloppy in that. The set of orders of magnitude is countably infinite. I should have used multiples of aleph-1 instead.

            The point is, it's a phase change. The deterministic model (ALL modern computing, except possibly quantum computers, and I remain to be convinced that they are truly non-deterministic) simply cannot be used to model the brain. It's like trying to apply Newton's laws of motion when you're in a relativistic frame. The model is inadequate and fails.

        2. hnwombat
          Stop

          Re: What is "deterministic", who cares anyway

          Beat me to it, and *exactly* my point. Gödel has a few pithy things to say about it, too. As, in fact, does Plato, albeit in a different context; see _Phædo_, in which (among other things) he points out that language is an insufficient construct to truly convey thoughts.

          Along those lines, to the person who accused me of failing first-year comp sci-- I was not making the full logical argument in CS terms. Because, frankly, it can't be made (again, see Gödel), at least without fully including Gödel's proof*S* (yes, there was more than one, and in some ways the subsequent proofs were more important). If you're only aware of the incompleteness theorem (as I suspect you are, and probably only in the _Gödel, Escher, Bach_ form, which is itself incomplete), I suggest a few remedial logic classes. Finally, if you do not understand the truly gigantic implications of the difference between deterministic and non-deterministic Turing machines, you are the one that needs some remedial CS. Hint: it is not merely computability. That is merely *one* of the implications, and one of the least interesting.

          Now, the obligatory _ad hominem_ out of the way, the point is that analog computers are in fact infinitely better than digital, simply because of Cantor's proof. That there is a discrete quantum underlay might one day become an issue, but I suspect it would be overwhelmed at that point by the probabilistic (analog!) nature of the choice of those discrete states.

          Further, don't confuse the map with the territory. Quantum mechanics is a *model*, it is not the *reality*. For instance, that it currently requires us to treat EM radiation (e.g. light) as both a wave and a particle means that our map is not sufficiently accurate-- the radiation is neither a wave nor a particle, both of which are models, but something else that we don't actually understand very well. We can use the "particle" model or the "wave" model at certain times to predict certain behaviors, but that does not actually mean that light is changing between being a particle and being a wave. It's always light. Someday we might have a more accurate model of something that seems to behave as both models (and, in fact, maybe we do; I'm not a theoretical physicist, though I play one on the Internet), but for the time being switching maps when appropriate is sufficient.

          It's kind of like relativity versus Newton's laws. Newton's laws are sufficient for predicting how your car behaves. They're not for predicting how a neutrino behaves. You choose the map that fits the resolution you're using. Even relativity is only a model-- at some point, *it* will almost certainly prove not to have enough fidelity, and need to be modified to better model the strange, strange thing we call reality.

          How does this get back to AI? Well, we have a model of how the brain works. It's nice and all, and can do some fairly amazing things. But it's still only a model. Neuroscientists are still trying to figure out some of the grosser ways that the brain works; we're a long way from understanding it at more fundamental levels. The complexity is staggering; hundreds or thousands of different chemicals interacting, modified by a non-deterministic network of interconnections, all operating in an analog fashion. Hell, we don't even have a Newton's-laws-of-motion level of model fidelity for the brain, much less a relativistic one. How can we hope to replace the human brain when we are not even in the stone age of modeling it - we're probably still some small animal scurrying around under the feet of T-Rex, trying not to get squashed, in our understanding and modeling of the human brain. Hell, probably even of the *flatworm* brain.

          So, no, we're nowhere near true machine intelligence, much less machine consciousness. And probably won't be for hundreds of years.

  12. HieronymusBloggs

    "my dad invited expert systems".

    ...but they were late to the party.

    1. Peter2 Silver badge

      Re: "my dad invited expert systems".

      The thing is that processing power is a red herring. As the author of this article said, they once thought that you'd need a hundred megahertz worth of performance to create an AI.

      We can now buy processors running at three gigahertz (with eight separate cores) for a few hundred quid, and beyond that you can hire absurdly massive amounts of processing resource via Amazon Web Services, beyond the wildest dreams of somebody using computers 20 years ago - which, lest we forget, was when the 233 MHz Pentium MMX was newly announced (and probably unavailable).

      Simply, sheer processing power is not the problem. The problem is that nobody has *any* more idea how to go about programming a general purpose AI than they had 20 years ago.

      Sure, we can produce little modules that can solve simple, defined problems like playing chess, or do statistical work like heuristic analysis or Bayesian inference, which works very well for the specific problems they are set, but that is not (IMO) an actual AI as defined by anybody other than some marketing wonk somewhere. They aren't self-aware or capable of setting themselves problems to solve, and we are no closer to producing a real AI today than we were 20 or 30 years ago. I see no reason this won't still be the case in 20 or 30 years' time.

      Which is fortunate, or unfortunate, since Stephen Hawking is probably right that a truly intelligent AI could/would be dangerous - at least to anybody who can't figure out how to go and trip the circuit breakers in their house/office. Being in a rogue-AI-controlled, keyless car could be more exciting, if the fusebox stays under the bonnet.
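
      To be concrete about the "statistical modules" point: the sort of thing that does work is a few lines of Bayes' rule, as in this toy spam-flag sketch of my own (made-up numbers, nobody's product), and calling it intelligence is pure marketing:

        # Toy Bayesian inference: P(spam | word) from made-up figures.
        # Useful, narrow, and not remotely self-aware.
        prior_spam = 0.4                 # P(spam)
        p_word_given_spam = 0.7          # P("winner" appears | spam)
        p_word_given_ham = 0.05          # P("winner" appears | not spam)

        evidence = (p_word_given_spam * prior_spam
                    + p_word_given_ham * (1 - prior_spam))
        posterior = p_word_given_spam * prior_spam / evidence

        print(f"P(spam | 'winner') = {posterior:.2f}")  # ~0.90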

  13. polymathtom
    Thumb Up

    Yes.

    A walk down memory lane. The more things change, the more they stay the same. I suspect that those who have seen it in person will nod, while the younger crowd will not realize how it all goes until they are the older crowd.

  14. allthecoolshortnamesweretaken

    A very informative and enjoyable read, thanks!

  15. Seajay#

    "I learned that social scientists have known this for decades and have experimentally shown that people will claim a factor was important in their decision despite only being told it after they’d made the choice."

    This is included in the article as a cause for pessimism about AI, because it makes it difficult to design a model for the AI. But actually, given the current trend of not providing a model and just chucking more data at the problem, that's irrelevant. I actually see this as a cause for optimism. Despite running some very shoddy software, humans manage to make acceptably good decisions. Therefore the bar for how good our AI software needs to be in order to be useful is lower than we thought it was.

  16. Kevin McMurtrie Silver badge

    I'm trying to return my defective duck

    - Hi, this is John of AI Other Duckers. How may I provide excellent service?

    - You sent me a defective duck. It's mute.

    - Does it look like a duck and quack like a duck?

    - No, it's a mute duck.

    - I don't think it's a duck.

    - !%$, it's a duck. It's a mute duck and it looks unhappy.

    - Are you sure this is the animal we sent you? I'll need to hear it quack to continue with service.
