What's up with AI lately? Let's start with soaring costs, public anger, regulations...

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) has issued its seventh annual AI Index Report, which reports a thriving industry facing growing costs, regulations, and public concern. The 502-page report [PDF] comes from academia and industry – the HAI steering committee is helmed by Anthropic co- …

  1. Anonymous Coward
    Anonymous Coward

    Power is Priceless to the Men Who Already Have Everything

How to obtain genuine and informed consent for training data collection, by the AI Consigliere: "I'm gonna make you an offer you can't refuse".

The AI consigliere, or chief advisor, is the boss's right-hand man. The AI consigliere is not officially part of the hierarchy of the Mafia, but he plays one of the most important roles in a crime family. He is the close, trusted friend and confidant of the family boss. The function of the AI consigliere is a throwback to medieval times, when a monarch placed his trust in an advisor whom he could summon for strategic information and sound advice. The AI consigliere is meant to offer unbiased information based on what he sees as best for the family ... Generally, only the boss and underboss have more authority than the AI consigliere in an organized crime family.

  2. Neil Barnes Silver badge

    It's only economically sensible to replace human labor with AI...

    Um, almost never?

    Remember, the people you are replacing with AI are the people whose income you need to buy your products. If there aren't jobs for them to do, you just automated your company into the ground.

Or have the proponents of AI everywhere envisaged a sudden transition to Iain Banks' Culture?

    1. jospanner

      Re: It's only economically sensible to replace human labor with AI...

tbh we did say this about offshoring manufacturing and resource extraction jobs

    2. 0laf Silver badge

      Re: It's only economically sensible to replace human labor with AI...

      That kind of long term thinking won't get you anywhere in this company!

      We need to keep the shareholders happy this quarter.

      1. ITMA Silver badge

        Re: It's only economically sensible to replace human labor with AI...

        And there, in a nutshell, is everything which is wrong with today's corporate culture - especially at the top.

All that matters is this quarter and churning out results for it - no matter how fake - which will inflate the share price and thus the bonuses of those at the top.

    3. I ain't Spartacus Gold badge

      Re: It's only economically sensible to replace human labor with AI...

      Remember, the people you are replacing with AI are the people whose income you need to buy your products. If there aren't jobs for them to do, you just automated your company into the ground.

That is just an argument against innovation. Which has pretty much been disproved over the last couple of hundred years.

      Unless you're talking a utopia, like Star Trek or the Culture, and the end of scarcity - where all jobs and money are replaced by machines. Although in that case, presumably nobody needs jobs anyway.

But if, say, LLMs become genuinely useful for helping lawyers write contracts and legal opinions - that doesn't mean the end of lawyers. But it maybe means we need fewer of them. Take the UK: a quick search says we've got about 210,000 of them currently working. So let's say LLMs reduce that by a third - that means we've got to find jobs for 70,000 people.

      Checks ONS stats for Jan 2024:

There were 30.4 million people in payrolled jobs at the beginning of January (that doesn't include the self-employed)

      There were 932,000 job vacancies unfilled

      So that number of people could be found other jobs just in the existing economy.
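The back-of-envelope arithmetic above can be sanity-checked in a few lines (figures are those quoted in the comment - roughly 210,000 UK lawyers, 30.4 million payrolled jobs and 932,000 unfilled vacancies per the ONS for January 2024; the one-third reduction is the commenter's hypothetical):

```python
# Sanity check of the jobs arithmetic: all figures as quoted in the comment,
# and the one-third cut to the legal profession is purely hypothetical.
uk_lawyers = 210_000
payrolled_jobs = 30_400_000
unfilled_vacancies = 932_000

displaced = uk_lawyers // 3  # lawyers hypothetically displaced by LLMs

print(f"Displaced lawyers: {displaced:,}")                                   # 70,000
print(f"Share of payrolled jobs: {displaced / payrolled_jobs:.2%}")          # 0.23%
print(f"Vacancies per displaced worker: {unfilled_vacancies / displaced:.1f}")  # 13.3
```

On these numbers the displaced third is about 0.2% of payrolled employment, with roughly thirteen unfilled vacancies per displaced worker - which is the commenter's point that the existing labour market could absorb them.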

Also, a lot of the ones that go would be the ones doing the least value-added stuff, i.e. the ones who write the boilerplate bits of contracts that are presumably being replaced - while the ones who draw up the requirements for those contracts are the ones who'll be setting the prompts for the LLMs and checking the results. And maybe the legal industry won't sack them all anyway; maybe it'll redeploy some of them to speeding up conveyancing, so people can move house in less time, increasing the efficiency of the housing market. Dream on!

But lawyering will now have improved its productivity; it'll be able to do more with fewer inputs - which, so long as those people move to jobs about as productive as the ones they were doing before, means we've increased the overall productivity of the economy.

      We're still running a large immigration rate, and finding jobs for people, so we know that there's a demand for workers.

The next question is: are these LLMs going to be so overwhelmingly effective that they transform so many industries at once that the labour market can't cope with all the people being replaced? If the answer is yes, then there's a problem. If not, if it happens gradually, then all that should happen is we all get slightly richer, as we produce more output for the same resources. There might be big effects on some industries, e.g. some of the graphics I've seen from the models are easily good enough for marketing materials and a lot of commercial art. Which is pretty hard on commercial artists, who're already not that well paid.

      But in general our economy has worked for hundreds of years like this. New tech comes along, some industries get disrupted and shrink their workforce, others expand, the overall economy keeps growing. The problem is when it happens faster than we can retrain people - or disproportionately hits one location or one industry.

I'm highly sceptical of current "AI". For the next few years it's likely to require as much work prompting and correcting it as it saves. With the potential to become a useful tool to automate parts of people's jobs. Leaving those companies the choice to either keep their headcount and use their newfound efficiency to try to expand, or to slightly reduce headcount and carry on doing the same. But the economy is still creating all sorts of jobs. Personally I see the problem much more as how to help people cope with change, and match our education to our needs better - than that some new tech is going to fundamentally change how economics works and bankrupt us all.

  3. Groo The Wanderer Silver badge

    I disagree with the statement "AI training is here to stay."

There is absolutely no reason that OpenAI et al. can't be ordered by the US and other governments of the world to cease any and all operations built on the data they collected illegally, by abusing website terms that were only intended to ensure the site in question had the authorization needed to process the user's data. Those permissions granted by the public were certainly not intended to grant these vulture corporations the right to rape users' data for their training sets.

    1. Anonymous Coward
      Anonymous Coward

      But they won't ever, coz money

  4. Howard Sway Silver badge

    They're suffering from the "If we build it, they will come" fallacy

    Lots of money invested, lots of marketing, but not yet reached the "where are the tens of millions of paying customers who were going to make this profitable?" stage.

Same hype and disappointment cycle, littered with examples they could have learned from: smart fridges, metaverses... just to give a few. The tell this time around the merry-go-round is that there are lots of models but very few applications other than chatbots. And their usefulness in doing more than being a shortcut to generating text that's available somewhere else anyway (because that's what they're trained on) is far from proven.

    1. Anonymous Coward
      Anonymous Coward

      Re: They're suffering from the "If we build it, they will come" fallacy

The places where AI/ML is actually useful right now don't really make the headlines, as they're quite boring to the majority. These tools are good at predicting hardware failures in expensive industrial systems like electrical substations, alerting to erosion in infrastructure due to increased rainfall, or giving early warnings of buildings needing maintenance.

      Quite niche, not controversial, but quite useful and doing a decent trade.

    2. I ain't Spartacus Gold badge

      Re: They're suffering from the "If we build it, they will come" fallacy

      Howard Sway,

      How could you say such a thing? My internet fridge is a marvel!

      So far it's watched 6GB of porn for me, all while managing my meat and two veg and keeping my juice chilled.

  5. HuBo

    Door number 2

I think we're getting closer to future #2: "adoption of AI is constrained by the limitations of the technology". Peak LLM should not be far off, as training over ever larger datasets yields neither AGI nor superintelligence (it only burns more resources instead). Similarly, text-to-image/video/music genAI loses its steam beyond the low-hanging fruit, with nowhere near the creativity found in human artists, and no way to get there under current algorithmic conceptualizations and architectures. The over-hyped potential utility of stochastic producers of approximate comprehensions, compressed through a horror-backpropagating plague, is waning, with adoption to be consequently strained.

    In all actuality, the apparent success of very very large models is eventually found to come from them "rote learning" (and recalling) their entire training inputs verbatim (non-lossy database-style) ... as demonstrated by the New York Times, among others.

    But I could be wrong of course!

    1. Groo The Wanderer Silver badge

      Re: Door number 2

      You could be wrong, but you're not.

So-called "AI" is only a statistical averaging of responses over large data sets. Aside from questions of data set quality and legality, there is the fundamental issue of just how much "average"-quality output we want to produce - averaged over data sets heavily polluted with opinion, misunderstanding, and bad advice that statistics cannot distinguish from good advice when you're talking about edge cases with only a few answers to rarely posed questions.

      But those edge cases trip up the best of human programmers, and the statistical average is to fail, not succeed.

So even if these systems achieve the best their data sets can generate, there are a lot of cases where the best possible answer they can produce is wrong - and even dangerous to use.
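The edge-case failure mode described above can be sketched with a toy example (entirely made-up numbers): a model that reproduces the statistical mode of its training data nails a common question answered correctly most of the time, but fails a rarely asked question whose few training examples are mostly wrong.

```python
from collections import Counter

# Toy illustration with hypothetical data: answer counts per question as
# they might appear in a scraped training set. The "edge case" question is
# rarely asked, and most of its recorded answers are bad advice.
training_answers = {
    "common question": ["right"] * 950 + ["wrong"] * 50,
    "edge case":       ["right"] * 2   + ["wrong"] * 5,
}

for question, answers in training_answers.items():
    # A purely statistical "model" outputs the most common (modal) answer.
    mode, count = Counter(answers).most_common(1)[0]
    print(f"{question}: modal answer = {mode!r} ({count}/{len(answers)})")
```

Here the majority answer is "right" for the common question but "wrong" for the edge case - the statistical average fails exactly where the data is sparse and polluted, which is the commenter's point.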
