How DARPA wants to rethink the fundamentals of AI to include trust

Would you trust your life to an artificial intelligence? The current state of AI is impressive, but seeing it as bordering on generally intelligent is an overstatement. If you want to get a handle on how well the AI boom is going, just answer this question: Do you trust AI? Google's Bard and Microsoft's ChatGPT-powered Bing …

  1. heyrick Silver badge

    Given that what is called "AI" is trained on vast amounts of information in order to let it make its own conclusions, is it possible to fully understand how it does what it does? The basic algorithms, certainly, but how about the inferences that it makes?

    I would not be inclined to trust (a) an opaque process, and (b) anything that was trained on the best of the internet (per another story today, they're being trained on 4chan!).

    My worry is that by assigning some notion of "trust" (note the scare quotes), it can be used by people who should know better to claim the machine doesn't make mistakes.

    1. jmch Silver badge

      "what is called "AI" is trained on vast amounts of information in order to let it make its own conclusions"

      Re vast amounts of information... ChatGPT was trained on 570 GB of text data.

      A human child at 1000 days (about 3 years) can walk, understand and talk at a basic level, and comprehend abstract ideas like colours and shapes. You might think that 1000 days (12,000 hours if half the time is spent awake) is not a lot of training data, but then consider that the human body has millions of sensory nerve endings acting as training-set inputs to a training machine with tens of billions of connections. 10,000,000 sensory inputs each sampled once per second over 12,000 hours would give about 4.3 × 10¹⁴ bits (432,000 gigabits, or roughly 54,000 GB) even if each sample were merely a 1/0 digital value. In reality each nerve can give a range of sensory inputs, at 'timeslices' far closer together than one per second, and I'm pretty sure I've underestimated the number of sensory inputs as well.
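      That back-of-envelope figure is easy to check (a rough sketch; the nerve-ending count and the one-sample-per-second rate are the assumptions above, and note the total comes to about 432,000 gigabits, or roughly 54,000 GB once converted to bytes):

```python
# Rough check of the estimate above: one 1/0 sample per sensory
# input per second, over a childhood's waking hours.
inputs = 10_000_000          # assumed number of sensory nerve endings
waking_hours = 12_000        # half of 1000 days spent awake
seconds = waking_hours * 3600
bits = inputs * seconds      # total 1-bit samples
gigabits = bits / 1e9
gigabytes = bits / 8 / 1e9
print(f"{gigabits:,.0f} gigabits, i.e. about {gigabytes:,.0f} GB")
```

      Either way, it dwarfs the 570 GB of text mentioned above.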

      Our brains (amygdalas, actually) are fantastic at filtering out unwanted pieces of sensory input and focusing on the relevant ones (sometimes excessively so - see invisible gorilla effect).

      1. heyrick Silver badge

        Our brains are fantastic, period.

        Just a shame they use lossy compression, though.

    2. DS999 Silver badge

      I experienced an AI "surprise" recently

      Fortunately it was a good surprise, but I still struggle to figure it out.

      I was changing my iPhone's lock screen setup last weekend and decided to choose a new background. I wasn't really sure what I wanted, so I looked through the available options, and of course there was an option for using a photo. Out of the several thousand photos on my phone, it chose one of the few truly artistic photos I've taken: a rainbow in the distance over the Nebraska Sand Hills at a remote golf course in the region. It was such a good photo, and had been cropped so neatly (I took it in landscape mode but it had been cropped a lot on each side to fit the portrait screen layout), that at first I believed it was something that came with iOS - like Apple's equivalent of Microsoft's green hills and sky from the Windows XP days. But when I selected it to look at more closely, the scene looked a bit too familiar, so I looked back in my photo roll to six years ago when I took it and was shocked to realize it had chosen one of my photos!

      This is a photo that as best as I can remember I never posted anywhere nor sent to anyone. If I did it was once, and I have plenty of photos I've been a lot more active with. It wasn't recent, and I haven't looked at it for ages. I take terrible pictures as a rule, but this is one that while it might not win a photo contest it would certainly fit right in if I entered it in one.

      So how the hell did it pick that photo? Was it just random coincidence? The odds would be several thousand to one if it picked at random, but I guess those aren't overwhelming odds. I suppose the iPhone AI was choosing photos that look like landscapes? I only have a few hundred such photos, so those are better odds. Maybe it picked it because of the rainbow? I think I have maybe a dozen of those, so better still... There are probably reasonable explanations, but it really shocked me when I realized what it was! It even chose to crop more on one side than the other (despite leaving where the rainbow meets the ground off center) to avoid some clutter in the area where I stood when I took the picture. Was that random too, or does it know that cropping out the edge of the limestone wall, leaving it all "landscape", would improve it?

    3. Citizen of Nowhere

      >trained on vast amounts of information in order to let it make its own conclusions

      LLMs don't really reach conclusions in the sense we use when talking about human thinking. The data is used to train them to produce a stream of text that is likely to be an appropriate (for varying values of "appropriate") response to the prompt fed in.
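      That "stream of text" framing can be sketched in a few lines: the model repeatedly turns its context into scores over candidate next tokens and samples one. A toy illustration with made-up scores (not any real model's API; a real LLM derives the scores from billions of learned weights):

```python
import math, random

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    m = max(scores.values())
    exp = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exp.values())
    return {tok: e / total for tok, e in exp.items()}

def next_token_probs(context):
    """Stand-in for a trained model: in a real LLM these scores
    come from applying billions of weights to the context."""
    scores = {"blue": 2.0, "grey": 0.5, "falling": -1.0}
    return softmax(scores)

random.seed(0)
probs = next_token_probs("The sky is")
# Sample a continuation in proportion to its probability:
# "likely to be appropriate", not "concluded to be true".
choice = random.choices(list(probs), weights=list(probs.values()))[0]
```

      Nothing in that loop checks the continuation against the world, which is the point being made above.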

      1. Michael Wojcik Silver badge

        We don't have firm evidence that humans reach conclusions through a significantly different process.

        While I'm not particularly impressed with transformer LLMs (particularly unidirectional undifferentiated ones like the GPTs), I'm also rather weary of dualist handwaving assertions that human cognition is necessarily somehow a different category.

    4. Michael Wojcik Silver badge

      "Fully understand" is the wrong bar, because that ship sailed long ago. We already have all sorts of control systems that we don't fully understand. We don't fully understand ourselves.

      What we need for large information-presenting and agentic systems (whether you want to call them "AI" or not; personally I don't think that's been a useful term since at least 1980 or so) are qualities like interpretability, explicability, traceability, corrigibility, and predictability within bounds. There's a great deal of research being done in these areas, but how much progress is being made is questionable.

  2. Omnipresent Bronze badge

    I don't even trust this phone.

    Pretty sure it's F'ing with me just for fun at this point!

  3. Howard Sway Silver badge

    Operating competently, Interacting appropriately, Behaving ethically and morally

    Yes, just how ideal humans are supposed to behave. And vast numbers of real humans don't behave that way. Let's face it, the morals of these systems are going to reflect the morals of the people who operate them, which generally stretch no further than wanting to have as much money and power as possible. So don't waste your time hoping for AIJesus.

    1. Ken Hagan Gold badge

      Re: Operating competently, Interacting appropriately, Behaving ethically and morally

      True, and relevant, but let's not hold AI to a higher standard than NI. I wouldn't trust most humans to build a bridge, but I don't deny their intelligence.

      DARPA's criteria are for a useful AI, not just an AI, and it is fair to point out that all the hype this year has been about AI that is demonstrably not useful. 'Tis a pity that this point is not more widely appreciated in the media.

      1. Doctor Syntax Silver badge

        Re: Operating competently, Interacting appropriately, Behaving ethically and morally

        "demonstrably not useful"

        For some values of useful. Some people are clearly using it. What value they get is debatable.

      2. Michael Wojcik Silver badge

        Re: Operating competently, Interacting appropriately, Behaving ethically and morally

        let's not hold AI to a higher standard than NI

        On the contrary, I think that's exactly what we must do, if GAI is actually achievable (and I don't see any evidence that it isn't).

  4. b0llchit Silver badge

    Pipe dream ethical AI

    The current state of "AI" is that we have successfully created an automated disinformation engine. Sure, it will also spill out true stuff, but the success of disinformation is to intersperse falsehoods with truth in creative ways. That is what the current engines do very well.

    Then the article states:

    Behaving ethically and morally...

    We humans can't even behave ethically and morally. How would you "teach" an AI ethics and morals when humanity does not agree on a universal set? Well, yes, you could relate it to the UN charter, but then why do we accept the wars, suppression and bickering among the states and peoples of the world? That says a lot about the state of humanity and the chances of ensuring ethically and morally behaving AI. There is no guaranteed reason for any party to respect others' ethics or morals.

    Therefore, AI behaving ethically and morally is a pipe dream: a misguided discussion of false realities built on a utopian vision.

    1. Persona Silver badge

      Re: Pipe dream ethical AI

      ethically and morally is a pipe dream

      They are not human constants either. What some people consider ethical and moral is very different to how others would judge them. It also varies with time. What is judged ethical and moral now is not how it was judged in the past or will be judged in the future.

  5. Doctor Syntax Silver badge

    I'm not even sure about the bridge metaphor. There are occasional road closures to repair bridges built at a time when bridge design was supposedly already mature.

  6. Filippo Silver badge

    I don't think those objectives are feasible with current tech.

    Hallucinations are an intrinsic property of how LLMs work. The same goes for the inability to reliably explain why a particular input produced a particular output, and likewise for the inability to remove specific information from a trained model (something European regulators are wrestling with).

    From what I understand of the underlying theory, none of those problems are truly solvable. We also do not currently have a theory of how to make an "AI" that doesn't feature those problems. They might be mitigated to some degree, but I suspect it won't be enough.

    All of that said, if anyone wants to give it a shot, I wish them well.

    1. Michael Wojcik Silver badge

      Unidirectional transformer-stack LLMs are not the entirety of "current tech".

  7. Pascal Monett Silver badge

    "The current state of AI is impressive"

    You are easily impressed.

    What is currently, abusively, called "AI" is nothing but a statistical inference machine. It is only as good as the statistical understanding of whoever programmed it.

    It has nothing to do with AI, can be easily confused with brightly colored clothes, and couldn't tell you what a mammal is if its electricity depended on it.

    And the statistics expert can't even prove why it produced its conclusions.

    So it's the closest thing we have to a vastly overrated Magic 8 Ball.

    1. chivo243 Silver badge

      Re: "The current state of AI is impressive"

      So it's the closest thing we have to a vastly overrated Magic 8 Ball. Plus 1 for that!

      Should we then call the computer running AI a ouijamotherboard? We all know that Ouija Boards are controlled by those little fingers guiding the thingamobob...

    2. LionelB Silver badge

      Re: "The current state of AI is impressive"

      > What is currently abusively called "AI" ...

      That bird has long flown; the term "AI" has de facto become synonymous with machine learning.

      Railing against AI also raises the question of what "artificial intelligence" is actually supposed to mean, and even if we'd recognise it if we saw it. If it means simply "like human intelligence" (and it appears that to many it does), then we've a very, very long wait indeed, insofar as (a) we are light-years away from understanding the design and functional principles behind human intelligence, and -- obviously not unconnected -- (b) human intelligence benefits from billions of years of evolutionary "design", sophisticated sensory apparatus, lifetimes (and, via human culture, beyond) of learning on real-world data, and processing power vastly greater and more time and energy efficient than the largest cloud-based/superdupercomputer technology we can currently muster. So don't hold your breath for that one.

      > ... is nothing but a statistical inference machine.

      Well, I wouldn't knock that one so glibly: there are credible (and in some scenarios even testable) theories gaining ground which posit that biological cognition, behaviour and intelligence, including the human kind, may in fact be construed as statistical inference writ (very) large. Look up predictive processing/coding, for example.

      > And the statistics expert can't even prove why it produced its conclusions.

      He, he. I certainly cannot, in general or with any great confidence, prove why I reach my conclusions - maybe that's setting the bar a bit high ;-)

    3. Michael Wojcik Silver badge

      Re: "The current state of AI is impressive"

      While transformer LLMs are not "a statistical inference machine" in any useful sense of that phrase, I'll agree they're not terribly impressive either. The big transformer LLMs are performing pretty much as I would have expected since "Attention is All You Need" was published. They're doing context continuation based on a transformer stack trained on a big corpus. What's to be impressed by?
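      For readers who haven't met it, the attention operation that paper is named for can be sketched in plain Python. This is a toy single-query version with made-up vectors, not production code:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector --
    the core operation repeated throughout a transformer stack."""
    d = len(query)
    # Similarity between the query and each key, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Softmax the scores into attention weights.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Output is the attention-weighted mix of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# A query aligned with the first key draws most of its output
# from the first value vector.
out = attention([1.0, 0.0],
                [[1.0, 0.0], [0.0, 1.0]],
                [[10.0, 0.0], [0.0, 10.0]])
```

      Stack many of these (plus learned projections and feed-forward layers), train on a big corpus, and you get exactly the context-continuation machine described above.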

  8. xyz Silver badge

    They're not AIs....

    They're very naughty LLMs.

  9. Claptrap314 Silver badge


    Trusting your life to the software running, for instance, on any Tesla within a hundred yards of you on the freeway.

    This is not acceptable.

  10. VonGell

    I offered to create AI for DARPA in 2003-4, but they didn't want it...

  11. Duncan10101

    Interesting lecture

    Many moons ago (when I was but a whippersnapper of a student) I was in an AI lecture. We were told that it was very difficult to work on the problem, because we don't have a definition of intelligence.

    However, a definition was given.

    At the time, we all laughed. But thinking about it now (in the world of test-driven development and behaviour-based testing), and considering we don't seem to have ANY standard of testing for these things whatsoever, perhaps it wasn't so crazy after all. It was this:

    "Intelligence is the ability to pass intelligence tests."
