EU and Canada on waiting list for Google's AI chatbot Bard

Google's AI-powered internet search chatbot, Bard, isn't available for netizens in the EU and Canada. CEO Sundar Pichai announced Bard had expanded its coverage to 180 countries at this year's Google I/O conference, and it currently supports three languages: English, Korean, and Japanese. But the European Union and Canada are …

  1. Neil Barnes Silver badge

    depressing advice is simply 'never trust your eyes or ears ever again.'

    Quite. But what is the actual use case for these kinds of things? There's a terrible smell of "hmm, I wonder what happens if..." about them.

    Obviously the big boys think they're worth spending billions on, but what for?

    1. Yet Another Anonymous coward Silver badge

      Re: depressing advice is simply 'never trust your eyes or ears ever again.'


      Imagine el'reg, but instead of a handful of drunken scribes scrawling a couple of stories a day (sorry, ed) you have an AI writing unlimited purple prose directly targeted at your specific interests and opinions.

      Obviously that would be shite - but imagine the same thing for people on Facebook or Twitter or Instagram.

      If you want VR / Metaverse to take off (for some reason) you can't have an '80s videogame level of polygon graphics; you need beautiful, unique, realistic scene design - but for free.

  2. WorkShyEU

    The purpose of Bard et al. is to drill you for information, not the other way around.

  3. druck Silver badge

    time diff

    "For example, we loaded the entire text of The Great Gatsby into Claude-Instant (72K tokens) and modified one line to say Mr Carraway was 'a software engineer that works on machine learning tooling at Anthropic.' When we asked the model to spot what was different, it responded with the correct answer in 22 seconds."

    And gnu diff can do it in how many microseconds?
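    For comparison, the same one-line-changed test can be done with an ordinary textual diff, no model inference required. A minimal sketch using Python's difflib (the two text snippets here are hypothetical stand-ins for the original and modified novel):

    ```python
    import difflib

    # Hypothetical stand-ins for the original and the altered text,
    # split into lines; only the middle line differs.
    original = ["line one", "Mr Carraway was a bond salesman", "line three"]
    modified = ["line one", "Mr Carraway was a software engineer", "line three"]

    # unified_diff pinpoints the changed line essentially instantly.
    diff = list(difflib.unified_diff(original, modified, lineterm=""))

    # Keep only added lines (skip the "+++" file header).
    changed = [l for l in diff if l.startswith("+") and not l.startswith("+++")]
    print(changed)  # → ['+Mr Carraway was a software engineer']
    ```

    On a full novel, gnu diff or difflib finds the edit in milliseconds rather than tens of seconds - which is the point of the comment above.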

    1. NoneSuch Silver badge

      Re: time diff

      "And gnu diff can do it in how many microseconds?"

      gnu diff does not update your NSA / Homeland Security file with what you inquired about, nor does it check whether you are on the FBI Most Wanted List or scan for keywords the other three-letter agencies may be interested in.

    2. Michael Wojcik Silver badge

      Re: time diff

      Yeah, it's not clear from the summary in the article what exactly the test was. I expect it was actually in effect "find the passage in the context window that doesn't match what you would expect for the next token", so it wasn't diffing Real TGG against Modified TGG; it was running Modified TGG against the entire model, which included somewhere in its parameter space a gradient matching Real TGG but not the actual text in a literal representation.

      Not a hugely interesting experiment, as far as I'm concerned. Exactly what I'd expect a really large model to be able to do. So what?

      Now, if the underlying model had been trained on a data set from which all copies, excerpts, and references to Real TGG had been removed, and it still caught the offending passage, that would be a slightly more interesting experiment. (It's feasible for a transformer LLM to do this, if there's enough similarity between the world of the novel and the world of the training set for most-probable completion to get a strong disagreement on the altered passage.) A better test would be to use a freshly-written unpublished novel, of course, so there's no possibility of data-set contamination. But even then, all you've confirmed is that the surface of parameter space contains a gradient that diverges sufficiently at the point where the out-of-place passage appears.

      And that's a big problem with LLMs. They converge on a middle ground of expectation. They seek to reduce surprise, which is another way of saying they reduce information entropy in the output. They're bland. They have no style. They have no conversation, as we used to say of uninteresting people. They regurgitate the most likely continuation, in a dull fashion. You can anneal them into slightly higher valleys with prompting, but the existing models and their architectures fundamentally lack the inconsistency of human discourse. And that's what makes us interesting.

  4. bo111

    Growing competition

    Bing Chat now seems to be fully available in Europe. I have tried it and I like it. I wonder how much it must cost Microsoft to run it at such scale.

    1. Dinanziame Silver badge

      Re: Growing competition

      At the moment, the estimates I've seen are that each query costs on the order of a cent. It's not much, but they'll still be losing money for a while until they manage to get it under the cost of a subscription.
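      The break-even arithmetic implied above can be sketched with hypothetical numbers (the cent-per-query figure is from the comment; the subscription price is an assumption for illustration):

      ```python
      # Hypothetical break-even sketch: ~$0.01 per query (per the estimate
      # above) against an assumed $20/month subscription price.
      cost_per_query = 0.01   # dollars, rough estimate
      subscription = 20.00    # dollars/month, assumed for illustration

      # Number of queries per month at which the subscription stops
      # covering the inference cost.
      break_even_queries = subscription / cost_per_query
      print(break_even_queries)  # → 2000.0
      ```

      So under these assumed numbers, a subscriber issuing more than a couple of thousand queries a month would be served at a loss.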
