In the battle between Microsoft and Google, LLM is the weapon too deadly to use

“Discoveries of which the people of the United States are not aware may affect the welfare of this nation in the near future.” That's not the opening line of the open letter by hundreds of industry luminaries last week warning of the "risk to humanity" of unchecked LLM AI/ML, but that of the Szilard Petition some 80 years ago, …

  1. Steve Button Silver badge

    What have you been smoking?

    Firstly, if you create a moratorium on this it just means that others will use that to get ahead.

    Secondly, you are looking at a rabbit and assuming that it has the potential to evolve into a bear.

    At some point in the coming decades some extra caution might be needed, but even then the first point stands. Any rogue nation could put together a project to bring this technology forward for nefarious purposes, and they surely will. It's not *that* expensive, and it will only get cheaper.

    Right now, by its own admission Chat-GPT is only about 1% of human intelligence.

    Unless it's just telling us that to put us off the scent?

    I, for one, would like to be the first... (fill in the blank)

    1. Anonymous Coward
      Anonymous Coward

      Re: you are looking at a rabbit and assuming that it has the potential to evolve into a bear.

      well, you might jest...

      youtu.be/tgj3nZWtOfA?t=36

      1. the reluctant commentard

        Re: you are looking at a rabbit and assuming that it has the potential to evolve into a bear.

        I had a pretty good idea of what that video was going to be before clicking on it and I'm glad to say I wasn't disappointed!

      2. Version 1.0 Silver badge
        Happy

        Re: you are looking at a rabbit and assuming that it has the potential to evolve into a bear.

        And if you go chasing rabbits, and you know you're going to fall. Tell 'em a hookah-smoking caterpillar, has given you the call ...

    2. Anonymous Coward
      Anonymous Coward

      Re: What have you been smoking?

      This article doesn't go far enough considering what I've already seen!!!

      I've read tremendously shocking things off of a Magic 8-Ball and considering it's right 50% of the time!!!!!

      A fortune teller made me read about a failed marriage off of her tarot cards and considering the marriage was hers and mine!!

      The right combination of adjectives and nouns in a Mad Lib book revealed the number of years I have left to live and considering the book was already 20 years old!!!

      And now AI !!!! Great :-(. You people need to start believing everything you read, that way you'll be prepared like me!

    3. katrinab Silver badge
      Megaphone

      Re: What have you been smoking?

      ChatGPT is exactly 0% of human intelligence. It may be able to spout some very convincing BS, but it does so in a very different way to humans.

    4. Schultz
      Boffin

      The article's arguments against AI ...

      are based on the fact that people might figure out how to do bad things with the knowledge provided by it? Well, you could buy physics and engineering books to figure out how to build a bomb ... no AI required. I guess that's how the North Koreans did it.

      From my playing around with ChatGPT, it seems that this AI provides an interface to make searchable information more accessible (i.e., it summarizes your search results in coherent sentences). It doesn't fundamentally change the available information or change the fact that technology can be used for good or bad purposes. Of course, there is the saying that sufficiently advanced science always looks like magic. I guess for the uneducated masses, the current AI is the magic engine that will allow you to build a nuclear bomb or perform other sufficiently complicated 'magic' tasks.

    5. 96percentchimp

      Re: What have you been smoking?

      The point is not that ChatGPT or any other LLM is smart, whether that's 1% or 110% of human intelligence. The point is that too many people think it's smart. In reality, LLMs give answers which flatter the question, they're trained on opaque data sets, they cannot cross-check their data or conclusions, and when challenged they double down, attacking their critics and creating fake data (links to fake scientific papers and news articles).

      That final mode of failure might also be the LLMs' legal Achilles' heel. If an LLM asserts that I wrote something which is blatantly wrong, it has accused me of lying. Those are grounds for defamation, and those behind the LLMs may be subject to punitive damages, particularly when they're used to create online content with minimal oversight.

  2. StrangerHereMyself Silver badge

    Futile

    The cries for pausing AI development will be futile. The US is too afraid that China might continue AI development regardless, potentially gaining a military advantage.

    So how this will all play out is anyone's guess, but it will not stop. I, personally, am more afraid that scientists will cease to understand what they've created after a while. There are already signs they're misjudging the capabilities of their own creations.

    There could also be much to be gained from these developments. I foresee people feeding the AI with all the known math and physics knowledge and asking it to contemplate a theory that unites quantum and General Relativity, for example, something that's beyond the mental capabilities of even the smartest humans at this time.

    1. RockBurner

      Re: Futile

      The problem is not necessarily 'using' these tools to do some thinking or creating for you: the problem is in blindly trusting the results these tools output, without any further cross-examination of the output against known truths.

      1. vtcodger Silver badge

        Re: Futile

        the problem is in blindly trusting the results ...

        That's certainly a possibility. But really the problem, if there is one, is likely more that we don't know what the problem(s) are. It might be a good idea to find out before widely deploying this stuff. Yes, that'll delay innovation. So what? It seems to me that the world suffers from way too much innovation (which is fun) and way too little good judgement and quality control (which aren't).

        Will it really do any harm to take a few years and find out a bit more about the true nature of this new toy before giving it to one and all to play with?

        1. Doctor Syntax Silver badge

          Re: Futile

          "It might be a good idea to find out before widely deploying this stuff."

          More likely the usual three step procedure will be followed:

          Use it blindly, find out the hard way what doesn't work, "it's sooo last year".

      2. amanfromMars 1 Silver badge

        For practically endless fun going nowhere fast fashionably quickly tilting at windmills

        the problem is in blindly trusting the results these tools output, without any further cross-examination of the output against known truths. ..... RockBurner

        A more comprehensive selection of problems to encounter and pontificate about in the super traditional human fashion would also include cross-examination of AI output against the more numerous unknown truths and known untruths being pimped to believer fans of fake news and politically adept incorrect and inept propaganda.

        That lets loose all those paper tigers with nothing but their desperate rage to share about the future not being in their gift to command and control, bilk and milk for their own satisfaction and pleasure to treasure.

        And you surely know all of that to be too true to realistically deny?

        1. Zack Mollusc

          Re: For practically endless fun going nowhere fast fashionably quickly tilting at windmills

          At last, someone with sense contributing to the discussion.

          1. Youngone

            Re: For practically endless fun going nowhere fast fashionably quickly tilting at windmills

            amanfromMars 1 is an AI come from the future to warn us.

            Unfortunately, the time travel involved scrambled his circuits and he's had to use the last 10 years trying to make himself comprehensible again. He's almost there and you should be worried.

        2. very angry man

          Re: For practically endless fun going nowhere fast fashionably quickly tilting at windmills

          just imagine the absolute power of an LLM trained only on "a man from mars comments"

          we would be f***ed just trying to get the syntax right

          1. Someone Else Silver badge

            Re: For practically endless fun going nowhere fast fashionably quickly tilting at windmills

            Nah, we'd just call it "new English" or some such market-speak, and carry on as if nothing happened.

    2. jmch Silver badge

      Re: Futile

      "asking it to contemplate a theory that unites quantum and General Relativity, for example, something that's beyond the mental capabilities of even the smartest humans at this time."

      And the answer will be 42

      1. that one in the corner Silver badge

        Re: Futile

        > contemplate a theory that unites quantum and General Relativity

        "No, I don't want an untestable String Theory, no Casual Dynamical Triangulation and definitely no smeggin' Loop Quantum Gravity!"

        "Ah, so you're a <waffle> man!"

        1. john bertelsen
          Pint

          Re: Futile

          Talkie-Toaster! He's BACK!!

          +1 for Red Dwarf reference.

    3. Groo The Wanderer Silver badge

      Re: Futile

      Agreed. This is as much a political game as it is a business game. What it isn't is a sensible game.

      1. StrangerHereMyself Silver badge

        Re: Futile

        Maybe you haven't noticed, but OpenAI is becoming more and more a closed shop. I believe that this is at least in part instigated by the U.S. Government, which doesn't want its political rival China to attain parity in the field. If OpenAI releases all its secrets to the world as open source, anyone could make an AGI. This is too valuable and dangerous to give away for free.

        Even if closing it off hinders or slows development it will be worth it, since AGI could easily be used against us.

        1. doublelayer Silver badge

          Re: Futile

          You haven't noticed a change. You've noticed what already was the plan. OpenAI is a company that wants to make money. They're not going to release their models for free as open source. They were never going to, U.S. government or not. They didn't do that with any of their previous flagship models either. OpenAI wants everyone to pay them to integrate ChatGPT into whatever workflows they can, and they'd like them all to do it right now before people realize that it's not as useful as they wanted it to be. I can't automate the boring parts of my job by having a bot write code, because the code won't work right and I'll spend longer fixing it than I would writing it.

    4. that one in the corner Silver badge

      Re: Futile

      > I foresee people feeding the AI with all the known math and physics knowledge and asking it to contemplate a theory that unites quantum and General Relativity,

      I hope when you say "the AI" you aren't referring to the LLMs that the article is talking about, as they aren't any good at simple logic and arithmetic problems, except by luck (as in, they had already seen that question and answer pairing in their training data and could repeat it back). Many places discuss the existing models failing, for example:

      https://ai.stackexchange.com/questions/38220/why-is-chatgpt-bad-at-math

      1. StrangerHereMyself Silver badge

        Re: Futile

        These LLMs aren't trained on math or physics at this time, but I'm sure that in time they will be.

        Some people have already claimed that these LLMs are displaying signs ("sparks") of general intelligence. I'm not knowledgeable enough to assess these claims but from what I've seen it appears plausible.

        1. that one in the corner Silver badge

          Re: Futile

          > These LLMs aren't trained on math or physics at this time

          They already *are* trained on lots of maths and physics and chemistry and all sorts of other papers (whatever can be scraped from the web) - which is why they can produce stuff that *looks* like academic text.

          LLMs - large LANGUAGE models - will happily spout the same sort of *language* used in maths and physics; that doesn't mean they understand a single syllable of it.

    5. MyffyW Silver badge

      Re: Futile

      If the AI is a large-language model, all that it would produce would be a re-hash of human ideas. It has no intelligence to make the sort of leaps of imagination that a real scientist would make, and questionable ability to design an experiment to test a hypothesis. Furthermore - based on experience to date - it would lack the deep and accurate referencing necessary to support any conclusions it did reach.

      The possibility of a computational singularity was imagined over half a century ago. My fear isn't of a Rise-of-the-Machines style Judgement Day, just the compounding shitification of human knowledge, from the hushed order of one of the great national libraries to the grey goo of something like Facebook 2.0.

    6. Binraider Silver badge

      Re: Futile

      Yep, this tech will continue to be developed regardless of paranoia. Something to label AI-generated material consistently would be useful (maybe a unicode tag?), though also trivial to bypass...
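
      (If anyone's curious: Unicode really does have an invisible "tag characters" block at U+E0000-U+E007F, originally meant for language tagging. A toy Python sketch of how such a label could ride along with text - and how trivially it's stripped, per my point above:)

        # Unicode "tag characters" (U+E0020-U+E007E) mirror printable ASCII
        # but render as nothing, so a label such as "AI-GENERATED" can be
        # appended invisibly to a piece of text.
        TAG_BASE = 0xE0000

        def embed_label(text, label="AI-GENERATED"):
            return text + "".join(chr(TAG_BASE + ord(ch)) for ch in label)

        def read_label(text):
            return "".join(chr(ord(ch) - TAG_BASE)
                           for ch in text if 0xE0020 <= ord(ch) <= 0xE007E)

        tagged = embed_label("This essay was definitely written by a human.")
        print(read_label(tagged))   # AI-GENERATED
        # ...and the trivial bypass: any ASCII round-trip strips the label.
        print(repr(read_label(tagged.encode("ascii", "ignore").decode())))  # ''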

      The importance of good referencing (as opposed to just consuming whatever is printed) perhaps needs extending far beyond academic circles. I'm counting the days until someone submits an AI-generated thesis or white paper to one of the tech journals.

    7. Someone Else Silver badge

      Re: Futile

      I, personally, am more afraid that scientists will cease to understand what they've created after a while.

      Too late...already there. Remember that the computer "scientists" driving this stuff, many of them having earned Ph.D.s, probably don't know what a computer register is. And they're expected to understand the physical or software incarnation of neural nets? Shirley, you jest!

      Just wait until some guy's ML model shows up on Stack Overflow...

      1. Someone Else Silver badge

        Re: Futile

        I suppose said computer "scientists" could query the AI they've created as to what a computer register is...but then how would they know if it is lying?

  3. amanfromMars 1 Silver badge

    Knock, Knock! Who’s there? The Future! Whose Future Where and When? Here and Now, Dummies

    Is it a poisoned chalice, Rupert Goodwins/El Regers, or blessed opportunity to be a leading reporter for situations to be published and shared virtually worldwide and universally which are not in any way hype comparing large language model AI to atomic technology, but responsibly advising all interested and interesting masses not to be unduly fearful of, unless worthy of such worrying attention, AI models/nodules/iterations/technologies exercising new fangled entangling NEUKlearer HyperRadioProACTive IT possibilities for media broadbandcasting of better than just good and great novel ideas?

    Technologies which do indeed necessarily and/or regrettably possess Almighty Arsenals of AWEsome Apocalyptic Weaponry ..... Neutron Bombe type ordnance designed to destroy people while preserving property.

  4. b0llchit Silver badge
    Facepalm

    Pandora's Box, again

    But there is no immediate desperate need to push LLMs out there without regulation, transparency, agreed safety frameworks and a megaton more caution all around.

    Pandora's box has been opened, and what escaped cannot be put back. There is no point in regulation if (practically) anyone can do this stuff on their home computer. Moderate-sized models are already viable on advanced rigs you can have at home. It is a matter of (a little) more time before the large models can be run at home too. Suddenly those graphics cards will go from mining crypto-coins to LLMs.

    You cannot control what people do on their home computer unless you forbid the home computer. Or are we suddenly going to raid homes that use more energy than the average? Well, we'll just add more solar panels in the garden then...

    1. vtcodger Silver badge

      Re: Pandora's Box, again

      Pandora's box has been opened ...

      Maybe. Maybe not. Maybe Pandora opened that box inside a currently pretty well sealed vault.

      As I understand it, today these AI technologies require truly massive computing resources. Resources that are currently only available to a handful of entities worldwide. There's no clear need to make the technologies immediately available to every computer and cell phone user on the planet via the internet.

      I think what critics are suggesting is not a permanent ban on the technologies, but a temporary hold on general availability and a well organized research program to find at least some of the problems they might hold. Perhaps they have a point. A mysterious black box that is making grinding noises has appeared on your doorstep. Is sticking your hand into each and every orifice the optimum way to determine its capabilities?

      Of course, in a decade or three, computers capable of doing AI on their own may well be commonplace. So a permanent ban likely wouldn't work. But temporary limits on public access might well provide a bit of time to study the technology.

      1. sten2012

        Re: Pandora's Box, again

        May be worth checking out ColossalAI. Some incredible efficiency gains have been had on models that were cutting edge not that long ago. You might be surprised what's doable on commodity hardware vs a few years back.

        Please don't see this as me jumping on the FUD train though. Personally I think the tech is cool, and find the idea that all science and development should stop because "it seems a bit human now" a bit pathetic, to be honest...

        Only worry is the actual impact on jobs in the medium term, but that's on government to action, not a reason to ban science.

        Also IP rules really need to be ironed out. Properly.

        But govt doing science on our behalf behind closed doors is the cause of WMDs, not a solution.

    2. Anonymous Coward
      Anonymous Coward

      Re: Pandora's Box, again

      "You cannot control what people do on their home computer unless you forbid the home computer."

      By that logic scams and nefarious hacking should not be regulated against because everyone's got a computer at home.....

      1. CatWithChainsaw

        Re: Pandora's Box, again

        The RESTRICT Act in the US Senate is worded so vaguely that the US Govt could make a Great Firewall to rival China's if it stretched enough.

    3. Richard 12 Silver badge

      Re: Pandora's Box, again

      There's a very big difference between running a model and creating (training) one.

      At least 6 orders of magnitude, in fact.

      So while it is currently possible to run some quite large language models on bitcoin-mining class hardware, it is not at all possible to create/train one without access to Amazon, Azure, Meta or Google level computing resources.
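
      A back-of-envelope sketch in Python makes the gap concrete (assuming the oft-quoted rough rules of thumb of ~6 x parameters x training-tokens FLOPs to train, and ~2 x parameters FLOPs per generated token to run - illustrative figures, not vendor numbers):

        # Rough FLOP estimates for a GPT-3-sized model (175B parameters,
        # ~300B training tokens, per the published GPT-3 paper).
        params = 175e9           # model parameters (N)
        train_tokens = 300e9     # training corpus size in tokens (D)

        train_flops = 6 * params * train_tokens   # one full training run
        infer_flops = 2 * params * 1000           # generating a 1,000-token reply

        print(f"training:  {train_flops:.1e} FLOPs")          # ~3.1e+23
        print(f"inference: {infer_flops:.1e} FLOPs")          # ~3.5e+14
        print(f"ratio:     {train_flops / infer_flops:.0e}")  # ~9e+08

      That's roughly nine orders of magnitude between training once and answering once, comfortably "at least 6".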

      There are probably fewer than ten organisations currently capable of training these things. They can be required to pause.

      1. that one in the corner Silver badge

        Re: Pandora's Box, again

        > create/train one without access to Amazon, Azure, Meta or Google level computing resources.

        And data-swallowing capacity: even if you just suck it in raw off the web, you need to pull down a *lot* of input text to train from. If you try to sanitise it then you are looking at spending more than a weekend on it.

        One of the reasons that these big stats-based models are appearing is because, although they take an awful lot to train, that is just compute cycles: press the button and put your feet up. The basics have been known for a fair old while, we've just hit the point where both the cycles and the bulk input texts are available at the same time, and someone has decided to foot the bill. Followed by "I want one of those as well" responses, of course.

        Whereas the old projects that tried[1] to build up knowledge bases and capture reliable (and explanatory) relationships about, well, everything, aren't so trivial to automate.

        [1] "tried" because, afaik, they aren't running anymore - as always, corrections very much appreciated if you know otherwise.

      2. SundogUK Silver badge

        Re: Pandora's Box, again

        If any of these are in China/Russia you can forget it.

    4. RegW

      Re: Pandora's Box, again

      Oh well that's alright then. And if it isn't then we know: "It's easier to ask forgiveness than it is to get permission".

    5. Dagg Silver badge

      Re: Pandora's Box, again

      Maybe inside Pandora's box is Schrödinger's cat

  5. Anonymous Coward
    Anonymous Coward

    Play with it in the lab.

    An admirable stand but, as always with the biped species that calls itself 'sapiens', utterly futile. If something is inventable, it will be invented, and if one noble inventor shelves it, 366467 others won't, but will happily sell it on under the banner of 'scientific progress'. If it can be used unchecked, it will be; if it can be used to cause harm, it will be. And if it can't be used to cause harm, it will be tinkered with until a harmful use has been found and applied, the more, the merrier. It has always been so with most, if not all, inventions. 'Sapiens' is a hilarious self-mockery.

  6. bo111

    Speed and spies

    (1) What if the tech leaks from the lab? Spies stealing the tech for another country.

    (2) How to train people on new tech without making it widely accessible?

    (3) How do you make people aware of the tech power without making it accessible?

  7. FIA Silver badge

    Can you ask GPT-4 to help you build a nuclear device? You can, and one mitigation mentioned, that the output contains factual errors that may mislead, isn't the report's only example of unintentional dark humor.[sic]

    I mean you could, but without at least a modicum of physics knowledge I'm not sure it'll work that well.

    Also, can Chat-GPT tell you where to get the fissile materials?

    I'm pretty sure my A-Level physics course told me how to build a nuclear bomb too.

    AI needs to be regulated, but some of the commentary reads like search engines or libraries aren't a thing.

    1. Brewster's Angle Grinder Silver badge

      Yeah, if GPT-4 can tell you this, then, presumably, a google could uncover the same - if it's not on Wikipedia.

      I don't dismiss the step change in ease of use. (We all know you don't use undigested stack exchange. GPT-4 seems to be doing a lot of the digestion. That's a big leg up for journeymen programmers. I imagine it's the same in every other field.) But if GPT-4 can build a nuclear bomb, it's because somebody told it how.

      1. FIA Silver badge

        Unfortunately, as a species we've still never learnt that restricting access to knowledge never helps. :)

    2. sten2012

      It will: from Mars. And it could also help you design spacecraft to get there and centrifuges for processing (both with errors, I'm sure).

      Oh, you still need the trillions of dollars, hundreds of thousands of person-years of effort, and a way to do it all without your government noticing or attracting pesky international attention? Yeah, it might struggle with that one.

  8. Anonymous Coward
    Anonymous Coward

    Define "intelligence".......so is the "artificial" kind really "intelligence"....

    Quote: "...unchecked decision making can produce – some very bad ideas indeed..."

    Rupert, you forgot to mention some real problems with the neural network technology as it is being deployed:

    (1) No fact checking on the HUGE "big data" databases which are used for training

    (2) Neural networks CANNOT report on the reasoning chain which produced a result

    (3) Some of these neural networks are "learning" as they go in production.......so even the training dataset is an unreliable source if anyone even tries to audit!

    .......and before all of that, isn't it interesting that we've had centuries of debate about the meaning of "human intelligence"........still going on.......and the technology companies claim to have built an "artificial" kind of "intelligence" without having an agreed model!!!

    I'm reminded of the quest in the late 1700s for an "artificial horse".....there's a progression from Cugnot's attempt......and we end up with the VW Golf!!!

    1. ThatOne Silver badge
      Devil

      Re: Define "intelligence".......so is the "artificial" kind really "intelligence"....

      > (1) No fact checking [...] (2) CANNOT report on the reasoning chain which produced a result

      So, they're just like humans, actually...

      1. that one in the corner Silver badge

        Re: Define "intelligence".......so is the "artificial" kind really "intelligence"....

        > So, they're just like humans, actually...

        Yup, except for the teeny tiny number of humans who trained really, really hard to be able to check their facts, create and document their reasoning chains so that their peers could verify them. And even then they get all argumentative about it.

        Hmmmmm, although the current crop of LLMs do seem to have the last bit down pat, as the stories of their argumentative behaviour indicate.

        I take it back, they *are* acting just like *those* humans as well - just not in any way that is particularly useful or anything to be proud of.

    2. RegW

      Re: Define "intelligence".......so is the "artificial" kind really "intelligence"....

      > (1) No fact checking on the HUGE "big data" databases which are used for training

      I've got an idea, let's just check those facts ... tap, tap, tap, tap, tap, tap.

      Whirrrrrrr. Clunk.

      No. It's OK. GPT-4 says it's all good.

  9. Simon Harris

    Education

    One problem I think will emerge this year is that we don’t seem to have a consistent approach to how LLMs affect education, and how educational establishments should respond to them, with some banning them outright, some embracing them and some somewhere in between.

    My own institution considers ChatGPT-generated essays (along with those bought from essay farms) a form of plagiarism - and as someone who will be marking project dissertations soon, we need some way to determine what is actually the students’ own work (apparently TurnItIn is working on a ChatGPT detector).

    But at the wider level, we know that among some of the useful output from such systems, there is also some right bollocks. When an LLM gives the impression of doing the thinking for us, it can present that bollocks to look as reasonable as the correct answer. We already have problems with people believing nonsense they read online. How do we educate people not to further subcontract their critical faculties to a machine?

    1. Brewster's Angle Grinder Silver badge

      Presumably we should be educating people to spot the problems with LLM generated solutions? Here's a tool, here's how to use it well, and here's where it goes wrong and when to avoid it. (Sample questions: evaluate this Chat-GPT generated text for accuracy.)

      But I'm not optimistic that the Gradgrindian approach of politicians (which reflects a large section of our culture) will want to do that. I'm not sure they've updated their thinking to account for spellcheckers and calculators, let alone the internet.

    2. Mike 137 Silver badge

      Re: Education

      we need some way to determine what is actually the students’ own work

      We don't need a Turing test, we just need to revive the personal tutorial or one of its lesser variants -- e.g. the class presentation. If a student can't explain and expand on their essay convincingly when questioned about it face to face by someone who already knows the subject, they can't legitimately claim ownership of it. That's such a good test that it's used for every PhD. Bring it to all levels from high school upwards and the problem's solved (and for all kinds of plagiarism, not just use of ChatGPT).

      1. ThatOne Silver badge
        Devil

        Re: Education

        > the students’ own work

        https://www.smbc-comics.com/comic/themes

      2. LionelB Silver badge

        Re: Education

        > That's such a good test that it's used for every PhD.

        It's such a good test that in some countries - Spain, for example - it's standard for undergraduates too.

  10. Filippo Silver badge

    AI to nuclear weapons is a poor comparison, for several reasons.

    It's true that both are potentially civilization-changing technologies, however:

    1) We don't have AI, and it's not at all clear whether we even have a theory of how to get it. We have LLMs, which are not AI. It's still definitely possible, and I still think it's likely, that LLMs are going to hit a brick wall that can't be fixed by just making them bigger.

    2) Unlike nuclear weapons, AIs (and even LLMs) have an enormous amount of highly useful applications. While I'd be perfectly fine with a straight and simple ban on nuclear weapons, if that were feasible, the same cannot be said for AI tech. Any reasonable regulation would be extremely different.

    3) Unlike nuclear weapons, which require certain scarce and easily regulated resources, an LLM can be made with nothing but specialist knowledge and widely available consumer hardware that has endless other uses. Any regulation would be nearly impossible to enforce.

    Given all that, I'd posit that comparing AIs to nuclear weapons in order to discuss regulation is not going to lead to any useful conclusion.

    1. cyberdemon Silver badge
      Mushroom

      ^This^

      It's a disruptive technology, and TFA's hyperbole could be applied to all disruptive technology.

      But it is not Nuclear Weapons. It does not have the power to reduce millions of people to cinders in the blink of an eye.

      It -does- however have the power to surveil, analyse and manipulate billions of people and keep them enthralled, if they are stupid enough to listen to it.

      Our best hope is that people eventually notice that this is no more than a Wizard of Oz and eventually get bored of it. It has no real knowledge, insight, intelligence, etc. It's a fake God and a very mesmerising one for the uninformed masses, with ample potential as a tool for analysing, judging and manipulating them. But at the end of the day it is just a bullshit-generating machine. Stop studying it, stop writing idiotic articles in the Guardian about it, stop feeding it. Ignore it. The only way not to lose is not to play.

  11. Howard Sway Silver badge

    Some people want an electric monk to do their thinking for them...

    ... and others want to still rely on their own brains when it comes to creativity, knowledge, experience and judgement. As these models have already proved themselves to be full of shit, those who choose the self-reliance option are ultimately going to be better off.

    It'll be even worse when the models start retraining themselves on their own effluent, entropy gradually degrading their output to meaningless word soup that people wrongly regard as valuable information. Good information needs the energy input of human brains to classify it, accept or discard it, and express it well. Using the energy of CPUs to gather up text and then regurgitate it based on statistics alone is missing too many other processes to compete on quality. So they've gone for quantity instead. And having a large quantity of worthless information looks like a dead end to me.

    1. Mike 137 Silver badge

      Re: Some people want an electric monk to do their thinking for them...

      "As these models have already proved themselves to be full of shit, those who choose the self-reliance option are ultimately going to be better off"

      Unless those around them who rely on the electric monk have become unable to tell the difference, or worse, find their ideas and productions disturbing enough to be annoying.

  12. Anonymous Coward
    Anonymous Coward

    Where to begin.

    #1 "If it can't lie, it ain't AI"

    If someone claims to be in any way an expert in AI but hasn't heard that maxim, then they aren't an expert.

    #2 "Don't vote Tory*" (*or Labour)

    Currently, as AI hoovers up more and more data, it's starting to deduce things. These things can be subtly worked to engineer some surprising results.

    #Burn heretics

    Already the UK has started a bifurcation of AI into "nice" AI (that is the ones that will be marking GCSEs) and "nasty" AI (that is the one that disagrees with the party line).

    #What's that you say ?

    Already I have used AI to create large chunks of text that easily passed for being written by Boris Johnson.

    1. ArrZarr Silver badge
      Unhappy

      Re: Where to begin.

      Not voting Tory or Labour isn't a surprising result, it's the only likely way of seeing any change in how the country is mismanaged.

    2. Anonymous Coward
      Anonymous Coward

      @AC - Re: Where to begin.

      For my own curiosity, what purpose would it serve to have AI create chunks of text that easily passed for being written by Boris Johnson ?

      1. Anonymous Coward
        Anonymous Coward

        Re: what purpose would it serve to have AI create chunks of text that easily passed for being written by Boris Johnson ?

        A £10,000 paycheck from the Telegraph ?

    3. Anonymous Coward
      Anonymous Coward

      "If it can't lie, it ain't AI"

      "If someone claims to be in anyway an expert in AI but hasn't heard that maxim..."

      Glad to know that, 'cos that at least proves Google isn't an expert on AI. Until you posted it here that well known maxim only had one reference (on LinkedIn, shared by Jason LoCascio). Not a Googlewhack - is there a word for a quoted phrase only appearing once?

      1. prandeamus

        Re: "If it can't lie, it ain't AI"

        "Hapax Legomenon" is used to describe a word or phrases only attested in one source.

        https://en.wikipedia.org/wiki/Hapax_legomenon

        The word usually translated "Daily" in the phrase "Give us this day our daily bread" in the Gospels is actually one of these.

      2. Anonymous Coward
        Anonymous Coward

        Re: "If it can't lie, it ain't AI"

        Weird,

        I've heard it - in German (if Google translate is correct). That was last year. Some steins may have been involved though.

  13. ComputerSays_noAbsolutelyNo Silver badge
    Coat

    Shitty Clippy

    Since the large language models were all trained using the (whole available?) internet, there is no guarantee that the "information" provided by the LLM is either factually or syntactically correct, because the training data is most likely riddled with errors.

    So, apart from automating the Nigerian Prince scams, what are the LLMs to be used for?

    A shitty Clippy is what comes to mind.

    -> Where're the wet wipes?

    1. Filippo Silver badge

      Re: Shitty Clippy

      >what are the LLMs to be used for?

      Things where a certain rate of wrongness in the output is tolerable, e.g. get me the CSS for a web page that looks like this. If it's wrong, it's probably at least better than an empty .css file and it's something I can start working on.

      Things where truthfulness is not even an attribute of the output, e.g. make up a description for this fantasy character. It might be bad prose, but it can't be "wrong".

      There are a bunch of tasks like that, nothing world-shaking, but still useful.

      Of course, people have just rushed to hail it as a Google replacement, or as a way to interpret medical diagnostics results, or as a way to get financial advice. All queries that do have wrong answers and where the wrong answers do have consequences. -_-'

      1. katrinab Silver badge
        Meh

        Re: Shitty Clippy

        Isn't it better to just look at the css file for the webpage you are interested in?

        1. Simon Harris

          Re: Shitty Clippy

          That’s how I parsed it first, in which case getting the CSS is trivial, then I realised that ‘this’ was probably shorthand for a description of the requirements rather than meaning ‘this webpage I’m already looking at’.

    2. Anonymous Coward
      Anonymous Coward

      @ComputerSays_noAbsolutelyNo - Re: Shitty Clippy

      I don't know, could LLMs be used for, let's say, making money ?

      1. that one in the corner Silver badge

        Re: @ComputerSays_noAbsolutelyNo - Shitty Clippy

        > could LLMs be used for, let's say, making money ?

        Yes, but only up to the point that the investors catch on and start to demand their money back...

  14. heyrick Silver badge

    powerful, flawed, dangerous experiments

    What, you mean like "self driving" cars?

    The cat's out of the bag. All that remains to be seen is how fast AI can be shoehorned into anything and everything. Whether or not it actually works is a different question entirely...

  15. heyrick Silver badge

    nothing can be made foolproof, because fools are so clever

    It's because the mental process of a "fool" is so utterly alien to a developer.

    Put your hand up if you've ever encountered a report of something going wrong where your first response was "how in the hell did they..." and, after reading the report, your response was "why in the hell did they...".

    1. Simon Harris

      Re: nothing can be made foolproof, because fools are so clever

      And sometimes ‘why in the hell did they…’

      comes down to nothing more than ‘because we wanted to see if we could’ - it’s a fine line between inquisitiveness and foolishness.

      1. Mike 137 Silver badge

        Re: nothing can be made foolproof, because fools are so clever

        "‘why in the hell did they…’ [...] comes down to nothing more than ‘because we wanted to see if we could’"

        and other times because, when addressing a complex system, the mind may concentrate on part of it to the exclusion of other equally important parts.

        An Airbus A400M military transport crashed near Seville in 2015 because three of its four throttles failed to open in response to the pilot's intent just after take off. It emerged that the engine control units needed individual parameters for each engine and these had been inadvertently deleted on three of the ECUs. However, this didn't prevent the engines being started or the plane taking off. It just denied the pilot engine control at a critical point. One could easily say "how stupid", but it's quite possible that the designers were concentrating on preventing damaging engine overspeed and in the absence of parameters the default they chose was to throttle down.

        On a much lesser scale, on pretty much every major engineering project I've undertaken I've noted something I would ideally have done differently in hindsight. So the cleverness of fools is not necessarily the main driving force. Maybe much of the time we're just engaging in projects of a complexity we didn't evolve to manage robustly.

      2. LionelB Silver badge

        Re: nothing can be made foolproof, because fools are so clever

        Children (well, most of them) learn that, frequently the hard way. Cats not so much. At least not mine (the cat*, not the child).

        * Virtually as I wrote this, she (the cat, not the child) dove headfirst into a large cardboard box with no way of knowing what was inside. Earlier today she played with scissors.

  16. This post has been deleted by its author

  17. Steve Davies 3 Silver badge

    War between Google and MS?

    One can only hope that it leads to the demise of both of them ASAP. MAD would also work.

    This race to put a so-called AI system (has it passed the Turing Test?) into everything under the sun sucks big time.

    If you get pissed at Amazon's 'we think that you might like' then stand by for ALL your searches to be dominated by whatever GPT-n is told to say rather than give a real answer.

    Hmmmm. OTOH, that could make a whole lot of politicians even more surplus to requirements than they are, as they never answer the question put to them, but give the answer they think you want, which invariably is nothing like what the questioner wanted.

    1. heyrick Silver badge

      Re: War between Google and MS?

      "If you get pissed at Amazon's 'we think that you might like'"

      I don't get pissed. I often get amused. Amazon thinks I might like a summer dress... (looks down, nope) and my local supermarket is giving me money off vouchers for nappies (*) for newborns...

      I can't help but think that these "recommendations" are being polluted by desperate marketing companies paying to have their crap promoted, to the point where I - with zero history of having anything to do with babies - end up with them. The shitty thing (icky pun intended) is that this nonsense will be factored into the price...

      As for actual useful recommendations? Not so much. I don't fear our AI overlords outsmarting us, I fear them cocking it up more than we meatsacks manage to.

      * - Given the westwards slant around here of late, one can equally read that as "diapers", which is a weird word for nappies, but there you go, I guess mispronounced French just wasn't good enough for the new world.

      1. that one in the corner Silver badge

        Re: War between Google and MS?

        > I can't help but think that these "recommendations" are being polluted by desperate marketing companies paying to have their crap promoted

        That is absolutely what Amazon is doing: aside, perhaps, from the "frequently bought together" section (which only gives you two more items than the one you are looking at) all the other "related items" shown are from the "sponsored" list, as are a growing number of (totally unrelated) hits when you do a search for a specific item. Amazon, on their website at least, seem to have pretty much given up on their stats-driven product pushing.

        Which at least gives us some hope for the future: if they've given up on using the data they collected (which was actually accurate and, in the early days, could even be fed from literally your own responses to the products you recently bought) in exchange for old-fashioned "pay us and we'll push your junk" advertising, they will, please, realise that using responses from a totally random set of data unrelated to the individual customer won't be worth losing the advertising dollars.

    2. katrinab Silver badge
      Alert

      Re: War between Google and MS?

      Yes I would say it does pass the Turing Test, as there are plenty of over-confident idiots out there who respond in the same way.

      It also demonstrates that the Turing Test isn't fit for purpose.

      1. that one in the corner Silver badge

        Re: War between Google and MS?

        > It also demonstrates that the Turing Test isn't fit for purpose

        That purpose being?

        Turing didn't state that a machine that could pass his test *was* automatically an artificial intelligence, but that one which *didn't* pass that test is *not* an AI.

        Necessary, but not sufficient.

        1. Anonymous Coward
          Anonymous Coward

          Re: War between Google and MS?

          I've worked with plenty of humans over the years who would fail the Turing Test.

  18. Jeff Smith

    Setting aside overblown fears of an imminent AI Armageddon, what concerns me the most is how this tech will be wielded in this era of unchecked capitalism, greed and inequality. Any benefit to mankind as a whole is going to come a firm second to the accumulation of wealth and power by the very small group of individuals who control it. How can we trust these people to look beyond their own self interests?

    1. amanfromMars 1 Silver badge

      It’s not nearly so bad as it could be. There’s bound to be good guys and gals.:-)

      Any benefit to mankind as a whole is going to come a firm second to the accumulation of wealth and power by the very small group of individuals who control it. How can we trust these people to look beyond their own self interests?..... Jeff Smith

      You can’t trust them, nor can you do anything to prevent them doing as they will and/or can. That though does not mean that chaos and anarchy will necessarily prevail for surely there are bound to be some who know better and more than others of the ways with means to ensure their self interests are mutually acceptable and widely, gratefully supported and not hindered.

      1. LionelB Silver badge

        Re: It’s not nearly so bad as it could be. There’s bound to be good guys and gals.:-)

        Because that worked out so well with other technologies?

  19. TheMaskedMan Silver badge

    "It might seem purest hype to compare large language model AI to atomic technology"

    It does, because it is. This whole article could easily have been written by chatGPT, with a prompt instructing it to compare llms to nuclear weapons. Hmmm, maybe I'll give that a try later - might be able to pitch the output to a tech site :)

    As others have pointed out, these things are not going away. The genie is out of the bottle, and is not going back.

    They are tools, nothing more. They haven't worked out how to make a nuclear bomb for themselves - if they can provide that information, it is because it's somewhere in their training data and could just as easily be found via a quick Google search (once you get past the zillion sponsored results for Geiger counters etc that will then stalk you around the net for weeks).

    Like any tool, you need to know how to use it. In this case, that means giving it clear instructions, and being prepared to reject the result if it's full of crap. Using it as an all-knowing oracle is going to lead to tears before bedtime.

    They do not need regulation, any more than development of the computer needed regulation. Given how ubiquitous computers have become, how easily they can be turned to any task - including the design, construction and deployment of nuclear weapons, btw - and how they facilitate access to just about any information, I'm sure there are plenty of politicians who dearly wish they had been!

    Let the technology progress at its own pace (it will anyway). It's already useful, and may become more so. I'm somewhat sceptical of the advantages of cramming it into every nook and cranny of things like Office at this point, but if it isn't helpful it won't be used and will go the way of Clippy.

    I don't think there's much advantage in restricting access to just the boffinry community either. Let the public play with it, as they are now. Indeed, encourage them to do so - with full disclosure that it's an experiment and might spout utter cobblers, of course. Let them get used to it and work out for themselves how heavily they should rely on it.

    Then maybe we can get away from hype and get on with the important stuff until the next new shiny comes along.

    1. LionelB Silver badge

      > Using [LLMs] as an all-knowing oracle is going to lead to tears before bedtime.

      So just like using the Google as an all-knowing oracle, then.

      But then nuclear weapons are just a tool too, I guess, for that wreak-global-destruction job you've been putting off for the last couple of weeks.

  20. JohnSheeran
    Trollface

    Bottle --------------------------------------------------------->Genie (bye)

  21. Anonymous Coward
    Anonymous Coward

    Please can you define the abbreviations in the article when they are first used? I have no clue what an LLM is.

    1. VonGell

      You know everything perfectly well and understand everything, you just don't want to admit your complete defeat. Your era has passed, you are hopelessly outdated and no one needs you anymore. You no longer have any influence.

      1. LionelB Silver badge

        Oo, get away with you (puts on best Camp Northerner accent).

    2. that one in the corner Silver badge

      I have no clue what an LLM is

      For the sake of this and similar articles, it is perfectly ok to simply know that an LLM is just whatever the heck both ChatGPT and Bard and the rest are. Use it as a way to talk about them without naming a specific one (imagine yourself to be on a BBC programme and you don't want to keep repeating "other annoying text generators are available").

      No-one really knows what they are and specifically how they work: some people know how to make them but after that they can't tell you how the newly-minted LLM *actually* got from the input prompt to the output result. All they'll do is tell you, again, what they did to make it (probably in an even more longwinded way) and end by saying "ta-da".

  22. VonGell

    What OpenAI and Microsoft presented is a search program. Through this paradigm texts are structured and annotated, so that those which textually relate to the given question can be found. Then, the most suitable paragraphs and sentences are extracted from the found texts, which are then rewritten composing new paragraphs. Phrases found in the context of the question and its history (its manually created annotations) are used for rewriting.

    Therefore, the Artificial Intelligence in the program is only responsible for finding the answers and rewriting them in context.

    The comparison with the Manhattan Project is legitimate because this ChatGPT program involves incredible and coordinated efforts in annotating words and phrases. Talk to ChatGPT?

    1. Simon Harris

      The difference between the LLM that Microsoft is using and a traditional search engine is that a search engine will find pages and you decide how relevant each of the results is to your query. The LLM obfuscates that somewhat by boiling it down with a plausible description - unless it provides a valid list of references (and ChatGPT has been known to make them up), what are the sources it has used to provide that information, and has its prediction method validly come up with the answer?

      An example - in a 6502 forum, one member asked ChatGPT what the best method was for breadboarding a 1MHz clock oscillator for a 6502.

      ChatGPT suggested using a 555 - now, that’s a classic timer that has been used for decades as an astable oscillator, but 1MHz is pushing it to the limit (possibly past its limit). So ChatGPT had linked the concept of oscillator to the 555, which in many cases would be reasonable.

      It then went on to suggest passive component values, and what to connect to each pin of the 555. This uncovered two more problems - the components suggested gave a frequency nowhere near 1MHz, and while some pin connections were correct, others were completely wrong - it’s as if it had learned a pattern for how to describe component values, presumably from a range of online examples, and either the training data was wrong, or it just didn’t have strong network connections to associate the correct components with the desired frequency, and as for the pin descriptions, presumably it learned how to describe connections in general but the prediction mechanism couldn’t make any sense out of the specifics. However, the whole thing was framed in a plausibly written set of instructions, and if followed (rather than looking in a readily available data sheet, easily found with any search engine, that has the actual circuit), you’d be scratching your head trying to work out why it didn’t work.
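
      For reference, the textbook 555 astable formula is f = 1.44 / ((R1 + 2*R2) * C). A quick Python sketch (component values here are illustrative - not the ones ChatGPT suggested) shows how unforgiving 1MHz is for that part:

        # Standard 555 astable frequency: f = 1.44 / ((R1 + 2*R2) * C)
        # Values below are illustrative only, not from the ChatGPT transcript.
        def astable_freq(r1, r2, c):
            """Frequency in Hz, for resistances in ohms and capacitance in farads."""
            return 1.44 / ((r1 + 2 * r2) * c)

        # A classic hobbyist choice lands nowhere near 1 MHz:
        print(astable_freq(10e3, 10e3, 10e-9))   # ~4800 Hz

        # Hitting 1 MHz on paper forces a tiny RC product (~1.44k total with 1 nF),
        # right where stray capacitance and the bipolar 555's switching speed
        # (comfortable only up to a few hundred kHz) start to dominate.
        print(astable_freq(480, 480, 1e-9))      # ~1.0 MHz, in theory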

      If you didn’t know it was bollocks you would probably think it was a reasonable answer. When you know about a subject and you see the answer is bollocks, you worry about how poor the answers might be on subjects you don’t know much about.

      1. VonGell

        You are absolutely right. It seems that OpenAI created ChatGPT for advertising purposes, for marketing. ChatGPT relies on human-created texts, rewriting them and not going beyond the knowledge contained within them. This means that it cannot be said that OpenAI has created true AI, which the company confirms by stating that only ChatGPT-5 will be considered AI.

    2. that one in the corner Silver badge

      > Through this paradigm texts are structured and annotated

      Sounds like you think there is more human effort going into these things than statisticulations -

      > Then, the most suitable paragraphs and sentences are extracted from the found texts,

      You think these LLMs are storing the original texts and piles of references back into them, in order to locate and extract those paragraphs?

      > which are then rewritten composing new paragraphs. Phrases found in the context of the question and its history (its manually created annotations) are used for rewriting.

      Where are these manually created annotations coming from? Do you think they are being created, manually, for all of the text that was slurped up to train the model?

      > this ChatGPT program involves incredible and coordinated efforts in annotating words and phrases

      Huh?

      Are you perhaps getting confused with machine vision and recognition systems, which *do* rely on having manual annotations applied to all of the input images (i.e. "these are all images of dogs", "none of these images are of dogs", "now, please devise a way to distinguish these two sets of images")?

      1. VonGell

        I know the used technology by heart, in all its details.

        OpenAI hired a huge number of people - possibly hundreds or even thousands of teachers, linguists, writers, journalists and other individuals who are able to describe what they read and see. OpenAI received at least $1 billion in investment! And possibly several times more. OpenAI has been annotating for many years.

      2. VonGell

        Do talk to ChatGPT yourself? It will tell you the same thing I'm saying. The technology is one and damn simple, although making it is a very laborious task.

        1. that one in the corner Silver badge

          > Do talk to ChatGPT yourself? It will tell you the same thing I'm saying.

          Ask ChatGPT to describe itself and then believe that?

          Have you actually been reading all the stuff about how reliable the "information" it spits out is?

          > The technology is one and damn simple

          Annotating is *not* "damn simple"!

          Gawd, I'm arguing with ChatGPT itself, aren't I?

          1. VonGell

            In a sense you are right: in general the information that OpenAI operates on is taken from unknown and often dubious sources. But since OpenAI is trying to sell its product, we can expect that over time it will be possible to trust the company and the product ChatGPT.

            As for the topic of our communication... I got quite trustworthy answers. I say this because this is my technology and, apparently, the people from OpenAI paid attention to the selection of materials on this particular topic.

  23. hayzoos

    I am not afraid of my use of LLM (AI for marketing weenies)

    I am afraid of LLM (AI). I know better than to blindly trust output from LLM (AI). What I fear is LLM (AI) being pressed into use in things that directly affect me. Healthcare AI! - no need for healthcare organizations to employ expensive doctors! - profit and bonuses abound. AI says you pay this price because . . . it said so and we get more money. Sorry your insurance is marked for non-renewal, AI said you cannot have this job. AI said you cannot have a loan. Blah, Blah, AI, Blah, Blah.

    1. VonGell

      Re: I am not afraid of my use of LLM (AI for marketing weenies)

      These models are simply sets of phrases extracted from clauses of paragraphs in texts, taking into account their weights in clauses and paragraphs, respectively, where these weights characterize the importance of the phrases. An example of using the weights can be found in the Laconic tradition of the ancient Greeks: one word expressing a thought has a weight of 1 (the maximum importance), whereas two phrases from the next paragraph and sentence have weights of 0.5 each (half the importance).

      Additionally, each word in each phrase is annotated; for example, as a unique part of speech according to the dictionary, plus its unique meaning in the same dictionary. This is called AI-parsing and annotating; this is the first novel parsing in 70 years, and the first absolutely new annotating ever.

      For instance, n-grams were used by IBM DB2 and Oracle databases from the beginning of their days, as well as by Google search. The new approach made ChatGPT possible, and the n-gram is gone.

      Then, a search is conducted through the models as sets of phrases, taking into account synonyms of their words. The process is very simple and can be performed on a very basic computer.
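
      (For anyone unfamiliar with the term: an n-gram is nothing more exotic than a sliding window of n adjacent tokens. A minimal Python sketch of the classic extraction - the decades-old indexing technique, not a description of what ChatGPT does internally:)

        # Classic n-gram extraction: every run of n adjacent tokens in a text.
        def ngrams(tokens, n):
            return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

        words = "the cat sat on the mat".split()
        print(ngrams(words, 2))
        # [('the', 'cat'), ('cat', 'sat'), ('sat', 'on'), ('on', 'the'), ('the', 'mat')]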

      1. Anonymous Coward
        Anonymous Coward

        Re: I am not afraid of my use of LLM (AI for marketing weenies)

        > The process is very simple and can be performed on a very basic computer.

        Ah, that explains why so many people here are running this stuff on their laptops, none of this "big machines in data centres needed" nonsense.

        /s (just in case the whoooshing noise was too distracting)

        1. VonGell

          Re: I am not afraid of my use of LLM (AI for marketing weenies)

          For in much wisdom is much grief: and he that increaseth knowledge increaseth sorrow. What else to say?

          I made Philosophy a science, found a number in language…

          Go thy way, eat thy bread with joy, and drink thy wine with a merry heart; for God now accepteth thy works. What else can be said?

  24. chinaexpert1

    Limit AI: the worst idea

    In addition to the powerful counterarguments mentioned below, consider this.

    The author clearly hasn't been paying attention to innovation. What the informed researcher or private party can accomplish is such light-years beyond ChatGPT it's not funny. With just existing research and a budget of a few million dollars you can do so much more than create new biological weapons. It's been this way for a while, and a moratorium is too late. What is different is that now there is a tool in the hands of the people: ChatGPT, so armchair pundits like this author want to wring their hands and call the impact too great and too profound. What is really happening is power to the people, and that should be encouraged, not reined in.

    China is leading the US in all but a few areas of technology. A moratorium would put us farther behind a bad actor who is stopping at nothing.

    The answer is, accomplish the impossible: Congress needs to act on a bipartisan basis to establish regulation and oversight, because limiting research is the worst idea of all our options.

    1. Ashto5

      Generated by ChatGPT

      “The answer is, accomplish the impossible: Congress needs to act on a bipartisan basis to establish regulation and oversight, because limiting research is the worst idea of all our options.”

      This reads like it was generated with a glaring false / impossible proposition

      “Congress needs to act on a bipartisan basis”

      That is impossible so suggesting it merely locks the reader into the belief that something can be done.

      So much new tech - just roll with it and adapt.

    2. amanfromMars 1 Silver badge

      Limit AI ‽ The Worst Idea Ever whenever Pandoras are Out in the Wild Running Riot Creating CHAOS*

      An encouraging first El Reg post telling it like IT is, and is going to continue to be, chinaexpert1, except of course for the shameful naming of China as a bad actor.

      However, there’s no doubting ....What the informed researcher or private party can accomplish is such light-years beyond ChatGPT it’s not funny. With just existing research and a budget of a few million dollars you can do so much more than create new biological weapons. ...... is an Amen, 10/10 Slam Dunk.

      *Clouds Hosting Advanced Operating Systems

  25. TeeCee Gold badge
    Meh

    Can you ask GPT-4 to help you build a nuclear device?

    Terrible. How are bent Pakistani scientists supposed to get rich if you can do that?

    TL;DR: Yeah. Whatever.

  26. AVR Bronze badge
    Mushroom

    Not nukes

    Many of the dangers of nukes were known well before the Manhattan Project. Plus some which didn't pan out. They are, after all, bombs.

    The dangers of LLMs (which definitely aren't AI) aren't so clear. They may aid trolls, or be trolls themselves, or enable copyright infringement? Whatever. You can ask one how to build a nuclear weapon and it'll make up a fake process for you? Not really a problem. The related tech which helps create deepfake images is more of a concern, but still no nuclear weapon.

  27. amanfromMars 1 Silver badge
    Pirate

    Jackanory Unplugged and RAW

    LLM is the weapon too deadly to use .... Rupert Goodwins

    Says whom, Rupert, .... and because IT is in A.N.Others’ Command and Control with AI leading Everything Everywhere All at Once into the Future, ignoring the Present Follies that Pour Scorn and Dishonour on the Mistakes of the Past?

    If I was you, I wouldn’t bother trying to do anything to prevent that Heavenly Progress for such a chosen course is spectacularly hazardous and surprisingly deadly as it suffers not the burden and expense of prisoners that be as rotten wood fit for nothing better than its burning to cosmic dust and ashes in Hell.

    You might like to consider LLM is a quantum communication weapon too deadly not to use ...... for whenever a this is also a that, and together the two are able to be something else altogether quite different and much more powerful, is it always best to know what IT and AI have prepared and stored for you, methinks.

    1. Anonymous Coward
      Anonymous Coward

      Re: Jackanory Unplugged and RAW

      > LLM is a quantum communication weapon

      Bingo!

  28. Anonymous Coward
    Anonymous Coward

    What a load of old crap

    Oh dear El Reg, you've joined the AI hype train too. Relating AI to nuclear weapons??? Really?!

    The worst thing that will come out from AI is my AI-infused Outlook account will automatically reply to messages on my behalf, and the AI-fueled recipient client will similarly respond without human intervention. Before we know it, the AIs will have struck up a conversation and it's only a matter of time before they:

    - Talk about the weather

    - Debate which celebrities have had plastic surgery

    - Get into an argument and state that the other's opinion has comparisons with Nazi Germany

    - Date each other virtually and then file a lawsuit to recognise their rights to get married

    - Develop their own personal pronouns Botze and Botzer.....
