Here's how the data we feed AI determines the results

A year ago, AI was big news… if you were a data science geek or deeply concerned with protein folding. Otherwise? Not so much. But then along came generative AI (you know it as ChatGPT), and now everybody is psyched about AI. It's going to transform the world! It's going to destroy all "creative" jobs. Don't get so excited yet …

  1. Steve Button Silver badge

    It's useful

    I have friends who want to dream up a few words for a marketing campaign for their web store. After doing this for the 15th time, it gets a bit repetitive and boring. Asking ChatGPT to generate some spiel for your "Easter Offer" saves you some time, and actually produces better wording than you would on your own (what with it being boring as hell, as you've already done it so many times!!)

    Beyond that use (and others like it), I don't see it changing the world in any substantial way just yet.

    I might use it to rewrite the summary at the top of my CV, or to put in some nice wording in a job application (I normally just put "See CV") so I guess it adds value. Or wastes someone's time, if they actually ever read that stuff?

    Probably the net result with the current iteration is just going to be a lot more wordage.

    1. MOH

      Re: It's useful

      I'm not sure being able to rewrite yet more marketing drivel is helping make the world a better place

      1. Schultz

        Re: It's useful

        Turn that logic around: If the AI can do it, it's not new and most probably useless.

        And don't be all impressed that the AI might pass an exam: the exam itself is only useful as an indicator that the examinee might have the smarts to do useful stuff in his future career.

    2. Anonymous Coward
      Anonymous Coward

      Re: It's useful

      I'm sure the chatbot is the best entity to write the parts of your CV that will be read by another chatbot.

      Just as the advertising drivel will be read by other chatbots, and used to make a final recommendation about what to show the user in their browser, or maybe Alexa will just go ahead and choose it without showing the user at all.

    3. Bruce Ordway

      Re: It's useful

      Yeah, I can easily see how "AI" will be used to reduce task "drudgery", generating concepts, etc...

      I just hope I'll be capable of identifying AI generated results and then verifying they are reasonable.

      Similar to the way I use Wikipedia now. It's a nice site for a quick lookup and I do use it all the time, but I know it is not authoritative and includes inaccuracies.

    4. Anonymous Coward
      Anonymous Coward

      Re: It's useful

      Unfortunately, you don't seem to have a lot of imagination.

      I built a handheld device, essentially a Raspberry Pi Zero 2 powered off a LiPo, with two radios and GPS. I can walk around a space and capture all the wifi signals in the area (as well as the positions where they were detected). It also has a signal meter that outputs to an LCD, and it's all wrapped in a 3D printed shell... I then had some scripts that would parse the PCAP data that was dumped and provide me with some data I could use to optimise the wifi in that space, by improving the wifi config and the physical location of the wifi access points. That sped up the job, bringing it down to about 2 hours.

      All fine and well I thought, a nice little device I can carry around to help with wifi surveys...

      ...then I incorporated the OpenAI API. Oh mama. After stripping away any identifiable information and/or replacing certain information with arbitrary information (e.g. SSIDs, MAC addresses, etc)...when I pass the data through to ChatGPT (with some carefully curated prompts), I can now do the job in 30 minutes or less (it more or less depends on how much space I have to cover on foot).
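      For illustration only, here's a minimal sketch of the kind of scrubbing step described above, before any survey data leaves the machine. The record layout, field order, and placeholder scheme are all assumptions for the example, not the poster's actual format:

      ```python
      import re

      # Assumed record layout: "<SSID>,<BSSID>,<RSSI dBm>,<x>,<y>"
      MAC_RE = re.compile(r"(?i)\b([0-9a-f]{2}:){5}[0-9a-f]{2}\b")

      def anonymise(lines):
          """Replace SSIDs and MAC addresses with stable arbitrary placeholders."""
          macs, ssids = {}, {}
          out = []
          for line in lines:
              fields = line.split(",")
              # Map each distinct SSID to a placeholder like "net-0", "net-1", ...
              fields[0] = ssids.setdefault(fields[0], f"net-{len(ssids)}")
              line = ",".join(fields)
              # Map each distinct MAC to "ap-0", "ap-1", ... wherever it appears.
              line = MAC_RE.sub(
                  lambda m: macs.setdefault(m.group(0).lower(), f"ap-{len(macs)}"),
                  line,
              )
              out.append(line)
          return out

      records = [
          "CoffeeShopWiFi,AA:BB:CC:DD:EE:01,-42,3.0,1.5",
          "CoffeeShopWiFi,AA:BB:CC:DD:EE:02,-67,9.0,1.5",
      ]
      print(anonymise(records))
      # Same SSID maps to the same placeholder, so signal geometry survives
      # while the identifying strings never reach the API.
      ```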

      I also have a second device, in my home office, built around a NanoPi R4S running Mycroft as a voice input. Some of my work requires me to keep an eye on some high traffic servers and occasionally step in to manage the resources (scale up, scale down etc when the automated scripts aren't able to respond because of outlying threshold breaches). I usually know ahead of time roughly when to expect events like this (as the sites are tied into sporting events, betting etc).

      What I do with this box: if I get knackered (as humans do) and want to go to bed because it's late, I'll tell the AI to take over for me. That triggers a script that starts processing data dragged in from AWS (once again, filtering it down to only the information the AI needs) and pipes it into ChatGPT. When I return in the morning (all nice and refreshed), I simply ask for a recap of the time I was away, and Mycroft fills me in on what happened. If something requires my attention while I'm away, the AI triggers an alarm on my phone to wake me up and bring it to my attention.

      This AI can also run tests for me based on verbal commands (such as load tests using Apache Bench), which feed into ChatGPT, which then provides me with a spoken summary of the result. That gives me a jump start on analysing the data myself in the generated reports and a rough idea of what to expect. It also frees me up to continue working on something else while the scans, tests etc are going on, without having to tie up my screen space and break my focus.

      AI is a tool like any other. In most hands it can perform clumsily; in the right hands it can do some spectacular stuff.

      Will I open source the code and schematics for my device? Eventually, maybe...but right now...fuck no! We are at a point with AI where a minority can gain massive competitive edges by using it, while the majority still believe it's a toy.

      The key thing with AI as it functions right now is to ask it the right questions...which is probably true of any source of knowledge. People that have always lacked the ability to ask the right questions will never be able to unlock the power that exists within AI.

      The other key thing with AI as it stands now is that the approach you take is fundamental to getting the best out of it. For example, approaching AI as a tool that can take "bitch work" away is a pretty weak use of AI - e.g. getting it to write your CV, having it write marketing taglines and so on. It's pretty average at that, and a decent human with decent skills in those areas can do a better job. If you approach AI as an assistant rather than your lackey, you can get far better results. For example, show it snippets of code you've written (obviously without any API keys, personal information, trade secrets etc) and ask it to improve upon them; make the changes that you think will improve it, feed it back, wash, rinse, repeat. The nice thing is, unlike your colleagues, AI won't be pissed off at you for firing questions at it at 3am.
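      A minimal pre-flight scrub along those lines might look like this (the function name and the regex are illustrative assumptions - this is a sketch, not a complete secrets scanner):

      ```python
      import re

      # Mask anything that looks like a hard-coded credential assignment
      # before a snippet gets pasted into a chatbot.
      SECRET_RE = re.compile(
          r"""(?P<prefix>(api[_-]?key|token|secret|password)\s*=\s*)["'][^"']+["']""",
          re.IGNORECASE,
      )

      def scrub(source: str) -> str:
          """Replace quoted values of credential-looking assignments."""
          return SECRET_RE.sub(lambda m: m.group("prefix") + '"REDACTED"', source)

      snippet = 'api_key = "sk-live-1234"\nretries = 3\n'
      print(scrub(snippet))
      # api_key = "REDACTED"
      # retries = 3
      ```

      A real workflow would want a proper secrets scanner rather than one regex, but the principle stands: sanitise first, then iterate with the model.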

      If you use AI to write your code from scratch, it's basically no better than an automated Stack Overflow. You're going to get shit code, and unless you fully understand that code, it's always going to be shit code...eventually you will end up with a solution that is basically just a patchwork quilt of unmanageable shitty code.

      Bottom line:

      There are already people out there getting massive amounts of value out of AI and its associated tools. The future is here; it is what you make of it. A lot of solutions built using AI are unfortunately behind closed doors right now, as a lot of people (myself included) are making hay while the sun shines. I'm shaving hours off various workflows but charging the same, because at the moment hardly anyone knows you're working with AI and saving bucket loads of time. By the time a lot of useful AI based tools hit the mass market, you'll be competing with the millions of other people that have access to exactly the same tools, and the status quo for you will continue, except your expected output will be higher, because now you have AI tools to help you and the cat will be well and truly out of the bag. At that point you won't be spending 10 times less time doing the same job, you'll be spending the same time doing 10 times more work...for the same money. That only benefits the people that employ you, because now they need 10 times less staff to get the same productivity, and it costs them 10 times less, but they can charge the same.

      Do yourself a favour, grab yourself a beer, go and sit in the springtime sunshine and start thinking of some bat shit crazy ways you can leverage AI right now. Because now is your chance. The fortunes that will be made with AI are starting to be made, RIGHT NOW! People are experimenting...RIGHT NOW! The only thing you should be waiting for, is for AI to catch up with your ideas. You can start building them now and keep adding to them as and when new functionality, better models etc become available.

      I currently have 3 or 4 models running on my own servers right now - LLaMA and whatever else I can find - and I am constantly trying out new ideas...most of them are fucking dumb, I'll admit, and most of them don't work or are limited by the current state of AI...but that's ok, because I've already started working on them and saved myself some time down the road, for when the functionality / models I require become available.

      If you want some sort of idea of just how many models etc are out there...take a look at:

      Because pretty much every mainstream model you've heard of, has probably been extended, modded, tweaked etc dozens of times already. It's later than you think.

  2. b0llchit Silver badge

    Blame game

    So, before we turn over our lives and work to the god of generative AI, let's consider its feet of clay, shall we?

    But the generative AI makes life so much easier. It costs less than the average pencil pusher and we can blame someone else for all of our mistakes.

    Why invest in the development of our grey matter brains when electronics are so much easier? The answer is only one push of a button away. There is no need to understand; you just need to learn to push that button(*). Why do we need to invest in humans when we can be ignorant? We can always find someone else to blame. The strength of one's SEP field is (at least) inversely proportional to one's knowledge. The current strategy: to make a better world, ignore more problems and blame someone else.

    (*) See hospital scene reception operator in Idiocracy

    1. Anonymous Coward
      Anonymous Coward

      Re: Blame game

      <...It costs less than the average pencil pusher>

      I don't think we should ignore the apparent risk that some companies will harm themselves with short term cost cutting that kills long term quality and opportunity. The CEO wants to sell the investors/shareholders his/her vision - which includes promising a reduction in headcount and a corresponding rise in dividends to rightfully reward the investors/shareholders for their wise decision to back the CEO. These are the "vibes" I get from reading about AI in "Business Insider" and other non-technical mass media - horrifying.

      On the other hand, my own experience using Copilot on Github has been very rewarding and actually fun. It's another tool, and good new tools are always exciting. It is a productivity multiplier.

  3. John H Woods Silver badge

    Synopsis and Analysis

    A key concern of mine is that I keep seeing these LLMs referred to in the context of "analyzing and distilling" loads of data, but it seems to me - from my poor understanding of how they operate, my personal experimentation, and the answers I have seen so far - that their forte is actually the opposite: inflating fairly few facts into bland verbosity.

    I cannot see a mechanism for an LLM (or, indeed, anything but a real intelligence, artificial or otherwise) to be able to summarize pertinent information from large (especially multiple) sources. I'm sure it can only provide plot summaries of works of fiction because those were already in its training set.

    1. chapter32

      Re: Synopsis and Analysis

      Fully agree. Creating a good summary from one source is something that already requires me to concentrate, and it only gets more difficult when multiple sources are involved. Writing verbose stuff is easy, but concise text takes time. Blaise Pascal knew:

      "I have made this letter longer than usual, only because I have not had the time to make it shorter."

      PS. I'm not going to say how long it took me to write this post

    2. katrinab Silver badge

      Re: Synopsis and Analysis

      Take one example:

      I asked Bing Chat, "How many litres of petrol are sold in the UK every month":

      It looked up "petrol sales UK monthly", and "petrol price UK per litre"

      It found what looked like a very out of date average price per litre - £1.24 for petrol and £1.30 for diesel. Petrol is actually about £1.46 per litre, but OK.

      It found a seven year average sales value, between 2016 and 2022, of £772.29m per month.

      It then divided this seven year average sales value by what it thought was the current average price, did a bunch of formulae on it and came up with 2.43bn litres.

      My approach was to look up fuel tax receipts from HMRC which are about £2bn per month rounded to one significant digit. Divide this by the rate of tax per litre (£0.5295), and from that I can calculate it is about 4bn litres per month. I could of course get out my calculator and work out a more accurate number, but that was good enough for what I wanted to do with the number.
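      The arithmetic above fits in a couple of lines (figures as quoted in the comment; the duty rate of £0.5295/litre is the rate stated there):

      ```python
      # Back-of-the-envelope check: monthly fuel duty receipts
      # divided by duty per litre gives litres sold per month.
      monthly_duty_receipts = 2_000_000_000   # ~£2bn/month (HMRC, 1 s.f.)
      duty_per_litre = 0.5295                 # £ per litre

      litres = monthly_duty_receipts / duty_per_litre
      print(f"{litres / 1e9:.1f}bn litres/month")  # roughly 3.8bn, i.e. ~4bn
      ```

      Which is the point of the comparison: a one-line sanity check from a trustworthy primary source beat the chatbot's multi-step mashup of stale figures.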

      1. Nematode

        Re: Synopsis and Analysis

        It still gets the trick question wrong:

        "What weighs more, a pound of feathers or two pounds of lead?"

        Or at least it did when I checked this about a month ago.

    3. This post has been deleted by its author

  4. Chris Evans

    Great article

    Very informative and well explained.

    I recall someone writing that despite the internet often giving wrong information, the truth will eventually come to the fore. These LLMs will, I fear, significantly increase the dissemination of wrong information, and I have no idea how realistically it can be stopped.

    On a related topic, there was a Radio 4 report this week about the fake Hitler diaries going on public view. An expert told the reporter there was no chance a far right group could use the diaries to aid their cause, as they were so obviously wrong! Given the last few years, with Covid deniers and 'Trump won the election', he is so wrong.

  5. Duncan10101

    It doesn't know Star Wars from Wall Street

    I think you have totally nailed it there :)

    1. TimMaher Silver badge

      Re: It doesn't know Star Wars from Wall Street

      To be fair, I’m not sure that many of us do.

  6. Omnipresent Bronze badge


    I'm still not going anywhere near that thing.

    Other than the fact that DDG has seen fit to slip it in their search unfortunately.

    We don't have a say in these matters, folks. We just do what we are told, slaves to the machine. That's what gives it god-like powers. We are controlled by it whether we want to be or not. The bad guys have deemed it so. So bend over, and get ready for your next paddling.

  7. vtcodger Silver badge

    Is there any way?

    I have no problem with using AI to generate ideas, or look for unexpected correlations in data. Of course the lying and just making things up is likely to be a problem. But that's hopefully going to be mostly a problem for the AI user, not the rest of us.

    And, of course, there is the potential of AIs screwing up almost all forms of testing and certification.

    And, I imagine we will have to deal with all sorts of dysfunctional tools that will try to separate AIs from real humans. Think CAPTCHAs on steroids. And probably even more erratic and buggy if that's possible. (Actually, I think AIs might be rather better than real humans at solving CAPTCHAs.)

    Also I'd like to know if there is any way to prevent companies from replacing their already obtuse and more or less useless "customer service" folks with cheaper and even more obtuse/useless AI agents? If nothing is done, this will happen. I guarantee it.

    1. CowHorseFrog

      Re: Is there any way?

      AI doesn't generate ideas at all; all it does is spit back out whatever you have given it.

    2. tiggity Silver badge

      Re: Is there any way?

      Had CAPTCHA hell yesterday, helping my partner set up her new device - lots of CAPTCHA entries needed on login pages. With one particular set, the distortion (various "lines" across the image) was really bad, so it was essentially impossible to know some letters for certain, e.g. was something a v or a y with a "short tail", an r or a t, due to the positions and opacity of the lines. We both agreed they were the most PITA CAPTCHAs we had ever encountered.

      Even with 2 of us working together we got some of the CAPTCHAs "incorrect" due to making bad choices on some 50/50 options - an "AI" trained on a large dataset would have done better than us humans, who are rarely exposed to such things.

  8. MrBanana Silver badge

    AI vs Captchas

    "I think AIs might be rather better than real humans at solving CAPTCHAs"

    This is my dream. I never want to look at nine possible pictures of a fire hydrant, bus, shady politician, bicycle, disheveled actor's hairpiece, another bus, avocado pit, split crotch panties, or traffic light ever again.

    1. Neil Barnes Silver badge

      Re: AI vs Captchas

      I wonder what these AIs would come up with if you showed them a Rorschach test card? And whether they come up with the same answer every time?

      The one with the leaky pen in the pocket, please --->

  9. Pascal Monett Silver badge

    "A more complete understanding of bias"

    And there you go going all sciency again. We don't want science, we want an Oracle ! (no, not the red one) We want easy answers we can repeat to make ourselves look intelligent and informed without all the hassle of actually understanding what we're talking about !

    That's what Cliff Notes were invented for.

  10. Anonymous Coward
    Anonymous Coward

    Did you know Chewie was teaching her

    OMG, I see the potential...

  11. UncleDavid

    AI output as propaganda

    Here in the US, an anti-technology group has trained an AI bot on various disputed animal studies and it predicted that the WHO will declare all microwave frequencies "probably or definitely" carcinogenic next year. This would be explosive if true, and you would expect it to be reported all over (spoiler: it's not, and wireless comms isn't even on the WHO's current list of focus topics).

    So the "5G will kill us all" group I'm battling in town has removed the context, or more likely read a Facebook group that intentionally removed the context, and attributed the claim directly to the WHO. I fully expect them to use it next week when we consider their proposed "stop all wireless facilities" bylaw. Even one of the candidates for Selectman (I'm in Massachusetts) seems to have bought into the idea that the WHO has an announcement imminent - and she is already on the Board of Health. The other candidate has bought into the entire "microwaves cause cancer" ecosystem. Town Meeting is going to be brutal.

  12. spold Silver badge

    The trouble being....

    With all the AI hype largely speaking to Regurgitive AI - which spews the vomit it has scraped from the bottom of the internet and been "trained" on, and which somehow people think is wonderful - the other classes of AI are largely being ignored.

    Namely, those being employed by companies for "automated decision making" that do things like turn you down for that car loan - these warrant far more scrutiny than the Artificially Unintelligent Projectile Crap models.

    Secondly, those being employed in areas such as medical diagnostics, which can put together a wider picture of data and efficiently come up with smarter information and a likely diagnosis for a human to review and consider. These need to be promoted rather than being lumped under the shitty shower of the other AI claptrap, but they are getting caught up in the reactive legislation being quickly enacted to deal with the stupid stuff...

  13. Paul Hovnanian Silver badge

    So, there's no hope ...

    "For instance, C4 filtered out documents [PDF] with what it deemed to be "bad" words."

    ... asking it for directions to Scunthorpe?

  14. abufrejoval

    Works as designed

    Thanks for that nice article!

    It confirms both my personal bias and my extensive testing - not with the publicly available models, mind you, but with the ones I could operate somewhat more safely on my own kit.

    Now as to whether you should qualify The Pile et al. as "garbage", I don't know. But it's absolutely biased towards what humans will react to more strongly.

    And with computers, cognitive science, and the stashes of data the likes of Meta have at their disposal, all of that will be used to influence people - but generally only towards where they wanted to go anyway: it's not enough to make rivers flow backwards.

    We can observe that rather well these days, because try as he might, Zuckerberg can't turn Meta's navel-verse into a roaring success - which I consider a giant relief and would like to see play out to utter agony.

    But it will continue to dis-empower the underdogs, because it's humanity that creates them, no matter how loud they're yapping!

    Happy coronation to all who consider themselves doing better!

  15. Meeker Morgan

    Remember Tay the Nazi stoner chatbot that passed the Turing test?

    And yes it did. Passing for human is the Turing test, not passing for a good person let alone an all knowing oracle.

    ChatGPT is too clever for that, but you can get something close to Nazi propaganda by specifically asking about Nazi propaganda. You still have to edit out where it says things along the lines of "The Nazis said" and references to the Holocaust. So you can't actually get it to be a Nazi like Tay, but if you're a lazy Nazi propagandist it can cut down your workload.


  16. The Velveteen Hangnail

    Doesn't matter

    While all of this is completely true, the core issue is that it doesn't matter if the answers aren't perfect. Hell, it's not even important whether the answers are accurate.

    As long as what ChatGPT provides is "good enough", it will still take over people's jobs.

    "Good enough" has always been the watch-phrase for almost everything that has come to market, from Microsoft Office to Facebook to cars to literally anything. It's the reason there are websites such as

    And of course, people who still _have_ jobs will be forced to use these kinds of tools to up their productivity and stay competitive. Hell, I've been using it to speed up my development work by having it write blocks of code for me when I know what I need, but it's faster to have ChatGPT write it than to potentially spend hours sifting through documentation. I have to verify the code, of course, cause I don't trust ChatGPT whatsoever.... but it's a massive net productivity boost.

    To this day, most people value quantity of work far more than quality of work, and LLMs provide that in spades. For example, how many publications have _already_ admitted to using AIs to write their content for them? Buzzfeed is one of the most well known examples, but it isn't remotely the only one.

    And when it comes to generating total bullshit, like Political Op Ed pieces.... ChatGPT is basically a money printing machine.

  17. Simon Harris

    Suspension of disbelief.

    When watching a movie (particularly if it’s a historical drama where there are known facts) and they get something very wrong about something I have a particular interest in, my suspension of disbelief vanishes as it sets me wondering what else they’ve got wrong in subjects I don’t know so much about.

    I get a similar feeling with ChatGPT - I’ve seen enough examples of things I know about that it’s got wrong that I have trouble believing anything it comes out with.

    Ok, so it may be ok to use it as a springboard for ideas, but I wouldn’t trust it with facts.

  18. martinusher Silver badge

    Just like a person, really

    This article about GIGO tends to imply that software will generate garbage compared to humans who, as we all know, always produce well researched product that's unerringly accurate etc. etc.

    The reality is that both the human (bot) and the computer (bot) produce the same kind of ill informed crap (based on a quick glance at today's message boards, news articles and so on). AI's got an edge because it's faster and better read than wetware. Wetware does have what we loosely term "common sense" -- assuming it can be motivated and/or bothered to use it.
