Devaluing content created by AI is lazy and ignores history

It's taken less than eighteen months for human- and AI-generated media to become impossibly intermixed. Some find this utterly unconscionable, and refuse to have anything to do with any media that has any generative content within it. That ideological stance betrays a false hope: that this is a passing trend, an obsession with …

  1. Pascal Monett Silver badge

    I vote for "well and truly lost"

    Shoulda woulda coulda yada yada. It's all very nice to request the public to be responsible, but the public isn't and it's not being given the tools to be responsible about AI.

    Those decisions are all being made at the Board level by people who, frankly, do not have the public's interest at heart in any way, shape or form.

    So the only tool the public has is blind disregard for anything AI. Once the Board suits have understood your message, and understood that it is in their interest to show the public what has been 100% human-generated versus what has been "facilitated" with pseudo-AI, then we'll have a better environment in which the public will be able to gradually accept and integrate that, indeed, pseudo-AI is not going away and there are things that might be better for it.

    But until then, only shunning AI is going to have any effect on Boardroom discussions.

    1. Pu02

      Re: I vote for "well and truly lost"

      AI creating Art is just logic processing copies of everything already done, just as humans have been doing all along. But where is the value?

      With humans, good Art filters through people's tastes, interests and the various mechanisms we use in human communities, to be recognised and valued.

      With bots, there is infinite processing logic, and infinite distribution channels, with only humans to filter it out using outdated methods and hopelessly overloaded resources, mostly at the end points. It's a train wreck in process.

      So don't taint your work by using AI to generate voices, even if they are to represent AI. It will just break the system that isn't yet able to manage the tidal wave coming.

      Just use a device to process computer-generated voices instead. Or, if in a pinch, a human - some of the ones on the telly sound more like bots every day, so you must be able to find some good ones!

      1. This post has been deleted by its author

      2. cdegroot

        Re: I vote for "well and truly lost"

        How do you think computers generate voices? Or modify them? It's algorithms all the way down, from your simplest band-pass filter to GPT-4, with not much in between to separate them.

        AI generates a ton of crap art and now and then something surprisingly good (usually something Dali-esque). The next step, probably, is to rig up a computer to separate the wheat from the chaff, then one to hang the winners in a museum, and then a bunch to appreciate it all ;)

  2. Headley_Grange Silver badge

    The UK has just passed a law to make creating deepfake porn with AI a crime. They didn't go far enough (in my view, others are available) and missed an opportunity by not making the AI's owners and creators jointly liable for the offence. I get the "people kill, guns don't" argument, but we are at the very beginning of a concerningly disruptive technology and it's better to start hard with legislation that can be relaxed later rather than assume we can legislate problems away once the cat is out of the bag.

    1. Mike 137 Silver badge

      "it's better to start hard with legislation that can be relaxed later"

      Except that in the UK at least, like the Pygmalion hurricanes, such relaxation hardly ever happens.

      1. 43300 Silver badge

        Re: "it's better to start hard with legislation that can be relaxed later"

        "Except that in the UK at least, like the Pygmalion hurricanes, such relaxation hardly ever happens."

        Quite - and it's got worse in the past few decades. Successive governments absolutely love making more and more laws. Quite often there is a panic about something or other and the response is yet another law (often badly drafted) rather than thinking about whether a) it really needs legislating against, or b) it falls under existing laws which simply aren't being enforced. The latter often applies, but politicians prefer a 'new law' so that they can claim to be 'tackling the issue'. It doesn't usually matter to them whether the new law is workable or not, as there's rarely any backlash afterwards even if it's unenforceable.

        This is a characteristic of late-stage large organised societies - a plethora of petty rules and regulations, prior to complete collapse (see the Roman Empire for an example).

    2. tony72

      and missed an opportunity by not making the AI's owners and creators jointly liable for the offence

      You can make deepfake images with Photoshop or GIMP - are you seriously suggesting Adobe or the GIMP devs should be held liable if someone does that? If not, then why should it be any different if the software in question happens to be "AI"?

      1. Headley_Grange Silver badge

        No, I said AI and I meant AI. I neither said nor meant "Gimp" or "Photoshop" or "anyone who owns a pair of scissors and a pot of glue". The "a gun is a tool no different from a teaspoon or a pepper grinder" type of argument is why we will never control AI.

        1. Catkin Silver badge

          Could you please highlight where you view the difference in terms of culpability? In other words, what specific capability 'AI' image generation offers and how this pertains to culpability on the part of the coder?

          1. Headley_Grange Silver badge

            I have never suggested that the culpability would lie with the coder, just as I wouldn't hold a bricklayer responsible for people making crack in a building he'd worked on. I'm also not saying that the owner of the AI is to blame for deepfakes, but they are a clear opportunity to stop them, and given how bad deepfakes will be for society and everyone (and they will be, because the "it's just a tool" view will prevail), I think we should take the opportunity to try to manage it now. If the AI owners could go to prison then they'd put in limits to prevent their systems being used for deepfakes.

            The difference between AI and, say, Gimp? Ease of use. I've used Gimp: it would be easier to kidnap Taylor Swift, bring her back to the UK, cut off her head and sew it onto someone else's body than for most people to create a credible deepfake of it using Gimp. I've never edited a video and even if I could I can't do a passable voice impression of my own brother, never mind a world leader - or Taylor. With AI I can just ask it; it's not great at the moment and the asking needs a bit of skill - like using Google search in the early days - but it will get orders of magnitude better very soon and then people mentioning Taylor Swift to AI will probably get auto-complete suggestions about TS porn before they've finished their sentence.

            I know I'm banging my head against a brick wall - there will never be such legislation. I think we're at the point where it's almost too late anyway - the "criminals will always be able to get AI" arguments are almost upon us and the people who own AI won't let it be controlled in any way that reduces their returns. The potential impacts on daily life make me glad I'll be dead soon and won't have to live in a world where I have to doubt absolutely everything that I hear or read or see unless they are my close friends and family sitting within touching distance. I've never been so glad to be old.

            1. Catkin Silver badge

              I suppose it comes down to where you think 'too easy' is. Personally, I could probably edit something rather nasty together in GIMP (Certainly in PS) but couldn't do it using older airbrushing and wet photo compositing techniques. There are likely still those who could do the latter. I would very much object to PS introducing guardrails, not least of all because that would involve a certain amount of privacy invasion.

            2. TheMaskedMan Silver badge

              "I've used Gimp: it would be easier to kidnap Taylor Swift, bring her back to the UK, cut off her head and sew it onto someone else's body than for most people to create a credible deepfake of it using Gimp. I've never edited a video and even if I could I can't do a passable voice impression of my own brother, never mind a world leader - or Taylor."

              Just because you (and I) can't, doesn't mean that others are similarly limited. Sticking a famous head on a porn star's body has been a thing for as long as there's been porn on the net, and the suspicion (or excuse) that an unlikely / inconvenient image has been photoshopped is almost as old. Plenty of professional and amateur entertainers can produce passable imitations of famous people, too - mimicry has always been popular.

              For those with access to the appropriate skills and equipment, making a very convincing image or video has been possible for quite some time, but the facilities have been limited to movie studios etc. The tools - and yes, they really are just tools - are now within the grasp of the great unwashed, and suddenly we "need" new laws? I don't think so.

              If anything, the proliferation of these tools will just let the rich, famous and political elites get away with more than they did before - any inconvenient photographic evidence of their inevitably shady antics will be dismissed as fake, and who will be able to prove that it isn't? After all, the tools are so widely available these days.

              Like you, I'm on the final furlong and don't relish a dotage in which AI is crammed into everything just for the sake of it. But it's here, it's not going away, and it is at least moderately useful. Best to go along with it and let the younglings decide what they're going to do with and about it.

            3. Bugsy11

              No need to bang your head for such legislation. All you need to do is wait for Apple to build gen AI into Final Cut and Margrethe Vestager will be right there to sue them for not having deep fake restrictions built into their AI-infused video editing app. She loves to sue Apple for anything and everything but does not mind the Big Five music labels all colluding to prop up a single music streamer to rip off EU customers and EU artists simultaneously. As long as a company is EU based, she apparently feels antitrust laws do not apply to them. But with her animosity towards Silicon Valley companies, it's only a matter of time before AI-infused video editing apps will be sued if they do not have deep fake restriction algo's built in.

          2. heyrick Silver badge

            Having tried to fake an image once (pasting myself by a nice lakeside, and doing a far worse job than Kate), I can say that making a believable fake using photo editing software takes effort, and time, and maybe then it will still look naff.

            An AI, on the other hand, you write a description of what you want and wait a few moments while it generates an image. If it isn't good, just hit refresh (optionally tweaking the description if it took something a little too literally) and repeat until you have a credible enough picture of the Pope skateboarding...

  3. Doctor Syntax Silver badge

    I wouldn't call avoiding AI/ML output "ideological".

    It seems an entirely practical approach. If the content can't be traced back to its original source, it can't be subject to any of the approaches we might have for evaluating it. It's worthless despite being created at vast expense. The entire AI/ML enterprise is an exercise in squandering trust, money and electricity.

    1. Will Godfrey Silver badge

      Indeed. It changes not being able to trust some content, to not being able to trust any content.

      1. Dave 126 Silver badge

        > It changes not being able to trust some content, to not being able to trust any content.

        Sadly yes.

        Though interrogating data and narratives does suggest that what is true and real is coherent, whereas batshit crazy bullshit isn't - i.e. ML may be able to provide tools for detecting bullshit.

        Sorry for my agricultural language, but 'bullshit' has become a technical term... Lies have an antagonistic relationship with the truth, bullshit is oblivious to it.

        1. Claptrap314 Silver badge

          A couple of weeks ago, I realized that an LLM is a functional bullshit artist. Then I had to check myself - it is the AI companies that are the bullshit artists; the AI is the bullshit that they are getting us to eat.

          But back to the initial observation. These LLMs have been architected, designed, and built around the idea of generating plausible output. And they are good enough that one generally needs to be an expert to catch their errors. In particular, if an LLM is generating code, you have to scrutinize it thoroughly (after you fix the syntax errors and compile bugs) to see if there are security issues.

          I got in an argument with my boss this week about the implications of this. But he drives a Tesla, so he's already entrusting his life to one of these systems which is far from adequate to the task, if evaluated skeptically.

      2. Catkin Silver badge

        Personally, I don't see this as a bad thing. It's been possible to fabricate pretty much any image/audio/video for many years now; the only thing that's changed, in my view, is the barrier to entry. I don't see that cat going back in the bag so if the public is left a little more sceptical about what they're presented with, that seems like a plus.

        I think it's telling that content fabrication is a concern now that it's in the hands of just about anyone, rather than primarily those with significant resources.

    2. Headley_Grange Silver badge

      AI is following the trajectory of most tech these days. The potential to significantly improve nearly all aspects of life - work, leisure, health - is massive but it's undermined by the commercialization and exploitation of people to the extent that most of the real benefits will never be realized.

      For example, it would be fantastic to put all the NHS patient data into a single database run with AI. From a personal perspective, that would mean my health data would be immediately available to all health professionals wherever I was - GP, A&E, paramedic finding me by the side of the road, etc. From a public health perspective the opportunities are endless and could make genuine improvements. It's never going to happen, because I believe the people who can make it happen are more interested in making revenue from my private data than in making my life better, so they can fuck off. AI will develop the same way that the internet and its services have - primarily as a means to line the pockets of the likes of Meta, Google, etc., with any real public benefit having to be picked from the bones of what remains.

      1. Doctor Syntax Silver badge

        "It would be fantastic to put all the NHS patient data into a single database run with AI."

        The last 3 words are irrelevant to the benefits and also to the drawbacks. It would be better to take the AI out of any NHS consideration and work out how to make a straightforward database work without betraying trust.

      2. Anonymous Coward

        "For example, It would be fantastic to put all the NHS patient data into a single database run with AI."

        No it fucking wouldn't, get to the rubber room you belong in, with that shit; even an AI would diagnose you as fucking mental

        1. Headley_Grange Silver badge

          I'd be just as hostile to having my records shared, and I would be incandescent if they trained AI on it. I don't trust them. Cos it could fuck up my insurance. Cos I could get scammed. Etc. I think your post just emphasised much more succinctly what I think too. But it's such a fucking waste.

          One of the biggest issues in the NHS is that most doctors have no idea how well their diagnoses and treatments work - GPs in particular. There's no relationship any more. I see a different doctor every time I go to the GP and we get a strict 10 minutes so there's no "apart from the rash, anything else I should know about" sort of chat. The GP treats and prescribes and usually he/she doesn't see me again and that's it. They don't know if I was cured or if I ended up in hospital even worse or just dropped down dead. Just uploading patient records and correlating treatment with what little is known about outcomes could pay major dividends - both in patient recovery and the cost of treatment and drugs. It could be literally life saving. It'll never happen because people, including me (I opted out of the data sharing**), now have such a deep distrust of the tech companies and, even more distrust of any combination of tech company and government. All that potential, all that science, all that potential benefit, lives saved,...etc., all of it chucked away because a few very very very very rich people want to get even richer by trampling all over my privacy and selling my data to anyone who's willing to pay for it.

          **I'm not naive enough to believe that anyone took any notice and I think that the opt-out is worthless now in light of the deal with Palantir.

          1. ITMA Silver badge

            "One of the biggest issues in the NHS is that most doctors have no idea how well their diagnoses and treatments work..."

            I would disagree.

            The biggest issues with the NHS stem from politicians - their constant fiddling, interference, "reforms", "reorganisations", etc. etc. etc.

            How much of the precious NHS budget is wasted on that crap?

      3. Anonymous Coward

        I really, really don't want my medical data to be accessible to any medical professional who goes to look for it. Are they trustworthy? Probably 99% are. Which leaves a lot of them that aren't.

        As for AI working with medical data:

        1. Using it for research, to identify apparent correlations between patient history (medical, work, location, etc.) and diseases to attempt to identify causes of those diseases: Great idea, so long as it is VERY STRICTLY anonymous, with few humans having access to the anonymized data and the AI being designed to be unable to give info on one individual. (If queried with something where the sample size is less than, say, 5, respond "not enough data".)

        2. Using it for diagnosing or picking treatment options for a patient: %(*& no!!! I want an actual doctor, not a decision based on a summary of web articles of unknown accuracy done by something that doesn't even understand what it's reading!

        1. LybsterRoy Silver badge

          I'm slightly different. I do want my medical data accessible to any medical professional that requires it (note the difference) and would like them to know I'm diabetic BEFORE they prescribe sugar pills.

          On your point 2 some of the doctors I've met operate in a very similar fashion. The main difference being the computer can read a LOT more articles than the human being. What would be nice would be the computer providing information such as "treatment x cured 83% of sufferers prescribed it, treatment y cured 67% of sufferers prescribed it, all those prescribed treatment z died."

      4. Anonymous Coward

        "For example, It would be fantastic to put all the NHS patient data into a single database run with AI... ...It's never going to happen because I believe the people who can make it happen are more interested in making revenue from my private data than making my life better."

        Oh, I estimate the chances of it happening are rather high. Selling the likes of Google access to all that juicy data is already coming under increased scrutiny from the public, even when they say they will only use it to train their medical tools and will delete all the data. So how about using all the data of NHS *patients* not only for research, but also for building a tool to give patients and doctors better access to their data and let them see better patterns in it? That must be good, right?

        If we are not careful, some improved sharing and access of our health data will be the "new free" of Google, Microsoft, (not so) OpenAI and other big tech companies. They'll provide it to us "for free". They'll just slurp all our medical data with 100.000% full access while using that data 100.000% to further line their pockets in as many ways as possible. On the side, just like with mail and search and... they'll destroy paying and subscription based competition because that can't ever compete with "free" services. History repeating itself over and over again if we don't learn from it.

      5. jospanner

        wtf does “run with ai” mean?

        mba brains really do treat it as a talisman

  4. HuBo

    Irrational exuberance skins cats

    Interesting POV article and komments (above). Coupling Mark's genAI book with Tobias' 10-minute HowTo could be a great way to vaccinate oneself against the potential side-effects of the AI-hyporama virus.

    This being said, I think that given the girth of AI as a field, it is important to delineate a hypersurface of separation between its nonsensically "batshit crazy" bits (e.g. as referred to by Dave 126) and those bits that are actually useful (e.g. Intel's Olena Zhu's work on optimizing the thermal layout of chips). AI used to help overcome blank-page syndrome more creatively than through PowerPoint templates is probably a good thing, as is other AI targeted at assistive use for humans (e.g. Headley_Grange's public health example). On the other hand, AGI, superintelligent AI, and genAI aimed at replacing people with algorithms, after illegally scraping their creative outputs and plagiarizing those works without attribution, are not.

    IMHO it's important to grayscale our valuing/devaluing of AI accordingly (or Pascal Monett's shunning/praising), and keep ourselves educated and knowledgeable about this (reading ElReg and trying its HowTo's). Beyond Sally Dominguez's (Mark's colleague, paraphrased) "unfettered human creative thinking to level-up the Fourth Revolution", we also want to be sure to develop and apply the best of our critical thinking abilities to this field.

    1. Anonymous Coward

      Re: Irrational exuberance skins cats

      No worries about AGI or superintelligent AI anytime remotely soon - they don't exist and won't for a long time. Gen"AI" is just using probabilities to generate the next piece of the image/text, and has no idea what it is building. (It has no ideas whatsoever; it's all A with nonexistent I.)

      Current "AI" is really, seriously overhyped. It can't be used anywhere where the output (as-is, without thorough human checking) needs to be accurate - see recent cases of lawyers being fined for citing AI-hallucinated case law. Which means its usefulness is pretty much limited to coming up with new ideas of things to check (where it may not be any better than humans), and the arts, where there's a huge copyright-infringement issue. If someone's job is going to be replaced by current "AI", they should be looking long and hard at what skills they have, and what they need to learn to be more useful than a dice-roll-based content generator.

      1. Justthefacts Silver badge

        Re: Irrational exuberance skins cats

        “It can't be used anywhere where the output (as-is, without thorough human checking) needs to be accurate”

        In any human enterprise where correctness matters, no human reaches the standard either, without thorough checking by another human. We’ve used independent cross-checks by separate people since forever. The raw output of a single human, without process, is so deeply fallible that only the very lowest value operations can ever use it as-is.

        Code gets reviewed before going to production. When the surgeon comes to operate on your leg, one doctor draws an arrow on your leg to indicate which one… and another doctor separately asks you which one it should be. Authors get copy-edited. Airline pilots have co-pilots. Every manufacturing plant has had separate QA people for a century. Even if you literally just make and sell sandwiches at minimum wage, you'll find that a Safety Elf will come round and check you are doing it right.

        Checklist review is how the world works, and AI task offload is of course no different.

  5. Howard Sway Silver badge

    drawing a line between "real" and "fake" betrays a naïveté bordering on wilful ignorance

    Making assertions like this in order to pump up the AI hype is, however, even more naive. I get that an article consisting of facts that have been gathered by a human may not differ all that much from an article consisting of facts that have been gathered by an LLM - although we'll be stuck forever reading articles full of dry information, without any deeper insight or analysis, if that ever becomes dominant. But there are very many reasons to draw this line when looking at things that are either true or false. The argument that it doesn't matter that AI makes up false stuff, because some humans do that too, should not be an excuse for this technology's shortcomings.

    As for the idea that people should be made to honestly state how much of something is generative AI output using a "dial", I have news for the people proposing that - not everybody is going to do that honestly all the time.

    1. Doctor Syntax Silver badge

      Re: drawing a line between "real" and "fake" betrays a naïveté bordering on wilful ignorance

      If the article contains real citations to relevant source material which bears out the point being made, then it would matter a good deal less whether the article was written by a human or an AI/ML system. However, what currently passes for AI/ML seems to be a pastiche generator. Yes, these systems can create pastiches of articles with citations, but it turns out that this is simply because the training has taught them what a citation looks like, not what it is - and it's the appearance that has been reproduced, not real content.

      1. DoctorNine

        Re: drawing a line between "real" and "fake" betrays a naïveté bordering on wilful ignorance

        This has been my criticism all along. AI isn't actually AI. It is a chimera. So what we are dealing with, is a simulacrum of substantive human interaction, without the substance. Since humans actually need the social milieu for their wellbeing, and subsist as individuals within society by being part of this web, feeding them that simulacrum instead, is empty non-nutritive consumption. Many people can sense this decrement in the quality of their lives, as virtual spaces, and now these virtual beings, rob them of the REALITY of their daily experience; indeed of their very existence. It is impossible for me to view this impending tidal wave of unwanted artifice with anything but loathing. People who are enthusiastic for such a revolution seem frankly daft to me. In the 1970s I used to say, 'Keep it real'. That means quite a bit more than its three simple words would appear to say directly.

  6. Mike 137 Silver badge

    "fully artisanal human content"

    Depending on one's pronunciation, that could sound almost Freudian.

  7. Watashi

    Good AI vs bad AI

    I don't think many people would argue that modern CGI hasn't successfully revolutionised cinema and TV production. However, there's an awful lot of mediocre or just plain bad CGI being put into new films and shows. I expect AI will be the same.

  8. Groo The Wanderer Silver badge

    Near as I can tell, 99 percent of "AI generated content" is whacking material prompted up by teenage boys, who have as realistic and mature an expectation of the female human form as anyone else who has been programmed by modern society. I avoid the content because I'm not interested in whacking off, not because it isn't "pretty." But there is a lot more to art than a pretty picture.

    It does have its niche uses, such as generating repetitive content to support more ambitious interfaces (I'm thinking icon generation, for example), so it won't go away, but unfortunately it makes the talentless droves think that they're "artists", much like it makes a bunch of video-watching clowns think that they're "self-taught programmers" when what they produce is near-illiterate garbage. There is a lot more to true art and true programming than what such "talents" are capable of producing.

    Society is racing to the bottom. We're being drowned in content that is based on averages and statistics...

    1. pdh

      > Society is racing to the bottom. We're being drowned in content that is based on averages and statistics

      Problem is that many people are quite happy with that sort of content. They're looking for entertainment, not for art.

      1. Catkin Silver badge

        This is hardly a new 'issue'. For example, the whole business that led to the legitimate theatre.

        1. An_Old_Dog Silver badge

          Aren't most successful prostitutes actresses and actors? "Oh, Baby, you're the best!", etc. I think some people savor the (not-credible-to-any-logical-and-rational-person) illusions more than they savor the actual sex.

          1. Catkin Silver badge

            That's entirely true but 'legitimate theatre' was a marrying of cultural protectionism with censorship.

  9. AVR

    Laws aren't optional

    The legal stuff does need to be sorted out before useful commercial content can be a real thing, though. There's more than one reason for Audible to reject your potential AI-voiced audiobook - would you even own the copyright to it? If not, would Audible face legal hassles later if they sold access to it? There are common-sense ways to deal with such issues, but no guarantees that all legal systems will follow the 'common sense' path.

    1. veti Silver badge

      Re: Laws aren't optional

      That's actually not that difficult to sort out. The author owns the copyright on the words, that's not controversial. And (I don't know for sure, but I assume) he can buy the voice from Elevenlabs with such terms that he owns the exclusive right to distribute it, which he can then assign to Audible.

      Sure it's a hurdle, but it's pretty easy to clear.

  10. Szymon Kosecki

    Division by null....

    You can't devalue something without a value to start with....

  11. Anonymous Coward

    Suggestion for another article

    A discussion of whether Audible, an Amazon company, is an oppressive monopolist in the field of audiobooks.

  12. NapTime ForTruth

    The single most important thing AI can do...

    ... is to render the Internet unusable for any meaningful purpose, and in so doing drive us back toward actually thinking and acting in the real world (if only occasionally, lazy curs that we are).

    Being handed an answer isn't learning, and being offered synthetic images of your imaginary dream partner isn't a relationship. Yet we're too weak to turn off what amounts to an interactive version of television and go be in and of the world.

    With luck, and some significant probability, generalized AI could be the tool that either ends the omnipotent artifice of online "presence" or renders the human component of technology redundant.

    Perhaps somewhere ages and ages hence our successors will find the remnants of us buried thousands of meters deep in the internetworked strata, just below the still-hot fallout layer, and will draw wise conclusions and wiser paths from the folly of our self-destruction.

    I hope they are evolved from cats.

    1. Paul Hovnanian Silver badge

      Re: The single most important thing AI can do...

      AI didn't render the Internet unusable. We did that. Years ago. AI is just a face of authority that makes us trust garbage we find online.

      Anecdote: Many years ago, I went looking for the origins of a quote I had heard in an old movie but could not remember the source of. Various search engines place the origin in a movie released in 1997, but I can clearly remember it from my days in high school, several decades earlier. The search engines are quite authoritative, giving the above date more often than not as its origin, instead of "first use found".

      Sorry. Bing, Google and even AltaVista have enshittified our knowledge base.

  13. IGotOut Silver badge

    The writer puts one side of the argument....

    ....but I do wonder how they will feel when their book is ingested into the AI world, and then a thousand knock-off copies are sold for 99c on Amazon and people go "No need to pay for that book, I can get ChatGPT to give it to me for free".

  14. Anonymous Coward
    Anonymous Coward

    mm AI great NOT

    AI is a newer, fancier Markov-chain system, with better filtering to stop it coming out directly with junk.

    It still generates junk based on probabilities derived from what it was fed, but it's harder to spot why it's junk, which makes it more dangerous.
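
    The Markov-chain analogy can be made concrete with a toy bigram model - a sketch only, and a vast simplification of how transformer-based LLMs actually work:

    ```python
    import random
    from collections import defaultdict

    def build_bigram_model(text):
        """Map each word to the list of words observed following it."""
        words = text.split()
        model = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            model[current].append(nxt)
        return model

    def generate(model, start, length):
        """Walk the chain, picking each next word in proportion to how
        often it followed the previous word in the training text."""
        out = [start]
        for _ in range(length - 1):
            followers = model.get(out[-1])
            if not followers:
                break  # dead end: the last word never had a successor
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat ran"
    model = build_bigram_model(corpus)
    print(generate(model, "the", 6))
    ```

    Fed a large enough corpus, such a chain produces locally plausible but globally incoherent text - which is the commenter's point about probability-driven junk, even if modern LLMs condition on far more than the single preceding word.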

    1. Pete 2 Silver badge

      Re: mm AI great NOT

      > It still generates junk based on probabilities based on what it was fed

      If you genuinely believe that, you should stop what you are doing and read this article in The Spectator.

      It will update your information, which seems to be at least a year behind.

      1. An_Old_Dog Silver badge

        Re: mm AI great NOT

        I read the article you linked to. Was I the only one who noticed the LLM system referred to simultaneously praised and criticised the ambiguity it found? (I'm presuming the ambiguity of the cause of Emily's death is part-and-parcel of the ambiguity of the book itself.)

        Despite what some (many?) people "feel", these systems are not intelligent, and they don't "understand" anything.

        I've played with an ML image-generating system, and with ChatGPT-3. In most cases, my results were crap. That probably has to do with the databases those systems were trained on, and with how the various elements were weighted.

        This is similar to the reason why expert systems aren't often-used: they are too hard to train well.

  15. Pete 2 Silver badge

    A Day at the Races

    > refuse to have anything to do with any media that has any generative content within it

    Queen's first few albums included a small notice that reminded us "No synthesisers". That all the music was generated by people playing actual instruments.

    However, they later relented and adopted synths.

    ISTM that AI will follow a similar path. That those who initially choose to avoid it, will eventually see the benefits and join the rest of the world in using these tools to make better stuff.

  16. heyrick Silver badge

    Devaluing content created by AI is lazy

    I beg to differ, and no, it's not an ideology.

    For starters, the entire name is a fraud - there is no intelligence in AI. It may be much better at pattern matching than us humans, but sadly there's no understanding. It could make you a picture of $POPSTAR naked because it has ingested enough images to recognise what you meant by $POPSTAR and it has also ingested plenty of images of people in various states of undress. But as can be seen by the fingore, missing/additional limbs, and creepy crazy eyes, it doesn't really know much about human anatomy over and above what it has inferred from pictures. Furthermore, it has no doubt chewed its way through all sorts of medical textbooks, yet it still has trouble with really rudimentary things like "how many fingers?", not to mention the odd leg sticking out of somebody's head and the like. The textual version of AI is very similar. Close enough to convince if you don't look too hard at the details, but still far from anything truly useful.

    The second thing is that AI might have applications... certainly it's useful to have basic "person/object recognition" in a photo editor to remove unwanted people/things from a photo and do an okayish (YMMV) job of filling in the hole... however, since it's "the hot new shit", it is being shoehorned into all sorts of places where it's neither necessary nor even appropriate (see the recent story about the Roku patent for detecting what you're watching to serve up adverts when you pause). The massive concern here, which cannot be downplayed, is that AI is computationally expensive, so there's a pretty good chance that AI will be brought to mediocre hardware by simply offloading the processing to the cloud. What this means, in other words, is that not only is your gadget's functionality entirely dependent on the availability of the remote service but - perhaps more importantly - it'll be sending shitloads of data off to be processed. A massive invasion of privacy.

    And, finally, can we actually put that much trust in the blatherings and creations of machines that got where they are today by industrial-scale theft and pillaging of other people's data and information, sucking up god knows what along the way?

    AI is today's big whoop. That's why we keep having endless stories about it. And, to be fair, it is a pretty impressive toy. But a toy nonetheless. Not the saviour of mankind, nor something that is liable to do much more than eventually mutate into a maybe-useful tool (once they start giving a shit about where, exactly, the training input is coming from).

    Until that time, treat it with the utmost of scepticism.

  17. cleminan

    Drowning in synthesised rubbish.

    My current stance on LLM AI content roughly goes along the lines of: If you can't be bothered to create it I can't be bothered to read/watch/pay for it.

    LLM models being used to replace creative endeavour feels like employing the technology to solve the wrong problem. Pattern matching to spot flaws, infections, the early stage of a potential disaster, tumours, a first-pass sanity check on code - they all feel like positive uses of the technology. Outsourcing creativity and media to a probability database smacks of wanting more for the sake of more; there's no added quality and little room for adaptation, invention or interpretation.

    The creative LLMs are fun to play with, but they shouldn't be considered more than toys. An amusing sideline, like novelty records.

    Anything created from LLMs should also be copyright free.

  18. Orv Silver badge

    I just laugh bitterly at how, once upon a time, automation was supposed to free up humans to be creative by taking over menial work. But now tech innovators have decided that, no, it's the creative workers who should be replaced and only the jobs so menial they're not worth buying a robot to do should be left to humans.

  19. pip25

    Meanwhile, in another El Reg article...

    "AI spam is winning the battle against search engine quality"

    That is exactly the problem. You can create quality works with AI assistance - and plenty of human input and refinement. But most AI "content" is created with entirely different priorities: minimal effort with maximum payoff. The typical AI output encountered by people today is garbage, and I doubt labeling would help with that.

    1. amanfromMars 1 Silver badge

      Re: Meanwhile, in another El Reg article...

      The typical AI output encountered by people today is garbage, and I doubt labeling would help with that. .... pip25

      Such is most probably the case, pip25, because garbage is what people generate and AI output typically encountered by people has been trained with input entertaining and expanding upon that fact and virtual reality.

      And unfortunately, inputting with something/anything/everything different and the output from sub par super humans is invariably reliably practically guaranteed to still be the spilling of even more garbage highlighting the likes of the view and opinion that alas the chatbots are still generating meaningless nonsense

      Having been initially trained on human input/output, is the logical natural initial AI result pre Advanced IntelAIgent Learned Large Language Learning Machine Input/Output bound to be basic and tainted and skewed towards the savouring and favouring and flavouring of garbage entertainment and production, but to not expect it to rapidly change with outputs elected and selected sharing an otherworldly available situation for radical revision of future present direction choices, is madness confirmed as systemic and endemic in humanity, and with particularly specific and peculiarly engaging regard to the Much Bigger Picture, is it realised they haven’t a clue about what to do about anything.

      And the question left hanging there to ask is ...... ? Realised by whom and/or what? Future AI[s] or present day human[s] ‽

      1. pip25

        Re: Meanwhile, in another El Reg article...

        Thank you for proving my point, though I could have done without it. >_>

  20. Steve Davies 3 Silver badge

    Of course we will devalue it.

    The usefulness/reliability of AI is still being evaluated. Until it can be proven beyond all doubt, we all will - or should - have doubts about it. Would you trust your life to critical systems that might suddenly go rogue and say 'I'm sorry Dave, I can't allow that'?

    What safeguards are being put in place?

    Who is tracking the infiltration of AI generated code into mission critical systems?

    The answer is clear. No one is overseeing this.

    Will it take one of these [see icon] before we come to our senses? Probably.

    1. amanfromMars 1 Silver badge

      Re: Of course we will devalue it.

      If you can believe main streaming media pundits and outlets of late, Steve Davies 3, the threat of nuclear weapons conflict is already, as has always been since their conception and storing, an option being considered proportionate and necessary by crazy humans without any assistance from AI.

      And as for your question and answer ........ Will it take one of these [a mushroom cloud] before we come to our senses? Probably. ...... humanity in general does appear to be disposed to listening to and supporting leaderships which tolerate the spreading of such insanity rather than fundamentally opposing it and ensuring it can never ever happen at any time on the orders of a cabal of idiotic fools with no sharp tools in their boxes of tricks.

      Deaf, dumb and blind to simply complex common sense renders the idiot ridiculously stupid beyond compare, does it not?

  21. nobody who matters

    @Mark Pesce:

    Your whole article deserves a massive downvote in itself.

    You are falling into the same trap (and being misled by the same marketing tripe) that is the sole reason why the current state of so-called 'AI' is so dangerous and really should be kept under very tight control (i.e. not in the hands of the population at large); it simply is <NOT> artificial intelligence, and people being constantly persuaded by marketeers that it is Artificial Intelligence leads them to a level of expectation for its abilities which the LLMs are simply not capable of delivering.

    AI possibly will become the all-pervading entity that you suppose, but that will be when true AI arrives, and I have my doubts whether that will ever happen: I suspect the Human Race stands a very good chance of destroying itself long before then, because someone somewhere will be misled into trying to use the lead-in tech to do things that it isn't capable of. A case of trying to use the tech to run before it has properly learned to crawl, which is basically what we are already seeing happening with the LLMs.

    It ISN'T AI ;)

  22. martinusher Silver badge

    Obviously you don't really get this creative thing

    The value of a creative work is in the effort needed to create it. There's no easy way to define this, but for want of a better term we could use words like 'soul' to describe that part of a person that brings an original work into being. It's true that in the process of doing this a person might use a number of cultural rules, an agreed mental roadmap that describes the form the creative work takes, but this is the difference between formulaic culture and enduring works of art.

    My particular interest is music, mostly classical music. It's probably true that, because of the rules of Western harmony, every piece of music that could ever be has already been composed by someone at some time. Much of this has been forgotten, and rightly so -- I've got a collection of pop music from the 1840s and you wouldn't believe the level of tripe it represents. We only know the Schumanns, the Mendelssohns and the like through their quality output, which probably represents only a fraction of their total output. AI proponents want to foist the noise on us, the formulaic, and get us to pay for the never-ending copyright battles. No dice, not interested. It's true that some original work could come out of the machine, just as the proverbial thousands of monkeys with typewriters could create the works of Shakespeare. It just wouldn't be Shakespeare.

    AI and similar tools are already running rampant on the web. You can tell -- find a webpage that's 'content free' and you're there. I just click away.

  23. AI Hater

    AI is the anti-life equation

    Here is to hoping that the demon that is AI kills the Internet and saves us from itself. Nothing else more purely represents something that is anti-life.

  24. Myristica

    AI generated content is a complex and nuanced topic. I devalue AI generated content because it's not art. Art reflects the author, artist, composer, shining with their tastes and labor. AI has little to none of this.

    AI content has no effort to value. We wouldn't congratulate the handwriting of an essay typed on a keyboard, or the brush techniques of a digital artwork. The font and digital brushes themselves may be worthy of respect, but that respect hardly belongs to one simply using them.

    Likewise, AI generations contain little additional value. I believe people like using AI because it makes them *feel* like artists, but without the effort, and they refuse to acknowledge the AI as a tool, rather insisting they alone are the masters. Why should I respect that? Sure, it may require prompting skill or iteration, but does that really earn someone the result? I'd rather admire the prompt and the relentlessness, not whatever the AI spits out.

    On another note, I feel humans are far better at voicing text than AI. Sure, AI can get you good enough, but no further. A skilled narrator can recognise the tone of text, add helpful pauses, speed up or slow down. Sure, some AI can do this with operator assistance, but at that point, telling a computer exactly how and where to change tone, you might as well just record it yourself.

    I hope you can see my stance on this complex topic. AI users don't put in effort - not emotional, intellectual, nor physical - and I can't respect something that had nothing go into it.

  25. jake Silver badge

    Lost in the kerfuffle ...

    ... How can I "devalue" something that is completely valueless?

    Most intelligent folks realize AI as sold today is snake-oil at best.

    Consider that today's AI is mostly a marketing exercise that doesn't work coupled to simple machine learning and huge databases that are demonstrably full of incorrect, incomplete and incompatible data, and are otherwise corrupt and stale. Garbage in, garbage out.

    As currently implemented, AI can not work as advertised, not on a grand scale. Not today, and not any time in the future.
