Red Hat promises AI trained on 'curated' and 'domain-specific' data

In Red Hat land, some things remain the same – Fedora will still be supported, we're told – while others, including AI-driven applications, are starting to surface. This year's Red Hat Summit wasn't the usual low-key event. Coming on the heels of Red Hat's first layoffs, it felt fair to brace for a somber air. Instead, the …

  1. katrinab Silver badge
    Meh

    I would guess the most likely model, if this actually works, would be to buy the software from the likes of IBM/Red Hat, and the AI model from a publisher of technical literature such as Reed Elsevier, Butterworths Tolley, or LexisNexis?

    I've played around with PrivateGPT, and it seems like basically a glorified Elasticsearch. That isn't necessarily a bad thing, but it's not quite the game changer that everyone seems to think it is.
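
    To illustrate the "glorified search" point: most of these tools boil down to retrieval plus paraphrase. A minimal sketch of the retrieval half in Python, with scikit-learn TF-IDF standing in for the vector store (the documents and query here are made up):

      # The "search" half of a PrivateGPT-style pipeline: rank local
      # documents against a query, then hand the best match to an LLM.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      docs = [
          "Ansible playbooks describe configuration as YAML tasks.",
          "RHEL major releases arrive roughly every three years.",
          "GPT4All can run quantised language models on a CPU.",
      ]

      vectoriser = TfidfVectorizer()
      doc_vectors = vectoriser.fit_transform(docs)

      query = "how often does RHEL get a major release?"
      scores = cosine_similarity(vectoriser.transform([query]), doc_vectors)[0]

      # The "answer" is just the best-matching document; an LLM then
      # paraphrases it, which is why it can feel like glorified search.
      print(docs[scores.argmax()])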

  2. Cybersaber

    I give up re:speed on hype cycles

    "Long before ChatGPT turned AI into the buzziest of buzzwords, Red Hat had been working on turning AI into a useful tool.

    This began in 2021..."

    So roughly 11 months prior now qualifies a technology as having come 'long before'?

    1. Anonymous Coward
      Anonymous Coward

      Re: I give up re:speed on hype cycles

      11 months is ages in AI years.

      Massive changes can happen in a week.

      I decided to go down the rabbit hole, take a look at what's out there and deploy some technologies on my own kit, and several things jumped out at me:

      1) It is incredibly cheap to roll out.

      2) With the cost in mind, the results you can get are staggering.

      3) There are new models being released every day (yes, quite a few of them are extensions of base models, but for every crap model, there are a couple of interesting ones).

      4) The technologies around AI to enhance AI are moving incredibly quickly. Hypernetworks, LoRA, LyCORIS, tons of diffusers, networks, GANs etc etc...

      5) It's moving so quickly, that by the time you see something hit the media, it's already old news.

      The key thing, it appears, is to think of the likes of ChatGPT, LLaMA etc as sort of "OEM" AI technologies. They are general-purpose tech demos, essentially.

      Is AI a buzzword? Maybe...is it in the same camp as other buzzwords...e.g. Web3, DevOps, Cloud etc? Definitely not.

      With regards to this particular article..."AI trained on curated and domain-specific data"...there are two obvious ways you can look at this. Firstly, from the perspective of someone who is only seeing the media hype cycle...it seems like an experimental and weird decision to make, with potentially unproven results.

      From the perspective of someone who has been actively using, experimenting with and deploying AI...beyond the "lol I made Bing angry" bubble...this doesn't seem like a dumb move. Setting up a local model, based on a very good base model, that has been enhanced with some additional training to make it better in specific areas, then giving it access to "domain specific" data, seems like a reasonable and straightforward move...especially considering that all you need to get started is a quad core CPU (I'd recommend anything 6th gen and up), 8GB of RAM and a 256GB SSD.

      You fire up the client you want to use (determine if you want a REST API, a full-blown chat client, a web UI etc), pick a model that meets or comes close to your requirements, present the client with a path to your "local domain" information...essentially a folder full of documents that you have picked for the AI to learn from...and away you go. You start prompting it, seeing the results, and fine tuning the data you're giving it to round out the results you're looking for.

      You can do that yourself, right now, using GPT4ALL in a few clicks...see the results for yourself and form an opinion based on a real-world experiment that you performed, rather than just being sick of the "hype".
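
      If you'd rather script it than click, the same flow is a few lines with the gpt4all Python bindings (pip install gpt4all). The model file name and document path below are just examples, not recommendations:

        # Minimal local "domain-specific" sketch: stuff one of your own
        # documents into the prompt and ask questions against it.
        from pathlib import Path
        from gpt4all import GPT4All

        model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # fetched on first run

        # Stand-in for the desktop app's folder-of-documents feature:
        context = Path("docs/runbook.txt").read_text()

        with model.chat_session():
            answer = model.generate(
                "Using only the document below, answer the question.\n\n"
                f"{context}\n\nQuestion: what are the backup steps?",
                max_tokens=300,
            )
        print(answer)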

      You can have this kind of solution up and running in under an hour and I think that is the attraction right now...you can bring an old box out of retirement and turn it into your own local AI. The investment can literally be zero.

      It's dirt cheap, pretty easy to set up and you can start seeing results very quickly...and as the models improve, you see continual gains in very short spaces of time.

      If you could add a technology to your business that has more benefits than drawbacks for virtually nothing, to see if you can derive some sort of competitive edge from it, why wouldn't you want it?

      AI may be a buzzword, but it's a buzzword that doesn't require heavy investment, massive changes to your business or enormous amounts of technical skill to set up...it's only going to get easier, and as the models keep improving, it's only going to get better. Whether we like it or not, it's going to be incorporated into a typical tech stack, in some shape or form.

      What we need to do at this stage is figure out how to minimise the reliance on cloud based AI, to prevent our idiot clients signing up for the latest shiny and uploading terabytes of company data. That is where I see massive problems in the future...some fuckwit CEO decides on his own to link the company SharePoint, OneDrive and Outlook on Office 365 to WhamBamThankYouMamAI, and now that "fly by night" platform has access to anything and everything and becomes a target for hackers...or worse, proprietary information has now been incorporated into a model that could at some point leak to the internet and be used as a "base" model, or is manipulated into spewing loads of private / proprietary information.

      There is a lot of dumb shit on the horizon that we're going to have to deal with, unless we stay ahead of the curve and nip it in the bud before it happens.

      In terms of the quality of AI...well, that all depends on who is training it and with which data...ultimately, it will be people like us training AI models for businesses...unless you're a naysayer, in which case the CEO will hire a third-party company that demands full admin rights over *your* network. Shit always follows the path of least resistance...and in this case, it's firing the guy getting in the way. So you can either roll with it, understand it and be in a position to ensure that this "hype" tech gets deployed properly, or you can resist it and head to the job centre.

      Finally, not all AI is ChatGPT...most of my clients have been using AI in some shape or form for quite a while now...for some it might be as simple as an AI-based upscaler, for others it might be an AI copilot for programming, some people might be using AI scheduling assistants...because I know a lot of the products, I don't see AI as this looming thing in the distance, it's very much already here.

      Thing is, the genie is already out of the bottle...pretty soon we're going to have an AI-infused Office 365...at that point AI becomes basically unavoidable: it stops being niche and becomes something that everyone has. And it'll hit you like a truck, because suddenly you're going to be balls deep in clients asking you questions about AI: how to use it, how to get the best out of it, what they should and shouldn't use AI for, what should they share with it, can you just hook up the company shared drive real quick? etc etc...

      At that point you're either going to have sensible, rational reasons to gently let them down that you can bring to their attention, or you're going to be the crazy "it's just a buzzword, we don't need that shit" guy...and being that guy never goes down well in the long run.

      1. Anonymous Coward
        Anonymous Coward

        Re: I give up re:speed on hype cycles

        "AI" was a buzzword already in the 1980s and the only difference to that time is that someone had internet-full of data to feed one now.

        It still doesn't have any actual intelligence. Proper hype.

      2. Bitsminer Silver badge

        You can have this kind of solution up and running in under an hour...

        This.

        And all the government politicians / hype-recyclers / AI-doomsayers / corporate moat-diggers / otherwise uninformed people want to regulate AI?

        Just no.

        The genie is out of the bottle: AI search and AI solutions are going to be on almost every phone and desktop within the next few weeks.

        Public large models are only 10 or 20 GB; finetuning and/or prompt engineering gives you an AI machine that can solve 90% of your problem in minutes on any cloud VM for $2 an hour. Your prompts and the AI answers are private and therefore of significant commercial advantage.
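
        For the prompt-engineering half, the whole trick fits in a dozen lines. A sketch using llama-cpp-python (the model path and the product domain are made up; any GGUF model will do, and since it all runs locally, prompts and answers stay private):

          # Wrap a local model in a fixed system prompt so it behaves like
          # a narrow, private domain tool rather than a general chatbot.
          from llama_cpp import Llama

          llm = Llama(model_path="models/mistral-7b-instruct.Q4_K_M.gguf",
                      n_ctx=2048, verbose=False)

          SYSTEM = ("You are a support assistant for our invoicing product. "
                    "Answer only from the supplied context, else say 'unknown'.")

          def ask(context: str, question: str) -> str:
              out = llm.create_chat_completion(
                  messages=[
                      {"role": "system", "content": SYSTEM},
                      {"role": "user",
                       "content": f"Context:\n{context}\n\nQ: {question}"},
                  ],
                  max_tokens=256,
              )
              return out["choices"][0]["message"]["content"]

          print(ask("Invoices are archived after 90 days.",
                    "When are invoices archived?"))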

        Small and medium businesses (and criminal gangs) that catch on with this will win big time.

    2. Anonymous Coward
      Anonymous Coward

      Re: I give up re:speed on hype cycles

      The 'buzziest of buzzwords' now is 'curated'. If I hear it once more from an effing radio presenter (BBC 6 Music, are you listening??) I'm going to put my foot through my wireless.

    3. OhForF' Silver badge

      Re: I give up re:speed on hype cycles

      I wonder why the author says this started in 2021. Wasn't Watson around since 2011?

      My take is more like "IBM has been working on expert systems and AI for decades now and still struggles to find commercially viable applications for them".

      1. Cybersaber

        Re: I give up re:speed on hype cycles

        You could be right, dunno, I didn't research the full history of Watson. My point was that the author pointed to an event roughly 11 months before the article was published, and referred to that as 'long ago'.

        Knowledge domains have subjective terms about what 'long' is. 'Recent' or 'long ago' have entirely different timescale inferences in geology, for instance.

        The fact that 11 months was now being referred to as 'long ago' in reference to LLMs had me rolling my eyes so hard that maybe I need to get a gyroscope implanted in them to keep from shattering my orbits in the future.

  3. TVU Silver badge

    Hicks seems to be talking corporate BS. How can he say "I think Fedora has an incredible opportunity" when he's just gone and sacked the overall coordinating project manager for Fedora? I strongly suspect this won't end well.

    1. Anonymous Coward
      Anonymous Coward

      Right to be sceptical about Fedora comments

      "It's still the distribution base for what Red Hat Enterprise Linux (RHEL) will look like in five years” strikes me as an odd thing to say. On average, a new RHEL major release lands every 3 years. So that is either an admission that Red Hat is now delivering less value to their subscriber base by releasing fewer major releases over a typical server lifetime, or that doesn’t mean what it looks like at first glance.

  4. Groo The Wanderer Silver badge

    I'm not surprised IBM/Red Hat "get it".

    The general internet is largely populated by illiterate and ignorant masses who are hardly the "bastion of knowledge" you should be querying for technical answers on any subject. It still won't be "intelligence" if it is just an LLM, but I'm pretty sure they're going beyond that limited approach. The Watson heritage and mentality insists on it.

  5. Anonymous Coward
    Anonymous Coward

    Now me being cynical:

    "That means, we're told, these LLMs have been built on data that Red Hat knows is correct. When Lightspeed generates a particular Ansible Playbook – a reusable, simple configuration management and multi-machine deployment system – Red Hat says it's based on tested, high-quality data and code. Not some garbage someone wrote up in a hurry to meet a deadline."

    When I read this, my immediate thought was "knows is correct, biased towards IBM in some way, shape, or form".

    As for "high-quality data and code", if they are basing it on stuff from IBM I'd have to add a pinch of salt.

    Back in the day when I had the 'pleasure' of working with Watson, it would mainly push IBM crap over anything else, even if it was not directly related to the query I was submitting at the time.

  6. Anonymous Coward
    Anonymous Coward

    "When Lightspeed generates a particular Ansible Playbook – a reusable, simple configuration management and multi-machine deployment system – Red Hat says it's based on tested, high-quality data and code. Not some garbage someone wrote up in a hurry to meet a deadline."

    Yeah, right. If anyone believes that load of bollocks, they are out of their minds.

    More likely it's written to sell IBM products *and* generated by a BS generator in a hurry to meet a deadline.

  7. abend0c4 Silver badge

    Install Nodejs dependencies

    It strikes me this is the kind of task better suited to a script retrieved from a known origin and relating to a specific version of the software being installed: it's either a straightforwardly repeatable task or it isn't - in which case AI can't be relied on to get it right either.

    Having some LLM infer the necessary commands from training data it acquired at an unknown point in the past, even if it got it from "reputable" sources, sounds like using an extremely unwieldy sledgehammer with a loose handle to crack an already-shattered nut.

    1. pip25
      WTF?

      Re: Install Nodejs dependencies

      Yeah, that example in particular is really odd. Are they seriously trying to automate the simple "npm install" command...?

  8. Duncan10101

    I've really got my doubts

    So we have this situation where the LLMs start spouting lies. Now, why is that?

    Everybody on the LLM bandwagon will cry "It's the training data set". But is that really true? I seriously do not think so.

    These models are just statistical guessing machines. Whatever words are most likely to come up next, it outputs. It has no concept of truth, of facts, nor of the meaning of its own output. It has no concept of meaning. It has no concepts at all.

    So imagine you give it a clean, truthful dataset. Just because the individual inputs may have been truthful, it doesn't mean that this property will be preserved after its "Great Statistical Remix". Until somebody can show me an algorithm with the proven property of "Truth Preservation", this whole idea seems deeply flawed, and I suspect the pursuit of a "curated dataset" will be nothing but a wild-goose chase.
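
    You can watch the non-preservation happen with a toy bigram model, no GPUs required (a sketch; the two training sentences are made up, and both are true):

      # A bigram "statistical guessing machine" trained only on true
      # sentences can still emit a false remix, because all it tracks is
      # which word has followed which.
      import random
      from collections import defaultdict

      true_sentences = [
          "cats have four legs",
          "spiders have eight legs",
      ]

      follows = defaultdict(list)
      for sentence in true_sentences:
          words = sentence.split()
          for a, b in zip(words, words[1:]):
              follows[a].append(b)

      word, out = "cats", ["cats"]
      while word in follows:
          word = random.choice(follows[word])  # any successor ever seen
          out.append(word)

      # Roughly half the time this prints "cats have eight legs": every
      # individual transition was seen in true data, yet the whole is false.
      print(" ".join(out))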

    Consider:

    In images, there are often features that we humans think of as "fingers". To the AI, this is just a statistical pattern, and in a photograph containing a hand, the feature we might call a "finger" is statistically very likely to sit right next to another "finger". Unless you have some sort of understanding of what a finger is, and how many a typical human might have, the simple probability says a finger is very often followed by another finger. So what do our favourite image-generative AIs produce? That's right, pictures of people with the wrong number of fingers.

    Go on: try it. Ask one for an image of "A man covering his face with his hands." I bet you get six or seven fingers per hand.

    So ... what's my point about the fingers? Let me ask this: Exactly how many photos in the image-generator-AI's training set do you think depicted people with seven fingers? I'm betting it's a very, very low number.

    We all understand the truth of "Garbage-In, Garbage-Out" ... but it does NOT imply "Truth-In Truth-Out" at all. Not one little bit.

  9. Doctor Syntax Silver badge

    "That means, we're told, these LLMs have been built on data that Red Hat knows is correct."

    There might be a touch of hubris in there.

    What, I wonder, happens when something they "knew" to be correct turns out not to have been? Does being "curated" mean they can simply remove the bits which are now known to be incorrect? Or tell it to disregard that bit of training? Or do they have to redo the entire training with corrected data?
