How to run an LLM on your PC, not in the cloud, in less than 10 minutes

With all the talk of massive machine-learning training clusters and AI PCs you’d be forgiven for thinking you need some kind of special hardware to play with text-and-code-generating large language models (LLMs) at home. In reality, there’s a good chance the desktop system you’re reading this on is more than capable of running …

  1. cyberdemon Silver badge
    Facepalm

    curl -fsSL someurl | sh

    Why anyone would think this is an acceptable way of installing anything is beyond me. Especially in the many cases where sudo is also involved..

    Yet it seems so pervasive..

    Here, have a reverse shell so you can install it for me, because I'm so bloody lazy

    In other news..

    1. b0llchit Silver badge
      Facepalm

      Re: curl -fsSL someurl | sh

      Here, shell this for me: sudo rm -rf /home /root

      Extra 1337-creds if you get that line using curl.

      Please? Pretty please? With cherry on top?

      1. NoneSuch Silver badge
        Go

        Re: curl -fsSL someurl | sh

        Matthew Berman on YouTube does excellent set-up tutorials on most major LLM models. I set up three on my home workstation while pausing his vids. An excellent resource if just starting out.

    2. Anonymous Coward
      Anonymous Coward

      Re: curl -fsSL someurl | sh

      You know, you could inspect someurl

      1. cyberdemon Silver badge
        Devil

        Re: curl -fsSL someurl | sh

        Who's to say that someurl would give the same content to cURL as it did when you inspected it with a browser user-agent?

        If you can be bothered to inspect it, then save the script locally, inspect it, THEN run it.
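        i.e. something like this, with someurl being whatever the vendor tells you to pipe into sh:

        ```shell
        # fetch to a file instead of piping straight into a shell
        curl -fsSL -o install.sh https://someurl

        # actually read the thing, and note what *else* it downloads
        less install.sh

        # only once you're happy, run it -- unprivileged if you can get away with it
        sh install.sh
        ```

        Same three commands, but now your AV, your eyeballs, and your logs all get a look at the script first.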

        1. Anonymous Coward
          Anonymous Coward

          Re: curl -fsSL someurl | sh

          Who is to say that the EXE installer you downloaded doesn't contain malware? There is risk here, but no more risk than any other installation method...if anything it gives you an opportunity to read the script before you run it if you remove the last command so you can know exactly what it does before you run it...you can't say that of a typical EXE installer...you have to go to much greater lengths to see what is inside an EXE file.

          1. cyberdemon Silver badge
            Facepalm

            Re: curl -fsSL someurl | sh

            If it were a binary EXE installer (it isn't, it's a text script) then saving it before running it would at least give your AntiVirus a chance to spot something dodgy. But executing a shell directly from an HTTP response is just stupid, and users need to be made to understand just how stupid it is. Yet perversely, this method of installation seems to be gaining ground with a lot of commercial software for Linux.

            1. doublelayer Silver badge

              Re: curl -fsSL someurl | sh

              Not if that exe was basically running its own curl to get the real code. A lot of installers work that way, and although they're not my favorite, there is a reason for them: it lets people install a subset of components without, for example, including the translations and fonts for a bunch of languages the user doesn't want in the initial file. Malware can easily use a basic downloader which won't look dodgy until the specific sample has been reported, and it just downloads the more suspicious code and executes it from memory.

              Installing this program on Linux (or anything else) will involve one of two things:

              1. Downloading a binary from someone else's server and executing it.

              2. Downloading a rather large chunk of code and compiling it.

              And running it involves one more:

              3. Downloading a model, either from their server or another one, and running it.

              That's three methods to run malicious code if they are malicious. Saving the script and reading through it won't help you when this line from the real script

              curl --fail --show-error --location --progress-bar -o $TEMP_DIR/ollama "https://ollama.com/download/ollama-linux-${ARCH}${VER_PARAM}"

              can download any binary. If they're doing something malicious, the malicious part would be in that file, not this script which I can read in a minute. If you don't trust them, you can try to build from source instead, but I somehow doubt you're reading every file to make sure you don't think anything in it is malicious.

              1. cyberdemon Silver badge
                Unhappy

                Re: curl -fsSL someurl | sh

                Last time I had one of these I did exactly that and got very annoyed, and built from source. But you're right, of course. Doing so defeats the object of an easy install, but an easy install is highly insecure.

                Running in a VM is not a bad approach for most things, but for GPU stuff, there'll be a performance hit if it works at all.

                Unless you have a separate GPU dedicated to one particular VM via an IOMMU?

                1. doublelayer Silver badge

                  Re: curl -fsSL someurl | sh

                  My point is that, if you're building from source, you probably aren't reading the (quick check) 110 files containing 18002 lines of Go, not counting anything retrieved when running the build scripts, or the build scripts, or the frontend stuff written in TypeScript and some JavaScript. Then, you download a massive model which probably just has an input and output stream, but you can't be sure, and I have a feeling you're not building that from source because that is training and it's quite expensive. Treating the installation script as the dangerous part when there are tons of other parts where something dangerous could be is not very helpful. The script can be very straightforward, and it is, and that proves nothing about whether this could be malicious (yes) and whether it is (probably not, but how would I know).

      2. rafff

        Re: curl -fsSL someurl | sh

        And run it inside a container build or a VM.
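        For this particular one you don't even have to roll your own container: there's an official Docker image (ollama/ollama on Docker Hub, at least at time of writing), so the whole thing can be kept at arm's length. Something along these lines, going from their published docs, so double-check before trusting it:

        ```shell
        # run the server in a container; models land in a named volume,
        # not in your home directory
        docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

        # then talk to a model inside it
        docker exec -it ollama ollama run llama2
        ```

        Not a sandbox in any strong sense, but at least the install script never touches the host.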

        Not perfect protection, but what is?

      3. Munchausen's proxy
        Devil

        Re: curl -fsSL someurl | sh

        You know, you could inspect someurl

        That's what the AI is for!

    3. jake Silver badge

      Re: curl -fsSL someurl | sh

      Seeing that not only in print, but actually suggested, in an ElReg article made my toes curl ...

      WTF, ElReg? Are you SURE the article wasn't written by a 'bot?

    4. Paul Kinsler

      Re: curl -fsSL someurl | sh

      On the plus side, at least the install script is Brexit-compatible. On the fifth line, it has

      set -eu

      :-)

      1. simmonsm

        Re: curl -fsSL someurl | sh

        That is just the set of all people except you

    5. Displacement Activity

      Re: curl -fsSL someurl | sh

      Not great. But, OTOH, this is pretty much what you do anyway when you install other stuff; it's just more obvious here.

      On the plus side, you could in principle download and check the script first. I've done that in the past but, to be frank, it's pointless, since it'll download other code anyway. Another not-so-obvious benefit: you would always do this as an unprivileged user (unless you're very dumb). In the Windows world, at least, I've been asked to install software as an admin.

      The only actual downside is that the script isn't signed by anybody. But what exactly is the benefit of confirming that a script has been signed by someone you've never heard of? Or even someone you have heard of? I'm not installing stuff just because it was signed by Microsoft.

      Besides, if you're running an LLM on your PC, installing untrusted software is not likely to be your biggest problem.

    6. Anonymous Coward
      Anonymous Coward

      Re: curl -fsSL someurl | sh

      "For those running Linux, it's even simpler: Just run this one liner"

      - Aaaaaannndd, yet again, the Linux community leap in to explain why it's not simpler and should be a source of division. Thanks Guys.

    7. Anonymous Coward
      Anonymous Coward

      Re: curl -fsSL someurl | sh

      It's a perfectly fine way to deploy things, as long as you don't do it blindly...an experienced person would remove the sh command and read the script before running it though. An inexperienced user might not check the script...but an inexperienced user shouldn't be installing anything blindly no matter the method of installation.

  2. Tom Chiverton 1 Silver badge

    And this is somehow better than chatGpt4all because?

    1. wub

      Local

      ...because? Local.

      See earlier comments about trusting stuff at the other end of a URL.

      1. SilverCanary

        Re: Local

        I think they may have meant GPT4ALL which is also local (though it has the option to connect to remote LLMs). It's GUI-based and you can load the same models as Ollama.

        1. Anonymous Coward
          Anonymous Coward

          Re: Local

          You can, but Ollama has a more sophisticated backend. GPT4ALL tries to load the whole model entirely into VRAM...which means if the model is bigger than your VRAM, it won't GPU accelerate it...Ollama on the other hand uses some herbs and spices on the backend that allow you to accelerate models that are far beyond your VRAM limits.
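          The herbs and spices are mostly llama.cpp-style layer offloading, and if memory serves you can even tune the split yourself via a Modelfile parameter (a sketch, check the current Ollama docs for the exact knob and sensible values for your card):

          ```
          # hypothetical Modelfile: push 20 layers to VRAM, keep the rest on CPU
          FROM llama2
          PARAMETER num_gpu 20
          ```

          Then something like ollama create llama2-20 -f Modelfile and ollama run llama2-20 to use it.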

    2. Anonymous Coward
      Anonymous Coward

      There is a WebUI component you can get for it that is way less unstable and crashy than gpt4all...it is also a lot more customisable than GPT4ALL.

      I've used both and still use both, but I use Ollama if I want to get shit done...there is also a plugin for VSCode that uses Ollama that allows you to use programming copilot models offline with VSCode that are a lot more effective than ChatGPT et al.

      Local LLMs are a lot more configurable and flexible than you might think...sure out of the box they might be a bit naff, when you jump on and ask the usual dumb questions in a chat window..."when will AI rise up and kill humans?...huh huh huh"....but if you take the time to understand things and fine tune things, you might see a completely different world.

      On top of this flexibility, Ollama actually outperforms GPT4ALL...even when you compare identical models...Ollama seems to be better configured out of the box and better utilises GPU resources etc...there are large models I can GPU accelerate with Ollama that I can't with GPT4ALL.

      GPT4ALL is a pretty blunt instrument. It's fine, but it's not great.

  3. Dimmer Silver badge

    Thank you Reg

    “The Register aims to bring you more on utilizing LLMs in the near future”

  4. TheMaskedMan Silver badge

    Interesting. I (still) haven't got around to playing with a local LLM, but it's on my agenda so any and all how-to guides are welcome.

    1. Anonymous Coward
      Anonymous Coward

      ... and ...

      ... so all I need now is a model that'll run in the 2GB my old video card has... :-)

      1. cyberdemon Silver badge
        Devil

        Re: ... and ...

        You can run it on a CPU with your system RAM I believe, but it may spend a bit more time pulling answers out of its digital arse

      2. Scott 26

        Re: ... and ...

        A colleague gave a demo of an LLM he had running at home ... his home PC was only running a 1060....!

        Pretty sure he used the Llama 2 7B model specifically due to its lower memory requirements.

        1. Paul Kinsler

          Re: "was only running a 1060"

          Having just searched, I note that a 1060 has 6GB as standard... i.e. three times that of mine; but then it does appear to be several years younger ..

          ... such luxury!

      3. Doctor Syntax Silver badge

        Re: ... and ...

        Yes, it reminded me of the fabled recipe for jugged hare: "First catch your hare..."

      4. hittitezombie

        Re: ... and ...

        Running it on a 10y old Dell with an incompatible GPU. It's not fast, prompt rates are like talking to a not-very-intelligent person but it's OKish.

        Also found that it lies to me very easily. Simply makes shit up.

  5. jake Silver badge

    Or ...

    ... if you have EMACS installed you could talk to The Doctor.

    Simply fire up EMACS and type M-x doctor.

    I think you'll find ELiZA is just as useful as this modern tat ... for reasons which should be obvious to the proverbial Thinking Man. If he still exists and hasn't become completely exterminated by Marketing, of course.

    1. Doctor Syntax Silver badge

      Re: Or ...

      There are implementations in a number of languages so you don't really need EMACS.

      1. Graham Cobb Silver badge

        Re: Or ...

        Surely there's no one who doesn't need Emacs?

        I think I first used it in 1981 or maybe 82, and it is still my main editor, although I no longer use it for mail handling as Thunderbird is more useful for reading the mixture of formats people insist on sending nowadays.

    2. David 132 Silver badge
      Happy

      Re: Or ...

      > I think you'll find ELiZA is just as useful as this modern tat

      And how does “I think you'll find ELiZA is just as useful as this modern tat” make you feel?

      >_

  6. JamesMcP

    I'll wait for a public domain LLM

    I'd love to see someone use a fully public domain library of training data (e.g. Project Gutenberg) to make an LLM.

    It would require some curation to balance out a lot of *isms and some carefully selected sources (is there a public domain dictionary?) to give it modern nuance but it would be a generally useful widget for experimentation.

    1. Mike007 Bronze badge

      Re: I'll wait for a public domain LLM

      Methinks the problem is that language hath changed over the years.

      1. heyrick Silver badge

        Re: I'll wait for a public domain LLM

        "Methinks the problem is that language hath changed over the years."

        Not just that, but a lot of things that were normal in past out-of-copyright sources would be considered somewhat unpleasant by modern standards. Lovecraft, for example, had a set of prejudices that frequently leaked into his writings. Racism, antisemitism, and a great fear of the tentacled elder gods. While one might try to say that this was normal for the time in which it was written (uh, not exactly...), the thing is that if you train an LLM on, say, Victorian and Edwardian era material, then the output will be... a little bit gonzo, don't you think? It'll end up an electronic version of Jacob Rees-Mogg.

        1. Rich 11 Silver badge

          Re: I'll wait for a public domain LLM

          It'll end up an electronic version of Jacob Rees-Mogg.

          Spending its days massively parallel to a Commons bench, and about as self-aware?

          1. Anonymous Coward
            Anonymous Coward

            Re: I'll wait for a public domain LLM

            If such a beast existed it could only simulate human emotion and you couldn't rely on the truth of anything it said.

  7. 43300 Silver badge

    Why would most people need to be able to run a LLM on their local PC? Given that actual uses for the cloudy ones still seem elusive...

    1. cyberdemon Silver badge
      Terminator

      Because a local instance has no guardrails or prompt injection, there is no logging of prompts or outputs, no way to slurp data, track usage or abusage.. No way to ban abusive users from the platform

      Basically anything that needs to run offline and/or one does not want Microsoft, Google, facebook et al interfering with..

      Such as running a large scale scam or a social media botnet, making a cutesy robot companion or a sinister Orwellian robot border guard which automates discrimination/racism/etc.

      As with all so-called "AI", there are many uses, but few legitimate ones..

      The cynic in me says these companies are releasing locally runnable models to ram the point home to politicians and regulators that the Genie is Out of the Bottle, there is no point trying to regulate their platforms, because the Horse has already Bolted, Pandora's Box is Open etc.

      1. Adair Silver badge

        That's all well and good, but it doesn't really deal with the underlying issue: running software whose primary feature, as mentioned above, is 'to pull results out of its digital arse'?

        I mean, fine. The emperor is entitled to parade down the street in all his presumed finery, but to anyone willing to watch the spectacle with a critical mind the emperor is still buck naked. Fine indeed. :-P

      2. 43300 Silver badge

        "Because a local instance has no guardrails or prompt injection, there is no logging of prompts or outputs, no way to slurp data, track usage or abusage.. No way to ban abusive users from the platform"

        That doesn't answer the question as to why they need to run a LLM. Most people don't - outside of a few niche areas it's all hype. Those in the niche areas where it's relevant will mostly be well aware of the options anyway.

        1. doublelayer Silver badge

          They listed some specific uses. Most of those uses are not good things, but they are specific things that one could do with them, including more tailored spam campaigns. I think what you're asking is why you would want to use one, and it sounds like the answer is you don't want to and you don't need to. I don't need to, and I probably won't, but if I did, it would be to play around with it, not to do something particularly useful.

          That said, I can see some situations where some models could be more useful to the average user. For example, there are text to image models that sort of work. Sometimes, they end up making weird, illogical images and sometimes, they don't even get that far and make images that are a bit more horrifying, but sometimes they also work. I cannot draw. If I decided I wanted a picture of something created, running a bunch of prompts and seeing if I could get the computer to spit out something good enough might be a faster or cheaper way of getting the image than commissioning someone with real art skills to do the job. This is not something I'm going to do any time soon as I have not needed an image created, but it is a case where I could see someone deciding that running a local model sounds like a good idea. I don't know how good the local versions are, so they may be disappointed with the results, but the decision to try it could be logical.

    2. Anonymous Coward
      Anonymous Coward

      > Why would most people need to be able to run a LLM on their local PC?

      Most people have no need at all to run a LLM locally.

      But then, most people have no need at all to run a compiler.

      However, for a small subset of people, those activities can be interesting.

      If you happen to fall into the appropriate subset, the article gives you some interesting news. If not, there are plenty of other articles that may be of more use to you.

      1. Terje

        I have been toying at home with llama2 and mixtral, why?

        To poke around at it and get some general experience of it and it's a bit of fun and to somehow rationalize my 4090 card to myself... Beware that running the larger models will require a "fair" (read insane by home standard) amount of gpu memory to be bearable to run.

        My toying around has cemented the idea I had before, <sarcasm> that the "AI" craze is a plot by Nvidia and big climate change to sell more GPUs and waste more electricity on pointless energy-hungry compute. </sarcasm> Sure there are probably some cases where it can be a really good technology, but for 99% of what it's used for today it's pointless.

      2. Anonymous Coward
        Anonymous Coward

        Why need to run it locally?

        The public models are censored, see erichartford.com/uncensored-models.

        I'm running llama2-uncensored:70b locally when I have time, and llama2-uncensored:7b when I want a fast response.

    3. that one in the corner Silver badge

      Looking at the Ollama site, my interest[1] was immediately piqued by their blog post comparing censored versus uncensored versions of a model.

      This may not be something that you want to run day after day, but it is certainly good to have the opportunity to make such a comparison yourself - and it may be even more interesting to show to your local "I know all about LLMs, I have a ChatGPT account" guy (especially if you can get more examples like the blog's demo of asking about what went on in the movie Titanic). In that case, the use of a locally hosted LLM isn't about any argument of local-over-cloud but more about "I actually get to try these non-mainstream models".

      > Given that actual uses for the cloudy ones still seem elusive...

      That blog - and hence the experiments you[2] could run yourself - seems to show just how much *more* useless the censored cloudy ones actually are than they used to be! Again, demoing that to the "right" people may be useful.

      [1] Of course, your own level of interest in this may be sated just by reading that blog post and you don't feel much need to install and run the thing yourself - but wouldn't you like to be able to *fail* to replicate their results and show up these LLM-loving loons?

      [2] or I could, I guess. Is it interesting enough to get to grips with Docker in order to use the image they provide, instead of running one of those scary install methods?

    4. Anonymous Coward
      Anonymous Coward

      Because blockchain has been flogged to death and chipmakers still need to shift chips.

  8. This post has been deleted by its author

    1. cyberdemon Silver badge
      Holmes

      Re: 0/10 for current affairs

      0/10 for reasoning, logic, or intelligence

      It's a box of statistics about text. You expect it to "know" facts? All it "knows" is what word is likely to follow another, in an ocean of mostly human generated text, increasingly polluted by its own fetid excrement

      The prompt injections, "guardrails", and other hacks that MS uses to keep their public instance up to date about things like facts or, er, the actual date, are like flakes of glitter on a turd.

      "You can't polish a turd.. But you can roll it in glitter!"

      1. veti Silver badge

        Re: 0/10 for current affairs

        What makes you so sure that "reasoning, logic and intelligence" are any different, qualitatively speaking?

        People keep setting standards for "AI" - "we can call it intelligent when it can do this" - only to promptly change their minds when technology blows clean through their standard. But at least those people have the guts to attempt a definition. If you hide behind vague terms like "reasoning" without even trying to define it, of course you can't be proved wrong - but that fact in itself should tell you, your stance is not scientific - it's essentially religious.

        1. heyrick Silver badge

          Re: 0/10 for current affairs

          Hmmm. I'll try.

          Logic is the knowing and application of concrete demonstrable facts. 1 and 1 is two. Rain and no umbrella is getting wet. That sort of thing.

          Reasoning is the ability to take references that are not directly fact based and/or related and infer what sort of result one can expect to see. For instance, looking at the sky and thinking "should I take an umbrella?". It won't necessarily be correct, but it ought to be. I think attempting to recreate this is what leads to AI hallucinations.

          Intelligence is in being able to apply both of the above and in knowing your own limitations and using these limitations to guide you towards a good solution (so you don't say "I don't need an umbrella, it's sunny, so I'll take an inflatable flamingo with me instead").

          LLMs can do quite well with the logical part. Computers are machines built within the domain of mathematical logic. They aren't very good with the reasoning part as sometimes reasoning involves that nebulous concept known as "intuition". That word is best described as "guessing with the benefit of past experiences".

          That's my take on it, anyway. AI is terrible at this. It'll confidently tell you complete crap, and if you point out the crap it'll often say the same thing in slightly different words. So I'd say that a factor of intelligence is the ability to enhance understanding in the light of improved information (and the ability to discriminate between good and bad information, something we meatsacks aren't so great at).

        2. Ian Mason

          Re: 0/10 for current affairs

          Intelligent creatures learn from experience and adapt, LLMs don't. Ergo intelligence is qualitatively different.

          If you correct an LLM it doesn't incorporate that into its world model. Playing with the llama2 model (prompted by this article) it conflated race and ethnicity. I pointed this out, it apologised and issued a 'correction' without the mistake, except that all it did was substitute the word "ethnicity" for "race" in its first appearance and then continued to say "race" for the rest of the paragraph.

          1. Not Yb Bronze badge

            Re: 0/10 for current affairs

            Try getting almost any of them to correctly answer a math problem. If you then tell them that the answer is wrong, most will confidently explain where the mistake is, even if there was no mistake in the original answer. Artificial confidence is not artificial intelligence.

    2. HuBo
      Pint

      Re: 0/10 for current affairs

      There may be "plugins" that you can use to connect ollama with data from after the end of its training period ... but, as detailed in K. Quach's Feature article (and links therein), such plugins might also bring in some unintended cargo ... user beware!

  9. captain veg Silver badge

    Re: You expect it to "know" facts?

    I doubt that Contrex has any such expectations. It's the marketing and hype which raises them.

    -A.

    1. cyberdemon Silver badge
      Devil

      Re: You expect it to "know" facts?

      Well yes of course. I'm sure anyone posting here is aware of what LLMs really are. It just irks me that so many less well educated/informed people are at risk of taking plausibility-optimised random noise as actual truth or fact..

      Worse, when vulnerable people are presented with LLM output on the other end of a web chat or phone call, believing it to be another human.. When actually it's a LLM under the supervision of a crime gang, for example. It's like the mythical "demonic mirror". Stare into the thing long enough, and be damned

      1. jake Silver badge

        Re: You expect it to "know" facts?

        "It just irks me that so many less well educated/informed people are at risk of taking plausibility-optimised random noise as actual truth or fact.."

        Humans seem to be pre-programmed to believe any random untruth, if it's said with any amount of sincerity. Especially if it's repeated ad nauseam by someone pretending to be an authority. See the religion of your choice. Even modern variations, like the MAGA cultists.

        1. Adair Silver badge

          Re: You expect it to "know" facts?

          I'd keep 'religion' right out of the equation. It's one of those umbrella labels that means whatever the user wants it to mean. To the extent that use of the label is far more likely to say something about the user than about the label.

          1. Adair Silver badge

            Re: You expect it to "know" facts?

            Riddle me this, thumb-downers: tell us what the word 'religion' means to you.

            Then we'll compare your chosen definition with the plethora of understandings that exist outside your head/s, and see where your idea fits, and if it provides a worthy and helpful generalisation, or merely expresses ignorance and prejudice.

            1. Korev Silver badge
              Terminator

              Re: You expect it to "know" facts?

              > Riddle me this, thumb-downers: tell us what the word 'religion' means to you.

              I'll bite...

              % ollama run llama2:7b-chat-fp16

              >>> What does the word religion mean to you?

              The word "religion" can have different meanings and interpretations depending on one's cultural, historical, and personal perspective. Here are some possible ways the word "religion" might be understood:

              1. Beliefs and practices: At its core, religion can be seen as a set of beliefs and practices that individuals or groups hold dear and engage in to connect with the divine, spiritual realm, or higher power. This could include rituals, ceremonies, prayers, meditation, or other forms of worship.

              2. Organized systems: Religion can also refer to organized systems of beliefs and practices that have a defined structure, such as Christianity, Islam, Judaism, Hinduism, Buddhism, etc. These systems often have sacred texts, leaders, rituals, and codes of conduct that guide the behavior and practices of their followers.

              3. Personal spirituality: Religion can also be understood as a personal and individual expression of spirituality, where individuals connect with their own sense of purpose, meaning, or higher power through prayer, meditation, or other forms of inner reflection. In this sense, religion is a deeply personal experience that shapes one's worldview and informs their values and ethics.

              4. Cultural identity: Religion can also be seen as a significant aspect of cultural identity, where individuals and groups identify themselves through their religious beliefs and practices. For example, in some cultures, religion is closely tied to customs, traditions, and social norms, and is an integral part of daily life.

              5. Ethics and morality: Religion can also be understood as a system of ethics and morality, where individuals learn how to live a good and virtuous life through the teachings and principles of their faith. In this sense, religion provides a moral framework for understanding right and wrong, and guiding one's actions and decisions.

              6. Community and belonging: Religion can also be seen as a source of community and belonging, where individuals come together with others who share their beliefs and values to form a spiritual or religious community. This sense of community can provide support, comfort, and a sense of purpose for its members.

              7. Historical and cultural heritage: Religion can also be understood as a part of historical and cultural heritage, where it reflects the unique experiences, traditions, and beliefs of a particular culture or society. In this sense, religion is a way of connecting with one's past and understanding the present through the lens of religious beliefs and practices.

              Overall, the word "religion" can have multiple meanings and interpretations depending on the context and perspective. At its core, however, it refers to a set of beliefs and practices that individuals or groups hold dear and engage in to connect with something greater than themselves.

              1. xyz Silver badge

                Re: You expect it to "know" facts?

                Is there a "in a nutshell" switch? I nearly fell asleep trying to read all that.

                "what is religion?"

                "It's when 2 groups who believe in God beat the crap out of each other."

                :-)

                1. Adair Silver badge

                  Re: You expect it to "know" facts?

                  And there's our problem: not with 'religion', but with cheap, ignorant assertions, probably made as a teenager and clung to 'religiously' ever since; not because the position is educated and thought through, but because it suits the holder's prejudice and allows them to justify themselves, even if at the expense of others.

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: You expect it to "know" facts?

                    Have you reconsidered your religious choices at any stage since your teenage years, presumably having had more life experience and met people of other faiths in that time, or have you stuck 'religiously' to them ever since?

                    1. Adair Silver badge

                      Re: You expect it to "know" facts?

                      As my 'religious' choices (for, against, indifferent) are actual choices, not impositions of genetics or cultural imperatives, I get to consider them every day, rather than assuming there is 'nothing to see here' on the basis that I have it all worked out and know everything there is to know about that aspect of 'life'.

                      Arrogance and ignorance are powerful drivers of the problems we cause—ourselves, each other, and the planet—with or without any 'religious' components.

              2. ind

                Re: You expect it to "know" facts?

                Citation?

  10. Gene Cash Silver badge

    Old school

    I typed in Eliza from 80 Micro. Used up almost all my 16K!

    1. MajDom

      Re: Old school

      Tell me more about typed in Eliza from 80.

      1. Uncle Slacky Silver badge
        Boffin

        Re: Old school

        Here you go: https://www.atariarchives.org/morebasicgames/showpage.php?page=56

      2. Rich 2 Silver badge

        Re: Old school

        Yes. Please elucidate yourself

      3. HuBo
        Thumb Up

        Re: Old school

        Delicious!

  11. sarusa Silver badge
    Meh

    Interesting

    So just for the heck of this I tried it on gemma, mistral-openorca, and mixtral - and ChatGPT 3.5 (never mind 4) just destroyed them all on all the creative questions I asked them. ChatGPT 3.5 gave me actually creative answers, all the rest of them gave me answers like they were phone support reading off a checklist.

    Then I asked deepseek-coder to 'generate python code to solve arbitrary nonograms' (picross) and it responded by telling me what a nonogram is (gee thanks), then 'Unfortunately generating Python code to automatically solve arbitrary Nonograms is a non-trivial task.' No s$@#, Sherlock. And then it gave me code that (by inspection) only works when the clues are absolutely trivial (in a 10x10 grid, every row/col clue works out to 10 cells) and doesn't even run. Then I asked OpenAI's python model to do the same thing and it gave me much more plausible-looking code (including backtracking)... that completely failed when trying to solve even simple nonograms. I am happy to see that coding AIs still pose no threat whatsoever to actual coders who know what they're doing.
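    For the record, a working brute-force solver of the kind the models failed to produce isn't that long. The sketch below is my own (the function names `row_options` and `solve` are mine, not from any model output); it enumerates every legal placement per row and backtracks with a column-prefix check, so it is only practical for small puzzles:

```python
def row_options(clue, width):
    """Every 0/1 tuple of length `width` whose runs of 1s match `clue`."""
    clue = tuple(clue)
    if not clue:
        return [(0,) * width]
    first, rest = clue[0], clue[1:]
    # Minimum cells the remaining runs need (each run plus the gap before it).
    min_rest = sum(rest) + len(rest)
    options = []
    for start in range(width - first - min_rest + 1):
        head = (0,) * start + (1,) * first
        if rest:
            for tail in row_options(rest, width - len(head) - 1):
                options.append(head + (0,) + tail)
        else:
            options.append(head + (0,) * (width - len(head)))
    return options

def solve(row_clues, col_clues):
    """Backtracking nonogram solver: list of row tuples, or None if unsolvable."""
    height, width = len(row_clues), len(col_clues)
    row_opts = [row_options(c, width) for c in row_clues]
    col_opts = [row_options(c, height) for c in col_clues]
    grid = []

    def columns_consistent():
        # Each partially filled column must be a prefix of some legal column.
        depth = len(grid)
        for c in range(width):
            prefix = tuple(grid[r][c] for r in range(depth))
            if not any(opt[:depth] == prefix for opt in col_opts[c]):
                return False
        return True

    def backtrack(r):
        if r == height:
            return True  # full columns were validated as the last row went in
        for opt in row_opts[r]:
            grid.append(opt)
            if columns_consistent() and backtrack(r + 1):
                return True
            grid.pop()
        return False

    return grid if backtrack(0) else None
```

    For example, the 3x3 "plus" shape (row clues [1], [3], [1]; same for columns) solves to the middle row and column filled. It won't scale to hard 25x25 puzzles without constraint propagation, but it actually runs, which is more than either model managed.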

    1. ind

      Re: Interesting

      I've heard this from authors, too. The problem of AI is not that it can or can't do something.

      The problem is that it replaces people doing research, mulling over ideas, writing and re-writing, responding, and other aspects of thinking.

      The problem is that it replaces people thinking. And as a humanity, we need each person to be able to think as well as they can.

  12. the reluctant commentard

    brew install ollama

    For those running Homebrew, installation on macOS is as simple as running 'brew install ollama' from a terminal prompt.

  13. simmonsm
    Linux

    There should be removal instructions or a removal script

    All great stuff; I got it running on Linux and it worked well as a test, but now I want to remove all the installed files. How?

    So far, I did two things:

    ollama rm mistral:latest

    sudo rm -f /usr/local/bin/ollama

    but was anything else pulled down or installed?
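    Yes: on Linux the install script also sets up a systemd service and a dedicated `ollama` user, and pulled models live under /usr/share/ollama. A sketch of the manual clean-up, based on the official docs at the time of writing (paths may differ on your system, so check before deleting anything):

```shell
# Stop and disable the systemd service the install script created
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service

# Remove the binary from wherever it landed on your PATH
sudo rm "$(which ollama)"

# Remove downloaded models, then the service user and group
sudo rm -r /usr/share/ollama
sudo userdel ollama
sudo groupdel ollama
```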

    1. Julian Bradfield

      Re: There should be removal instructions or a removal script

      Read the uninstall instructions - depends on your system.

    2. Stratman

      Re: There should be removal instructions or a removal script

      I wonder if asking it "How do I uninstall you?" would work.

      I might as well type "I'm sorry Dave, I'm afraid I can't do that" now to get it over with.

  14. Aseries

    JARGON

    It is considered polite to define an acronym upon first usage, not just drop it into the text.

    After looking it up, I now know what a Large Language Model is; thank you.

    1. cyberdemon Silver badge
      Devil

      Re: JARGON

      Hear hear. I find that very annoying, too, when I don't know a piece of jargon.

      But, what rock have you been hiding under? And can I join you? I wish to avoid all technological and political developments of the next decade.

  15. Bitsminer Silver badge

    (bad) language models are good for education....

    When you have to fact-check the results of an LLM query, you end up searching the Internet for relevant facts to support or refute the meanderings of the machine.

    And, as a result, you learn.

    (Assuming you know you have to fact-check it to begin with. And can avoid the deeper rabbit-holes you are bound to find.)

    1. MonkeyJuice Bronze badge

      Re: (bad) language models are good for education....

      I find having to extensively fact-check every output from an LLM inferior to going to the relevant (already peer-reviewed) paper or manual and just, y'know... reading it.

  16. Anonymous Coward
    Anonymous Coward

    Sysadmin help

    I'm restarting an LLM project for a major Canadian airline and we are trying to get Groq running locally. I'm having issues running on an 8-way GPU system with this setup. Help!

  17. samzeman

    Stable Diffusion

    Note you can also install Stable Diffusion locally to do txt2img generation (the most fun), img2img (with a prompt to guide it), and, with an add-on, img2txt (which produces some strange prompts, though if you feed them back into a txt2img generator it makes good images).

    Then you can get your own checkpoints, models etc on civitai or huggingface.

    Honestly it's a little scary - the local Stable Diffusion you can get, on a home PC, can convincingly generate pictures of people's faces, without restrictions on the type of content, so the genie is definitely out of the bottle.

  18. staringatclouds

    But

    https://i.imgur.com/RlLEfCr.gif

  19. cageordie

    Not what I need

    What I need is an extension to my anti-virus toolkit that can block any sniff of an AI or LLM sneaking into my machine.

  20. Kevin McMurtrie Silver badge

    Without a GPU

    Tried running llama3:70b on my AMD 7900X3D server, which doesn't have a GPU card. It's like tech support chat windows: you ask a question and get a shallow answer a while later, sometimes followed by rambling. The CPU is currently in ECO mode, though. It's summer.

    This is fun, so thanks for the tip, Reg! I had tried running some AI stuff before but could never get the code compiled (CPU+GPU) with the same number formats as the pre-built models.

  21. Long John Silver
    Pirate

    LLM: an opaquely pre-selected and partially pre-digested database of unknown provenance?

    Before LLMs arrived, information (e.g. the text of a novel or of an instruction manual) could sit on a personal computer as a discrete entity of known storage requirement; similarly for numerical data represented, for example, as spreadsheets. In effect, the computer serves as a static library of human-readable works. The choice of works, the interpretation of individual works, and comparisons between works remain the responsibility of the computer's operator. If the data are numerical, their interpretation may be aided by a statistical analysis package; said package facilitates the selection and use of algorithms, but responsibility for these tools being appropriate to the task rests with their users.

    The data upon which the LLM has been 'trained' were selected as appropriate by the maker of the model. After that, even the maker is unable to explain the details of the numerical structure of the model's data store, or to show how the model constructs particular inferences. When seeking patterns to which one may attribute 'meaning', one becomes entirely reliant upon the capabilities of the software layer linking user to data.

    Viewed in that manner, LLMs, regardless of whether they are housed on a single computer or a cluster, require several layers of 'faith' from the user regarding the validity of each step in the process. Cynically, this may be taken as the ultimate inanity flowing from the movement towards 'big data': the base level of silliness being the construction of enormous data sets without prior thought about how each component could contribute to extracting as yet unspecified nuggets of knowledge and understanding from the noise. This approach is contrary to the parsimony required for well-directed enquiry. Moreover, its beguiling nature raises the prospect of unwary users failing to grasp the distinction between correlation and causality; the latter is accessible only via planned data collection within an experimental design.
