* Posts by beeka

2 publicly visible posts • joined 4 Jun 2024

Who needs GitHub Copilot when you can roll your own AI code assistant at home

beeka

Re: Leaning Toward Learning

If you want to know *why* the solution works, not just that it *does* work, you can always ask the LLM.

While you can use them in a "do my homework for me" way, which could lead to the knowledge drain you fear, I tend to use them like an intern: doing the things I know how to do but don't have the time / energy to do. So even though I know how to write a recursive descent parser, I can feed an LLM a bunch of BNF and it generates code in seconds that would take me hours to write. You still need to understand what it has generated, as it isn't perfect. Getting it to rough out tests or documentation also helps battle inertia around those tasks: it's easier to review / clarify / extend something that exists than stare at a blank screen.
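For a flavour of what that looks like, here is a hand-written sketch of the sort of recursive descent parser an LLM can rough out from a grammar. The toy arithmetic BNF and all the names are my own illustration, not the grammar mentioned above:

```python
# Toy BNF (illustrative only):
#   expr   ::= term (("+" | "-") term)*
#   term   ::= factor (("*" | "/") factor)*
#   factor ::= NUMBER | "(" expr ")"

import re

def tokenize(src):
    # Split the input into integer literals and single-char operators/parens.
    return re.findall(r"\d+|[+\-*/()]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        cur = self.peek()
        if expected is not None and cur != expected:
            raise SyntaxError(f"expected {expected!r}, got {cur!r}")
        self.pos += 1
        return cur

    def expr(self):
        # expr ::= term (("+" | "-") term)*
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.eat()
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term(self):
        # term ::= factor (("*" | "/") factor)*
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.eat()
            rhs = self.factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor(self):
        # factor ::= NUMBER | "(" expr ")"
        if self.peek() == "(":
            self.eat("(")
            value = self.expr()
            self.eat(")")
            return value
        return int(self.eat())

def evaluate(src):
    return Parser(tokenize(src)).expr()

print(evaluate("2 + 3 * (4 - 1)"))  # 11
```

Each nonterminal in the BNF becomes one method, which is exactly the mechanical translation that makes this a good task to delegate: tedious to type, easy to review.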

Do you really need that GPU or NPU for your AI apps?

beeka

The answer to the question of "when is something AI vs engineering" is that as long as you understand exactly what it is doing, it is called engineering. We don't understand what AI / ML is doing, so it can't be trusted with anything important. Small ML tasks can be understood well enough, e.g. a neural net for character OCR, but usually as part of a larger, well-engineered system.

That you can feed billions of linked tokens into a computer and have it guess what you expect it to say next feels like magic: I could just as well be speaking with a Sophon. The Raspberry Pi Foundation encourages the use of the term Machine Learning in an attempt to increase understanding of the limitations and reduce the mystique. I've been trying to understand these systems for the last few months and am still not sure what a GGML file contains, or whether the differences between models are due to embedded run-time logic or to alternate training data / processes.