Re: As a learning example, this is fantastic.
I agree. Not a fan of the LLM mania - which looks like many a hype of the past - but it behooves programmers to stay in touch with what the tech does.
The same author penned the "roll your own AI code assistant" article and explained what RAG consists of - i.e. how a model trained on the wider world can still seem to infer things about your own code base, something I'd noticed myself. Ditto on this article: the people complaining about parsing totally miss the point, which is how you anchor your system on facts rather than hallucinations. Calling out to an external system that actually knows seems beneficial. Ask Air Canada, whose bot lost them a court case when it mis-cited the procedure for bereavement fares.
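For anyone who hasn't poked at RAG, here's roughly the shape of it - a toy sketch, not the article's implementation. embed() and complete() below are hypothetical stand-ins for a real embedding model and LLM endpoint, and the "documents" in the demo are made up:

    # Minimal RAG sketch: retrieve your own documents, stuff the best
    # matches into the prompt, then let the model answer from those.
    from math import sqrt

    def embed(text: str, dims: int = 64) -> list[float]:
        # Toy stand-in: hash words into a fixed-size count vector so the
        # sketch runs; a real system calls an embedding model here.
        vec = [0.0] * dims
        for word in text.lower().split():
            vec[hash(word) % dims] += 1.0
        return vec

    def complete(prompt: str) -> str:
        # Placeholder for the LLM call; it just echoes the prompt it got.
        return "[model answers here, grounded on:]\n" + prompt

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
        return dot / (norm or 1.0)  # guard against zero vectors

    def answer(question: str, documents: list[str], k: int = 3) -> str:
        # 1. Retrieve: rank your own docs (code, manuals, policies) by
        #    similarity to the question.
        q = embed(question)
        ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
        # 2. Augment: put the top-k docs into the prompt as the ground truth.
        context = "\n---\n".join(ranked[:k])
        prompt = ("Answer using ONLY the context below; say so if it isn't there.\n"
                  f"Context:\n{context}\n\nQuestion: {question}")
        # 3. Generate: the model answers anchored on retrieved facts rather
        #    than on whatever it half-remembers from training.
        return complete(prompt)

    if __name__ == "__main__":
        docs = ["Refund requests must be filed within 90 days.",    # invented
                "Carry-on bags must fit under the seat in front."]  # invented
        print(answer("What is the refund deadline?", docs))

The whole trick is step 2: the model is instructed to answer from documents you retrieved, which is what keeps an Air Canada-style bot from inventing policy out of thin air.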
And, from my Copilot experience, hallucinations are very much du jour: if you write Python using Polars, a newer dataframe library than the more popular Pandas, Copilot is as likely as not to offer suggestions in Polars syntax while slyly slipping in Pandas methods that don't exist in Polars.
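Concretely, the kind of mix-up I mean (my own contrived example, not a verbatim Copilot suggestion) - the live lines are valid Polars, the commented-out line is the Pandas-ism it likes to slip in:

    import polars as pl

    df = pl.DataFrame({
        "city": ["Lyon", "Lyon", "Paris"],
        "sales": [10, 20, 30],
    })

    # Valid Polars: expression-based group/aggregate (group_by in recent
    # versions; older releases spelled it groupby).
    totals = df.group_by("city").agg(pl.col("sales").sum()).sort("city")
    print(totals)

    # What Copilot tends to offer in the same file - pure Pandas. It fails
    # in Polars: there's no .reset_index() because there's no index at all,
    # and current versions don't have .groupby() either.
    # totals = df.groupby("city")["sales"].sum().reset_index()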
Overall, can't say I'm thrilled with Copilot; it tends to clutter VS Code with useless suggestions 90% of the time when you're writing code you already know. The Zed editor on macOS somehow manages to offer much better suggestions and doesn't heat up the CPU nearly as much doing it. But occasionally Copilot can be useful when you ask it to flesh out code you're less familiar with, like a bash/zsh one-liner to check an argument's value. Early days, early days.
Actual nitty-gritty tech details, rather than breathlessly jonesing about how many billion params which model has? Keep 'em coming.