No, universities and lecturers tasked with marking have been intensely aware of this since it became a thing.
There is no easy fix.
One point: no argument that current LLMs have a tendency to churn out guff (even if impressively fluent, articulate and grammatical guff), but one thing they are not doing is simply "regurgitating" chunks of text yanked off the internet - a common misconception.
Do find out how transformer models - which underpin the likes of ChatGPT - actually function. It's rather more subtle (and interesting) than you might imagine.
> Fortan had it figured out in 1957, but that is a programming language.
Sorry, I don't follow (I cut my teeth on Fortran 66, as it happens :-))
> My point is, that we are not yet at a place where a computer can read a load of books on a subject and actually apply the knowledge to problems rather than just regurgitate it.
Well, not a load of books - rather, petabytes of internet stuffs. And you're mistaken if you think LLMs are simply "regurgitating" - it's far more subtle than that (have a look at how transformer models function). "Datamining linguistic associations to generate human-like responses to textual input" gets closer, but that doesn't really do it justice either.
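If it helps to make "rather more subtle" concrete, here is a toy sketch - in C, since that's what I write - of the scaled dot-product attention step at the heart of a transformer layer. It is emphatically not how any production LLM is implemented (real models use learned weights, thousands of dimensions, many heads and many layers); every number below is invented. The point is simply that the output is a similarity-weighted blend of internal representations, not a lookup of stored text.

    /* Toy single-query scaled dot-product attention - the core operation
     * inside a transformer layer. Nothing is looked up or copied from a
     * corpus; the output is a blend of "value" vectors, weighted by
     * query/key similarity. All numbers invented.
     * Build with: cc toy_attention.c -lm */
    #include <stdio.h>
    #include <math.h>

    #define N_KEYS 4   /* number of context positions (toy size) */
    #define DIM    3   /* embedding dimension (toy size)          */

    int main(void)
    {
        double query[DIM]          = { 0.9, 0.1, 0.3 };
        double keys[N_KEYS][DIM]   = { { 1.0, 0.0, 0.2 },
                                       { 0.1, 0.8, 0.5 },
                                       { 0.9, 0.2, 0.1 },
                                       { 0.0, 0.3, 0.9 } };
        double values[N_KEYS][DIM] = { { 0.2, 0.7, 0.1 },
                                       { 0.5, 0.5, 0.5 },
                                       { 0.9, 0.1, 0.3 },
                                       { 0.1, 0.2, 0.8 } };
        double scores[N_KEYS], weights[N_KEYS], out[DIM] = { 0.0 };
        double scale = 1.0 / sqrt((double)DIM), sum = 0.0;

        /* Similarity of the query to each key (scaled dot product). */
        for (int k = 0; k < N_KEYS; k++) {
            scores[k] = 0.0;
            for (int d = 0; d < DIM; d++)
                scores[k] += query[d] * keys[k][d];
            scores[k] *= scale;
        }

        /* Softmax turns similarities into a probability-like weighting. */
        for (int k = 0; k < N_KEYS; k++) {
            weights[k] = exp(scores[k]);
            sum += weights[k];
        }
        for (int k = 0; k < N_KEYS; k++)
            weights[k] /= sum;

        /* Output: an association-weighted mixture of the value vectors. */
        for (int k = 0; k < N_KEYS; k++)
            for (int d = 0; d < DIM; d++)
                out[d] += weights[k] * values[k][d];

        for (int d = 0; d < DIM; d++)
            printf("out[%d] = %.3f\n", d, out[d]);
        return 0;
    }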
> For this reason, I think LLMs are fundamentally flawed, and while there may be use-cases for them, it is far from the magic bullet that some people think it is.
I don't think of LLMs as "flawed" - for that I would need to know what they are supposed to be doing - and I really am not sure. So, for instance, a flawed sort algorithm is an algorithm that doesn't sort properly... but what's a flawed Large Language Model? What exactly is it that it is failing to do properly? Churn out nonsense in rather good grammatical prose? No... they're rather good at that. I really do not know what the use-cases might be.
One thing they don't claim to be, however (despite the mutterings of some unbright people), is some kind of general AI. I really don't think anyone the right side of clueless truly believes them to be a magic bullet. This is not to say, though, that some of the mechanisms they deploy - like transformer models - may not turn out to be useful building blocks in some future artificial general intelligence. Apart from anything else, the capacity to build associative networks of abstractions grounded in language (essentially what transformer-based LLMs do) actually sounds like rather a useful attribute for a putative AGI.
> I don't think the problem with mimicing the human brain is processing capacity. In any case, a datacentre full of A100s, I'm pretty sure, can easily match that.
Haven't run the figures, but I actually doubt that. 100 billion neurons and 100 trillion synapses is quite a lot (!) Plus consider the energy consumption of that datacentre compared with a single human brain...
> I think the problem is that the human brain doesn't work on boolean algebra and can't be represented in boolean algebra, no matter how many trillions of instructions per second you execute.
Not sure that's the issue either; information transfer in neural systems is in the form of discrete neural "spikes" propagating along axons and transmitted across synapses (albeit in analogue real time). That is a situation which most certainly may be - and in fact is, routinely - modelled in digital computers. (I have even done so myself.) It would not be hard, if it hasn't already been done, to devise hardware to do that very efficiently.
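For the curious, here's the sort of thing I mean, stripped to the bone: a single leaky integrate-and-fire neuron stepped in discrete time. It's a textbook-style cartoon rather than a model of any real neuron, and the parameter values are purely illustrative.

    /* Minimal leaky integrate-and-fire neuron: the membrane potential
     * decays towards rest, is driven by an input current, and emits a
     * discrete "spike" on crossing threshold, then resets.
     * Parameter values are illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        const double dt       = 0.1;    /* time step, ms          */
        const double tau      = 10.0;   /* membrane time constant */
        const double v_rest   = -65.0;  /* resting potential, mV  */
        const double v_thresh = -50.0;  /* spike threshold, mV    */
        const double v_reset  = -70.0;  /* post-spike reset, mV   */
        const double r_m      = 10.0;   /* membrane resistance    */
        double v = v_rest;

        for (int step = 0; step < 1000; step++) {
            double t    = step * dt;
            /* Inject a current pulse between 20 ms and 80 ms. */
            double i_in = (t > 20.0 && t < 80.0) ? 2.0 : 0.0;

            /* Euler step of dv/dt = (-(v - v_rest) + r_m * i_in) / tau */
            v += dt * (-(v - v_rest) + r_m * i_in) / tau;

            if (v >= v_thresh) {        /* discrete spike event */
                printf("spike at t = %.1f ms\n", t);
                v = v_reset;
            }
        }
        return 0;
    }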
> Take an example, ask ChatGPT to multiply two 4 digit numbers together. A calculator from the 1970s can do that way quicker than a human brain, but ChatGPT can't do it at all, unless the exact numbers you give were in the training data. The human with 1970s calculator can figure out very easily which buttons to press on the calculator to get the right answer, ChatGPT can't.
I imagine it would be rather easy to train ChatGPT to be able to figure out which numbers to press on a calculator for arbitrary numbers and a given arithmetic operation - or if not ChatGPT, certainly some other AI system. Hell, if existing AIs (read "machine learning algorithms") can beat humans at chess or Go, or figure out for themselves how to become human-level players of Atari games from nothing more than raw pixel access and some feedback, how hard can that be?
I do take one point you make: that it may not all be about scale. We know astonishingly little about the organisation of information processing in brains, aspects of which may be crucial to human-level general intelligence. But, as I suggested, current AI techniques such as the transformer systems used in LLMs - and perhaps predictive coding-style systems - may be on the right track. Time will tell.
> Yes it is, and it's no way 'Intelligent'
Hmm... well, good luck trying to pin down what "intelligence" means.
(Hint: this has been debated for millennia, in practice means something slightly different to anyone you ask, while attempts to elucidate its nature tend to get mired in terms like "understanding" which merely shunt the explanatory burden down another metaphysical rabbit hole, ending up in circular argument.)
Well, (human) intelligence may turn out to be "just [sic] a fancy statistics prediction algorithm" writ large -- very large! There are plausible and (to some degree) testable theories of biological cognition and intelligence which posit something along these lines, and which are receiving serious consideration; see, e.g., predictive processing theory.
Of course I'm not saying current AI is remotely in the ballpark of human-level cognitive and intelligent abilities; unsurprising, given the scale of information processing in the human brain (80+ billion neurons, 1,000+ trillion synapses), its phenomenal energy efficiency, and its billions of years of evolutionary "design" (plus lifetimes of learning) with vast real-world-experience "training data". What I am saying is that I don't believe there is any kind of "secret sauce" to human intelligence that we cannot in principle engineer -- and that current statistical models may actually be moving in the right direction.
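To give a flavour of the "prediction" idea - and this is a cartoon of my own devising, not a faithful rendering of predictive processing theory - the core loop is simply: predict, compare against what actually arrives, and nudge the internal model by the error.

    /* Caricature of predictive processing: an internal estimate is
     * repeatedly nudged by the error between what it predicts and what
     * is observed, so "perception" becomes error-driven refinement of
     * the model's predictions. Numbers are illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        double observation = 3.7;  /* the state of the world            */
        double estimate    = 0.0;  /* the model's current belief        */
        double lr          = 0.2;  /* how strongly errors update belief */

        for (int t = 0; t < 20; t++) {
            double prediction = estimate;                  /* top-down guess  */
            double error      = observation - prediction;  /* bottom-up error */
            estimate += lr * error;                        /* minimise error  */
            printf("t=%2d  prediction=%.3f  error=%+.3f\n", t, prediction, error);
        }
        return 0;
    }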
I think that's a fundamental misunderstanding. Most of these forks are essentially hobbyist (I don't intend that at all disparagingly), and are never going to attract a large user base - nor, I suspect, do they expect to. Meanwhile, if someone is seriously thinking, for whatever reason, of getting into Linux, they're surely going to google something like "Best Linux distribution for beginners" - and will find a handful of distros top all the lists (they do - I've just tried that). So they probably end up going with Mint. How is that "confused by choice"? Probably rather less so than buying a car, a guitar, a laptop (and way less so than, say, buying home insurance).
Meanwhile, the hobbyists fulfil a role in trying out new ideas, contributing back to the developer community, and occasionally gaining wider traction - because they've hit on something which turns out to be useful/attractive to enough other people, or perhaps to some business/industry need. That's pretty much how the major distros got to be major... and hell, Linus Torvalds himself started out as a hobbyist! Indeed, hobbyists tend to be people who, primarily, make stuff for themselves, because they can. And they tend to be highly motivated and rather good at what they do.
That's simply how Linux works. Do, however, feel free to ignore all this and go with the majors - I (mostly) do myself.
I've been quite happy with it too - having read the story cited in the article, however, I may reassess (after a little more research).
> Not sure how much swap partitions are used these days.
Ha. I find myself generally turning off swap - because a certain well-known computational suite (*cough* Matlab *cough*) has the annoying habit of hitting the swap when you accidentally request more RAM than is available, and bringing the system to its knees. This is almost certainly not what you want -- ever -- and is especially egregious when running in batch mode, possibly on someone else's machine. With swap turned off, it gracefully errors with an "Out of memory". In fact I cannot think of any scenario where I'd actually want my system to grind to an almost-freeze, as opposed to just erroring out.
LaTeX is indeed still de rigueur for serious technical/mathematical document preparation - simply because in that domain the typesetting, layout, and cross-referencing/citation tools are vastly superior to anything any WP application has ever mustered*. Plus, modulo an admittedly steep learning curve, it is way quicker and easier to write the markup than faff around with graphical doodahs.
* I get to do a lot of technical/mathematical reviewing, and can generally tell at a glance whether a submission has been prepared in LaTeX or a WP; if it doesn't make my eyes hurt, it's probably LaTeX.
To be honest, I thought the terms were equivalent (they do seem to be used interchangeably, if erroneously, rather frequently). This article explains it nicely. My understanding from the article, though, is that my in/on example would indeed reference prepositional rather than phrasal verbs (e.g., "get in the box", "get on the bus"), while your "based off" is phrasal. In the interests of pedantry, I'd be happy to rephrase my statement as: English phrasal/prepositional/phrasal-prepositional verbs are notoriously difficult for foreign-language speakers to learn.
By the way, I was not actually complaining about your usage of "based off", so the remainder of your post is misdirected.
> Or, quite possibly, f*cough*.
Uncalled for.
Perhaps not entirely relevant, but English prepositional verbs are notoriously difficult for foreign-language speakers to learn - there is really not much rhyme or reason to them. My partner is a native Spanish speaker, and while her English is pretty good (certainly better than my Spanish), she is totally flummoxed by "in" vs "on" (both are the same word, "en", in Spanish). I think she flips a mental coin every time - and invariably gets it wrong.
While I'm here, I need to get this off my chest: what the hell happened to the word "ground" in British English??? "I dropped my phone on the floor." WHAT FLOOR?? WE'RE IN THE MIDDLE OF A BLOODY FIELD.
To quote Unlucky Alf: I knew that were going t'appen.
Okay, okay, no-one is holding a gun to my head. But back here in the real world, taking a stand against using a slightly annoying piece of software vs. a catastrophic impact on my work and working life makes that a non-starter. (I am a research scientist in academia, and almost all remote meetings, presentations, symposia, conferences, workshops, lectures and student supervisions are on Zoom. It would be exactly the same for comparable employment anywhere else on the planet.)
I am not you, my work is not the same as yours - please do not lecture me on my job (or anything else).
> But from mid-teens onwards (when they start using computers as well as phones) they expect a good Mail experience ...
Heavens, where have you been? I have yet to encounter a teenager these days who uses email at all (except under duress for unavoidable school/higher education usage). Then when/if they get to the gainful employment stage, they will most likely have the appalling Outlook foisted upon them.
Isn't Éire - or indeed Ireland in English - also the name of the island on which the state of Éire and Northern Ireland sit?
To confuse things further, it must be said that many Irish consider the entire island to be Éire - i.e., they do not acknowledge the legitimacy of the border.
Indeed.
Re. (1), I suspect the answer is a resounding yes-no-maybe. Clearly human intelligence has been "designed" (by evolution) to solve very different problems than those for which LLMs have been designed. But this is not to say that some core principles involved in both may not happen to coincide. If we are to create artificial intelligence which is any kind of competition to the human variety, I suspect that some of the principles behind LLMs may turn out to be useful building blocks, rather than the whole deal.
Re. (2), let's not conflate intelligence with consciousness, though. I think much (arguably even most) of human - and certainly other biological - intelligence is not actually enacted consciously in the form of internal monologue. Much of it is "under the hood". This applies at the level of (mostly) not being consciously engaged in the action of driving a car, and extends even to the "creative" tasks which stand as poster-children for human intelligence, such as doing mathematics*, playing chess, or figuring out what someone else is thinking.
* This may sound odd, but as a mathematician myself, much of the time I truly am not conscious of - and am in fact unable to recapitulate - how I arrive at some result. The act of engaging in mathematics seems to involve something closer to a dream-like, highly-parallelised reverie rather than a linear monologue.
Well, my point still stands: PC users at work as well as home get what's preinstalled on their machine. Sure, I might have remarked that in the workplace there is almost certainly no option to change even if you wanted to.
And yes, MS exploited their hegemony on PC hardware very effectively to lock the business world into their software, and I don't see that changing anytime soon either - unless perhaps MS overplay their "cloud" hand (and pricing) to the point that it starts to make financial sense for business to bite the bullet and begin to disengage their MS dependencies*. But I'm not holding my breath.
* My own workplace (an academic institution) dived gleefully into MS's pocket about five years ago - since which time the IT infrastructure and support has become steadily and progressively more dysfunctional. It wasn't broke, didn't need fixing and IT costs have soared. I die a little every time some new crappy cloudy "solution" is foisted upon us. And don't even get me started on Outlook and Teams...
> That is simple? Really?
Yes, it really is - and if you think otherwise you should not be allowed anywhere near a computer.
Of course what's even simpler is buying a PC with Linux preinstalled - y'know, like your PC came with Windows preinstalled.
> You have obviously never done a clean install of either Windows ...
I have. It was a car-crash (see my earlier post).
Yeah, I experienced something like that, admittedly some years back (probably Windows 7 or 8). The hard drive hosting the original Windows installation had died, so having purchased a new unformatted drive, we attempted to reinstall from CD(s). The problem, as I recall, was a missing network card driver. Ended up having to download the network driver on a different machine, burn it onto CD, and then commenced a protracted faff trying to get the semi-functional half-installed Windows to actually read the CD and write the driver to somewhere it could find it again and complete the installation. Eventually got it working, but never want to go there again...
Doesn't sound like it's got a whole lot better.
> I'm terribly, terribly sorry for your Linux fanaticism, but 3% of the desktop market after DECADES of trying PROVES my point.
Um, no it doesn't, and shouting doesn't make it any more true.
Bottom line: people use Windows because that's what they get pre-installed on their (consumer or business) machine. And they're not going to change OS even if they know they can (which the vast majority probably don't), because they have no motivation, nor the tech knowledge*, to do so. It really is as simple as that. The same holds for Linux. It's not going to gain a substantial slice of the desktop market until such time (if ever, and for whatever reason) as it comes pre-installed on commercially-marketed consumer machines. The same has been true of Windows, MacOS and ChromeOS, and indeed iOS and Android in the mobile phone sector. It has virtually nought to do with the quality of the OS or available software.
See also my earlier post below.
* If you thought installing Windows from scratch on a Linux machine was any simpler than the reverse case, I've got news for you...
> That's not because suddenly, everyone will realize that the Linux desktop is wonderful. Sorry, folks, if it hasn't happened by now, it never will.
No indeed, and it's worth bearing in mind that that's certainly not how Windows gained hegemony in the desktop PC arena. They did that by making sharp deals with the PC hardware industry - you simply got MS Windows pre-installed as the OS that came with the PC (and later laptop) you bought from any major consumer outlet. Likewise in the workplace, it was simply on every machine that your company purchased. Whether you liked it or not -- let alone thought it was "wonderful" -- was a non-question. And the average consumer (or indeed business user) would most likely have no idea that alternatives were even available (not that they would be motivated to change for change's sake anyway).
The same will be true of Linux; it will never gain widespread traction in the consumer market until such time as it is routinely pre-installed on consumer PCs/laptops; and like MS Windows, its wonderfulness or lack thereof will not be the issue. Whether this happens as suggested by the article -- through MS relinquishing the market (and failing to take their entire consumer base into cloud-cuckoo land with them) -- remains to be seen. I can, though, imagine a world where the (diminished) desktop-PC-as-we-know-it world becomes a Linux-dominated one, with major PC manufacturers/vendors bundling pre-installed, possibly Windows-like, bespoke Linux editions, with support. Perhaps even, with the overt co-operation of Microsoft, as thin-client gateway machines to the MS cloud...
I've left Apple -- and desktop gaming -- out of this discussion. The former will maintain a core/niche following of graphics/media bods plus people who genuinely do think it's wunnerful. As for the latter, I've never been a gamer, so I'm really not sure which way that market is moving. But if MS are really getting out of the traditional desktop market, another OS will need to take up that slack - and I don't see any alternatives to Linux there.
There will inevitably be cases where "unsafe" code is unavoidable (or desirable) - especially if you're writing an OS. The thing about Rust is that it obliges you to explicitly declare sections of code as unsafe, making for better transparency.
Rust also has several features which make it easier and more natural to code safely without incurring performance penalties, where the corresponding C paradigms are inherently unsafe (e.g., on grounds of NULL pointer dereferencing, array bounds overruns, data races, non-portability, etc.) The compiler does a lot more of the heavy lifting in Rust, and performance may sometimes be improved over C, as the language gives the compiler more guarantees than C does, which it may exploit for aggressive optimisation.
There's an excellent article about this here.
(Disclaimer: I code a lot in C and not at all in Rust.)
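To put some (deliberately bad) C where my mouth is, here are two of the classic hazards alluded to above. Both will happily build into a binary (a decent compiler may grumble, but won't stop you), whereas Rust either refuses to compile the equivalent, panics cleanly at run time, or makes you spell the risk out in an explicit unsafe block.

    /* Two classic C hazards, both undefined behaviour at run time.
     * Illustrative only - do not run in anger. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int buf[4] = { 1, 2, 3, 4 };

        /* Out-of-bounds read: C performs no bounds check; Rust indexing
         * panics, or you use .get(), which returns an Option. */
        int oob = buf[4];            /* undefined behaviour */

        /* NULL pointer dereference: Rust has no null reference;
         * "maybe absent" values are Option<T> and must be checked. */
        int *p = malloc(sizeof *p);  /* may legitimately return NULL */
        *p = 42;                     /* undefined behaviour if it did */

        printf("%d %d\n", oob, *p);
        free(p);
        return 0;
    }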
As I recall, in the early noughties "gazunder" (verb) gained some traction as a term in the English property market as a sort of obverse to "gazump"; it's where the buyer waits till exchange of contracts is imminent and then drops the price of their offer.
(Note to non-English readers: this may make no sense to you if you are unaware of how completely barking English property law is. Essentially, acceptance of an offer on the sale of a property implies no contractual obligations on behalf of either the buyer or the seller, either of whom may pull out of the deal or attempt to renegotiate the sale price with no penalty. Contractual obligations only kick in on "exchange of contracts" - which only happens after the lengthy and costly process of conveyancing, surveys, etc. Think about that and weep... Note that this is only the case in England - Scottish property law, for example, is comparatively sane.)
I think it's quite possible that some of the current naive NN architectures and training/learning models will find a place as building blocks in more sophisticated organisational schemes.
That human brains have the benefit of billions of years of evolutionary honing (plus lifetimes of training), operate at computational scales which dwarf current technologies (~ 100 billion neurons, 1000 trillion synaptic connections, anyone?) and operate with insane thermodynamic efficiency -- plus the fact that evolution hardly goes out of its way to make its organisational and engineering principles in the least bit transparent -- suggests that "one day" may be rather far in the future.
Still, got to start somewhere...
> The architecture needs a hybrid approach combining overt symbolic processing with NN / vector classifier engines.
The problem is that "overt symbolic processing", affectionately(?) known as GOFAI (Good Old-Fashioned AI) hit an impenetrable wall of combinatorial explosions some time around the mid-80s, and remains virtually dead in the water. (Whether hybridisation with NN architectures might resurrect it is moot; I have my doubts.)
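To put a rough number on "combinatorial explosion": with a chess-like branching factor of around 35 legal moves per position, exhaustive symbolic search is out of the question within a handful of moves, never mind anything resembling open-ended reasoning. A back-of-envelope illustration (figures approximate, for flavour only):

    /* Rough illustration of combinatorial explosion in exhaustive
     * search: nodes ~ branching_factor ^ depth. Figures approximate.
     * Build with: cc explosion.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double branching = 35.0;  /* ballpark legal moves in chess */
        for (int depth = 1; depth <= 10; depth++) {
            double nodes = pow(branching, (double)depth);
            printf("depth %2d : ~%.2e positions\n", depth, nodes);
        }
        return 0;
    }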
Having said which, while "overt symbolic processing" is almost certainly not the way human cognition and intelligence works, there are some interesting initiatives appearing on the interface between cognitive neuroscience and machine learning, such as predictive coding. Whether such developments turn out to have traction remains to be seen.