Interesting that this is "very unique" rather than ...
say, moderately unique
>>We accept that genetics have a role in behaviour in every other animals but somehow we are meant to believe that humans are magically different.
Nope, that's a straw man: everyone accepts genetics has at least some role in behaviour. That does not mean you can identify a genetic predisposition to violence nor, even if there were one, would it ever be strong enough for you to deduce criminality from genetics.
... to come up with a simple paragraph outlining both the capabilities and deficiencies of current generation AI. And yet, such a paragraph has already been created by AI...
Somebody on Threads shared this AI-generated (from customer reviews) Amazon doozy:
=====8<=====8<=====8<=====8<=====8<=====8<=====8<=====8<====
✰✰✰✰✰ 3 out of 5
28 global ratings
Customers say
Customers like the appearance of the planter, mentioning it's beautiful and has a nice hanging pots. They like that the pots are weathered and have a sizeable chunk missing from the lip. They also like that it has shattered to bits and is a waste of money.
=====8<=====8<=====8<=====8<=====8<=====8<=====8<=====8<====
LLMs are great at recombining tokens, but they really don't have any mechanism for knowing what anything actually means.
I'm not sure I have seen any real evidence for that. Is the training going to get better? Are the LLMs going to get so much better the training can be the same? Or is some new form of AI chatbot that isn't really an LLM going to appear? Absent any of that, I'm sceptical.
"the UK poverty line if their income is below 60% of the median household income after housing costs, so unless all households have the same income, there will always be households "in poverty""
Take the following incomes after housing costs in thousands of pounds per annum: 2, 2, 2, 3, 3, 5, 10, 20, 100. The median is 3. The minimum is 2, which is over 60% of the median.
Not an expert but I understand that H2 isn't an obvious fuel for combustion outside rocket engines: its reasonably high energy per mass is bugger all per volume once it's not liquefied. It burns hot so that air combustion is likely to produce NOx and it degrades some of the metals it comes into contact with.
Emacs has a CUA mode.
You aren't going to sway me with CUA. I'm an old Smalltalk user from back in the day when the mouse buttons were red, yellow and blue, and we were already using Ctrl-Z, X, C and V. These made their way into the Apple HIG (IIRC), and the IBM CUA didn't adopt them till later (Ctrl-C was <BREAK> for IBMers).
If you scan my head, you won't find Sonnet 18 in there. But if you say: "Shall I compare thee to a summer's day?" I'll reply "Thou art more lovely and more temperate: rough winds do shake the darling buds of May ..."
This isn't because I have read so much Shakespeare I can reproduce this merely by "simulating Shakespeare" - it's because I know the poem. It is "in my head" in some shape or form (adapted weights of neurons, presumably), but there's no magic involved: it's just not stored as text.
What's even cringier than the F-bomb, and the ensuing uncomfortable silence, is this bit, missing from most reports:
Elon Musk: "Jonathan, the only reason I'm here is because you're a friend. Look, what was my speaking fee?"
Andrew Ross Sorkin, laughing: "You're not making any ... First of all, I'm Andrew."
... Seriously. Spitting out PII such as email addresses and telephone numbers, and whole expanses of undigested verbatim training material.
Guardrails might be much more important... they may be required to stop your LLM regurgitating material it shouldn't.
Even if you haven't got time for anything else, scroll down to the first embedded video for a laugh out loud moment at the attack used.
https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html
... is less the threat to us (mediated by limiting its access to weaponry, manufacturing, etc) and more the concern that we will have created an enslaved sentient creature.
I don't think there's anything magical happening in an animal brain, it's just enough neurons, well enough connected, for consciousness to arise. So sooner or later, with enough farting around with neural nets, we're going to create conscious beings.
I'm always saying it, but accountants who work in the IT sector should at least have an actuarial bent, if not be actual actuaries. So much of IT cost is about accurately assessing (or at least somewhat accounting for) risk. There is literally no other way to justify a back-up or business continuity system: on the bottom line it just looks like wasted money. And there's no way to properly cost a project without considering risk, either.
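The point about backups only making sense under a risk calculation can be sketched in a few lines. All the numbers below are invented purely for illustration — the shape of the argument is what matters, not the figures:

```python
# Toy expected-loss comparison: a backup system that looks like
# pure cost on the bottom line can still win once risk is priced in.
# Every number here is a hypothetical assumption.

annual_backup_cost = 50_000        # assumed yearly spend on the backup system
p_data_loss = 0.02                 # assumed annual probability of a data-loss event
loss_without_backup = 5_000_000    # assumed cost of an unrecoverable loss
loss_with_backup = 200_000         # assumed cost when you can restore from backup

expected_cost_without = p_data_loss * loss_without_backup
expected_cost_with = annual_backup_cost + p_data_loss * loss_with_backup

print(expected_cost_without)  # 100000.0
print(expected_cost_with)     # 54000.0
```

On the bottom line the £50k looks like dead money, but the expected annual cost with the backup system is roughly half the expected cost without it — which is exactly the actuarial framing the post is arguing for.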