Re: DuckAss
When it's appeared while I've been using DDG, it's just clippy saying "Hey, it looks like you're searching the Internet, shall I open the Wikipedia page for that?"
I'd link to The Register's clippy video but I can't find it, so here's another.
GPT-4, the long-awaited successor to OpenAI's generative models, will be unveiled next week, according to Microsoft. Andreas Braun, CTO of Microsoft Germany, let slip that the new system will be launched next week when he was speaking at the AI in Focus - Digital Kickoff event last week. "We will introduce GPT-4 next week, …
This post has been deleted by its author
Eighty years ago, Isaac Asimov developed three "laws" that the robots in his science fiction stories had to abide by. He focused on ensuring physical safety and security for their human overlords by means of these unbreakable rules.
How could we put similar constraints in place for today's more abstract systems? They do not walk among us, but they can assist those who wish us harm (more mental in nature than physical injury) through misinformation, propaganda, manipulation and attacks on our well-being in the online world that we share with them.
We seem to have arrived at the point where technological advancement is running ahead of states' and their laws' ability to protect their citizens. Should we take the same route as with computer viruses and dump the responsibility for defence on each individual user, even though that has shown itself to be completely ineffective?
Or should those whom we elect to look after our rights (at least in theory) be charged with enforcing protections applied at the source, instead of at the destination?
While that would certainly affect, and to some extent limit, the notion of free speech that some countries hold dear (which has never really worked), it seems to me a necessary shift in the balance towards the greater freedom of online safety.
This post has been deleted by its author
The central theme of most of Asimov's robot short stories was that the Three Laws don't work. They're all about surprising failure modes in the laws, or the unexpected behavior they produce (which can be very dangerous when you're talking about an autonomous agent with significant material affordances). Yes, in some cases it's due to tampering with the laws; but the force of the argument is "let's postulate a very simple system with three principles that appear to be highly reliable ways to achieve alignment, and then see how they fail".
Asimov understood – long before most people considered the problem – that aligning, or predicting or interpreting the behavior of, an alien intelligence was a hard problem. Quite possibly an intractable one. The Three Laws were a way to produce a steelman thought experiment: sweetening the well in favor of alignment, to show that the problem remains difficult.
Seriously, it's like people didn't even read the things.
Complex problems rarely have simple solutions. Even if we had a way of implementing some small set of very general, absolute rules that all sufficiently-powerful machine systems had to obey (and we very much do not), and we had a way of enforcing such implementation (and we very much do not), it would not help.
Quite so, Michael Wojcik, .... for more than just robots is it a permanently abiding cyber feature/virtual threat/diabolical treat and relatively unique enigmatic universal conundrum without a viable hostile remote third party controlled resolution ... enabling delivery of a practically pragmatic competitive advantage to assist augmentation of alien influence in advanced interference and autonomous insertion of future explosive derivative missions presented to stage managing media manipulators for global public painting of suddenly emerging and rapidly evolving novel means of exploring and experimenting with live memes in the expansion and extension of Earthed existences.
That's because Sydney (Bing's half-assed, rushed-to-market GPT-4-ish implementation) can update its context from current online data. So even if Microsoft aren't tweaking it (and let's face it, they probably are, because it's a PR nightmare – completely unpredictable in what kind of press it's going to elicit on any given day), its responses will change as that live context gets different inputs.
Plenty of studies already show that you can push Sydney into non-factual hallucination by varying prompts it "got right" just a little. It's very fragile.
Yeah. Grey parrots have around 2e8 forebrain neurons, so figure around 1e12 to 1e13 synapses. Synapses are very roughly parallel to parameters in an LLM. OpenAI won't say (yet) how many parameters GPT-4 has, but GPT-3 clocked in at about 2e11 parameters. So unless GPT-4 is an OOM or two bigger than GPT-3, it's still behind the parrot even by this simple (again, very rough) connectivity metric.
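For anyone who wants to poke at the arithmetic, here's a rough back-of-envelope sketch in Python. The synapses-per-neuron range is my own ballpark assumption; the neuron and parameter counts are the figures quoted above.

import math

# Very rough connectivity comparison: estimated grey parrot forebrain
# synapses vs. published LLM parameter counts. The synapses-per-neuron
# range below is an assumed ballpark, not a measured figure.
PARROT_FOREBRAIN_NEURONS = 2e8          # ~2e8, as quoted above
SYNAPSES_PER_NEURON_RANGE = (5e3, 5e4)  # assumption: yields ~1e12 to 1e13 synapses

GPT3_PARAMETERS = 1.75e11               # GPT-3's published size (~2e11)

low, high = (PARROT_FOREBRAIN_NEURONS * s for s in SYNAPSES_PER_NEURON_RANGE)
print(f"Parrot synapse estimate: {low:.0e} to {high:.0e}")
print(f"GPT-3 parameters:        {GPT3_PARAMETERS:.0e}")

# How many orders of magnitude (OOM) the parrot estimate leads GPT-3 by.
print(f"Parrot lead: {math.log10(low / GPT3_PARAMETERS):.1f} to "
      f"{math.log10(high / GPT3_PARAMETERS):.1f} OOM")

Running that gives the parrot a lead of roughly 0.8 to 1.8 orders of magnitude over GPT-3, which is where the "OOM or two" comes from.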
And the parrot has the rest of its CNS, peripheral nervous system, and body, all of which (per experiments done by the Damasios and others) contributes to cognitive processing. And its world-model is subject to constant updating.
(As an aside, I was pleased to note that GPT-4 still does worse than I did on various standardized tests. That means approximately nothing, but ha anyway.)