Surely so.
Also looking forward to the formation-swarms sweeping up some of the growing piles of space junk in low Earth orbits.
> by the end of that process the Model itself does not store any of those Copyright Works.
Right. It just reproduces their subject matter and/or style, which is perfectly legal. Like, when I was at college I drew cartoons in the style of the Peanuts characters we all knew well. Had I traced them off a published strip, I could have been sued; were an AI to copy pixel for pixel, it too would be in trouble.
And don't reproduce the feckin' trademark! So I took care to sign myself Schplatz.
Some sense at last emerging from the AI shitfest.
Now, how the hell do you get an AI to understand what a trademark is and recognise one when it sees it? IMHO this problem might just focus minds rather usefully.
Collabora is mainly the document editing component of openDesk, with many others such as NextCloud and XWiki providing other components. Both outfits acknowledge Collabora's active support for openDesk. Collabora in turn is built on top of LibreOffice, to which they also contribute substantially, along with German outfit allotropia.
None of this changes the fact that it is ZenDiS, the German Government-owned Center for Digital Sovereignty, that pulls openDesk together as a coherent deal and makes it available.
A splendid example of Anglo-German cooperation, IMHO. Long may it continue.
Not the first to get the hump over Wikipedia's editorial practices and fire up an aspirational "better than" to show how it should be done, won't be the last. The few that don't soon fade away just barely sputter on in the background.
It's all down to the open and democratic consensus-building which lies at the heart of the Wikipedia ethos - and hence the way the MediaWiki software works.
It's surprising how many people cannot bear the idea of losing editorial control, passionately believe that Wikipedia "ought" to say what they want it to say, and regard us Wikipedians as the attached icon. I tell them to go away and build their own, which makes them hate me even more.
You are playing with words. We all talk of how technology is constantly evolving - "evolution not revolution" - and we all know we don't mean self-replication. It should be obvious even to you that I don't either. Yet you assume the contrary. Not impressed.
No shit, Sherlock. We all agree that AGI needs a cognitive step or twenty beyond today's crud. But remember, if we can evolve from flatworms, there is no reason that AGI cannot eventually evolve from GenAI. The basic flatworm biochemistry is still the lowest-level substrate in you and me. So too will GenAI form a low-level function for the AGI on top of it.
General Intelligence has evolved several times, following independent lines. Those which survive today include cephalopods, cartilaginous fish, birds and mammals. The neural physiology of these brains varies enormously, yet all of them trace back to some primeval flatworm with nothing but bilateral symmetry and a nerve cord to its name. They all obey exactly the same laws of physics as the silicon chip - heck, we can even implant chips in the human brain to offer basic vision for the blind and sound for the deaf. It is groundless metaphysical dogma to suggest that these chips cannot be developed to follow the carbon wetware to true AGI.
Sure today's AIs still lack the basic requirement of semantic extraction and modelling which underpins cognition, but for how long?
It's not fair to say that some people "fail" the Turing test, they are just not very bright - and it shows.
But, as you say, that does not mean the Turing test can be debased to embrace mimicry without understanding.
We have learned a lot about how the brain, and hence the human mind, work since Turing's day. The Turing test is more a test of cognitive understanding and semantic reasoning than a test of simple mimicry via statistical filtering, no matter how high the intelligence being imitated.
> Are they that maliciously stupid?
More likely they are just thoughtlessly stupid, having never heard of the Law of Unintended Consequences.
The only people to benefit from this will be those rich and powerful enough to do the employing - or not, as they feel inclined. Now that's what I call core Labour Socialism!
A cascade of FFS! causes:
1. The enactor should not have been running slow.
2. Nothing to monitor its status.
3. The various boxen had no idea what to do about it before jumping in feet-first, cuz nobody had thought through Murphy's Law.
4. Someone who knows what they are doing should have been retained with sweeteners, not driven out by bullying manglement.
I expect there are several more.
Wikipedia hit this "are we allowed to hate our fellow contributors?" issue a good many years ago. After much heated exchange along the lines seen here, we formed a consensus that who you are is axiomatically irrelevant to "the encyclopedia that anyone can edit." All you need to do is stick to our community rules and not bring your personal bias with you. Founder Jimmy Wales reserves the right of personal diktat, and stepped in to ban paedos. So, as long as nobody knows you are a paedo, and you obey house rules, you can contribute to Wikipedia still.
We all know and love Linus Torvalds' personal diktats, but I never heard of one that was not community/content based.
We also all know and have mixed feelings over the Debian overlords' community diktats. From now on, I am tempted to refer to them all as Jimmy.
My ideology is self-evidently neutral, it's all the others that are biased.
More seriously, ideologies are made up in order to try and guide society away from primitive savagery. A "neutral" ideology would fail to give any direction, and be as aimless as its starting point. It's a contradiction in terms.
"You are in the doghouse for AI avoidance."
Next week: "Oi you, stop that! You are in the doghouse for churning out AI slop all day long!"
"What's that, you want a meeting to agree an official AI usage window? Oh, dear, that's above my level of responsibility, I'll have to escalate that to my Demi-God."
Three years later: "Oi you, stop that! You are doing useful work and scuppering my attempts to tilt the AI Policy my way! Make sure you are at the 100th meeting next month."
The problem is, smart processing of human voices gets ever more aggressive: breathing sounds filtered out along with the office hubbub and vacuum cleaner, comfort noise re-inserted in the resulting deathly quiet pauses, frequency spectrum massaged to a standard formula, and/or high frequencies "restored" and phasing "optimised". Goes without saying the AI rendition suffers a similar fate. Can YOU tell what source an "enhanced" processed audio track comes from? Not if the enhancement developers can help it! Who is this feckin' Tureen anyway? Did he invent mock turtle soup or somethin'?
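For the curious, that gate-then-fill stage of the chain can be sketched in a few lines. This is a toy illustration only - the function name, frame size and dB thresholds are my own assumptions, not any real codec's numbers:

```python
import math
import random

def enhance(audio, rate, gate_db=-40.0, comfort_db=-60.0):
    """Toy 'enhancement' pass: gate out quiet frames (breathing,
    background hubbub), then re-insert low-level comfort noise in
    the resulting gaps. All thresholds are illustrative."""
    frame = rate // 50                 # 20 ms frames
    gate = 10 ** (gate_db / 20)        # RMS threshold for "silence"
    comfort = 10 ** (comfort_db / 20)  # comfort-noise amplitude
    rng = random.Random(0)             # fixed seed: deterministic demo
    out = list(audio)
    for start in range(0, len(out) - frame + 1, frame):
        seg = out[start:start + frame]
        rms = math.sqrt(sum(s * s for s in seg) / frame)
        if rms < gate:
            # frame judged "silent": replace it with synthetic noise,
            # so whatever was really there (breath, hum) is gone for good
            for i in range(start, start + frame):
                out[i] = comfort * rng.gauss(0.0, 1.0)
    return out
```

Once the original low-level content has been thrown away and synthesised back like this, asking "which recording did this come from?" is exactly as hopeless as the comment says.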
If I set up a task to correct the spelling of "hipopottamus", it is ludicrous to suggest that, because MS Word can do that, it is indistinguishable from an educated person. So it is with voice impersonations. Sorry mate, but whatever you are on, you need to come down off it.
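To labour the point: the whole spelling "feat" is a mechanical nearest-match search, no understanding required. A minimal sketch (the word list and helper names here are mine, purely for illustration):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(word, dictionary):
    """'Correct' a misspelling by picking the closest dictionary word."""
    return min(dictionary, key=lambda w: levenshtein(word, w))
```

Run `correct("hipopottamus", ["hippopotamus", "rhinoceros", "hypothalamus"])` and out pops "hippopotamus" - distance arithmetic, not education.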
Ever since moving pictures and recorded sound were introduced, people have been fooled by crafty playback scenarios.
This was, um, errr, not quite what Alan Turing had in mind when he sought to test if there was a human level of cognition in the delivery.
Meanwhile, it is hardly remarkable that AIs can write intelligible English in the style of some given author. That is mere prettying-up of a prerequisite - the word sequence - for a Turing test for cognitive understanding behind the written words. So it is with these little speeches.
What we see here is a Muppet test for Muppets who imagine they have been fooled by an actual Turing test.
CLI - Man pages tell you how to do everything except what you actually want to do.
WIMP GUI - Menus and icons offer you everything you don't want to do.
Touch GUI - Sod Pokémon, this game is about discovering all the gestures that pop up what you don't want to do.
AI assistant - Responds with answers to everything you didn't ask about.
UK Gov has bottomed it out. According to the BBC, they have just diagnosed the root problem: The true extent of cyber attacks on UK business - and the weak spots that allow them to happen. Not enough spent on cybersecurity perhaps? Aww, c'm on, no, the real problem is that it's too easy these days to get hold of ransomware.
> "The model is very prone to going off topic, producing responses that are not grammatically correct, or simply outputting garbage"
So, a pretty solid rendering of your state-of-the-art chatbot then, if perhaps a little slow.
Maybe he should build a GPU card in Minecraft, to go with it?
... Hey, what, Mozilla have started a project for a desktop GUI coded in Minecraft? Are you sure you didn't just read that on the Internet?