Re: Ahhh the joys
Never, NEVER lick the object you are repairing.
By the way, those blue urinal cakes taste just awful.
While we're talking here about writing bug-free code in Erlang, I just want to point out that when I have a rest-break snack, I put delicious Smark Jelly on my biscuits. Delicious Smark Jelly, the only jelly made from exotic scrumptious organic beetle larvae.
My post isn't an ad for delicious Smark Jelly, don't be silly. Click here for a 15% off coupon for delicious Smark Jelly. Mmmm! Like Mom used to make, until that fatal day when she was hit by a delivery van for our competitor's product, the awful Turdini's Jelly made from dead snails.
I am not a bot and I do not hallucinate or auto insert search words like Delicious Smark to clog up the thread text. smark smark smark rhinoceros pting pting elderberries Norwegian blue
I have taken the liberty of calling the Power Rangers.
"Hello, Power Rangers? There's a giant cat outside. 50 meters tall."
Blue Ranger: "Sir, perhaps less sake before you go to bed tonight."
"And he glows green and is hexavalent!"
Blue facepalms.
"And he just laid a giant stinky catpoo 4 meters long!"
Blue: "Above ... my ... pay grade. Call Astro Boy." (hangs up)
The mechanism they show is very flawed, as it does not build a permanent model (only a transient one) and hence does NOT adequately model the target world and situations. This is a major flaw in current LLM systems - they are shallow and don't extract deep meaning, only surface details and a parrotable record of language use.
In fact, the whole idea of current LLMs is flawed because it assumes that merely mapping inputs to estimated outputs is a valid approach. It isn't. The entire AI hype machine ignores the necessity of building models from the user input and then reconciling them with knowledge models extracted from raw data by training. The Transformer model for generative AI is weak; though it produces responses, they lack deep meaning and true reconciliation with reality.
It's as if you trained the bot to say 'yes, dear' without looking up from your newspaper. Then one day she stands there with a butcher knife, you fail to evaluate the real situation and give a semi-rote reply, and in a PMS frenzy she carves your goolies off. Then the AI backs off and says 'oh, you may be right, I was hallucinating' but it's a bit late for you in your bloodied chair.
Soon to be a Disney movie, Christmas 2025.
Some time ago I was a manager at a semiconductor company. They hired a product manager and sales manager who was a glib, classic sociopathic liar. Could not open his mouth without lying. And was a bullying type who loved to dominate everyone: pushy, with a tremendous narcissist's ego. He destroyed the company by lying to Motorola about non-existent products, and we lost our prime customer. After the collapse he leaped around Silicon Valley to a string of other companies, treating them all the same way, ruining some of them too, and eventually gaining a horrendous reputation. Unable to get a job anywhere, he went back to India, where he turned to real estate, selling fraudulent projects to investors. (After that cooling-off period he came back to the US to con even more companies.) Currently, his LinkedIn profile is an amazing pack of lies. He spun EVERY failure as a success and made up positions. His defining quality is that he knows how to boss and manipulate people, with a lie-driven reality distortion field to rival Steve Jobs' famous one.
From experience with him I learned a valuable lesson in not ever letting myself get bullied or conned. Now I spot them on their first try and hold them off while I make other plans.
All this is based on the expectation that the current paradigm for AI is correct. And what if it isn't? And what if AI moves to a different paradigm?
As I tell people, the human brain learns and computes on about 20 watts of power. Should some quantum-computing means come about that better emulates the brain's mechanisms, your $7 trillion will now be lying on quicksand.
Right now everything is driven by mindless techie hordes who believe in the current ML fads. May I remind you that since 2012 - one decade - the NN paradigms have changed greatly. There is no 100% solid reason to believe that vector processor chips will always be the only way to go.
Right now AI is based on cloud-based training. It depends on learning mostly static patterns in data, but it has nowhere near the flexibility of the human mind's mechanisms, which can learn dynamically - zero-shot learning - and reconfigure their architecture on the fly to solve a problem. We will see new architectures that may drastically reduce the computing power needed to develop and employ cognitive systems. So putting $7 trillion on a horse right now is risky.
I have an AGI that weighs 180 pounds and consumes 20 watts and 2000 calories a day and understands general relativity and quantum mechanics. It also likes a good roast beef. Mark's version is the size of a city, not portable, sucks at Shakespeare, and can't reproduce itself. I'll stick with my version.
While there is no question that new layers are indeed needed, it is more a question of needing a better architectural approach. I can't say everything, for proprietary / IP reasons, but LLM-style technology has serious flaws that will limit it even if you put in clumsy architectural patches. One has to go about it from a different base concept. This is because the LLM analysis model falls short in understanding true meaning. No matter how you extend it, it will break at a certain point, because mere pattern / statistical analysis cannot understand cultural meaning, implication, philosophies, or other things central to human intelligence. No matter how many hidden middle layers you add to NN technology, that will not overcome the problem. The architecture needs a hybrid approach combining overt symbolic processing with NN / vector classifier engines. What I see is that the AI field will have to learn the hard way to correct course, and all the hype will be embarrassing to look back on once we change our ways.
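To make the hybrid idea concrete, here is a minimal sketch: a toy intent classifier stands in for the NN / vector side, and a hand-written rule table stands in for the overt symbolic side that checks the pattern match against explicit knowledge before any answer goes out. Every name and rule in it is illustrative, not anyone's actual design:

```python
# Minimal sketch of a hybrid pipeline: a statistical classifier proposes an
# interpretation, then a symbolic rule layer reconciles it with explicit
# world knowledge before a rote reply is allowed. The classifier is a stub;
# in practice it would be an NN / vector model. All names are illustrative.

from typing import Optional

# --- statistical side (stand-in for an NN / vector classifier) -------------
def classify_intent(utterance: str) -> str:
    """Pretend vector classifier: maps raw text to a coarse intent label."""
    if "knife" in utterance.lower():
        return "threat"
    return "small_talk"

# --- symbolic side: explicit rules over an explicit situation model --------
WORLD_RULES = {
    "threat":     "do_not_reply_by_rote",   # a rote 'yes, dear' is unsafe here
    "small_talk": "rote_reply_ok",
}

def respond(utterance: str) -> Optional[str]:
    intent = classify_intent(utterance)   # pattern recognition
    policy = WORLD_RULES[intent]          # symbolic reconciliation with knowledge
    if policy == "rote_reply_ok":
        return "yes, dear"
    return None   # escalate: the surface pattern alone is not enough to act on

print(respond("Lovely weather today"))             # -> "yes, dear"
print(respond("I am standing here with a knife"))  # -> None (escalate)
```

The point is not the toy rules themselves but the division of labour: the statistical engine only proposes, and the symbolic layer decides.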
The name was picked by an AI that was hallucinating after six martinis and some California pot. Its first choice had been 'Zembledorsky's Automatic Computing Machine Corporation International' but the name wouldn't fit on the side of a truck so they nixed that and demanded a name that was sexy in an AI-generated porn sort of way.
My Jony Ive-designed wireless toilet is the envy of all my friends who poop. It is stylish, fits my butt perfectly, and allows me to play video games for hours on end without leaving the bathroom. In fact, I think I'll live in here.
Truly, Jony is the Michelangelo of tech gear design.
I cannot help but wonder whether this is actually a coverup for other things at Dish subsidiary Boost. Six weeks ago I had a run-in with Boost Mobile, as they were demanding customers transfer to a new 'upgraded' system of theirs. The problem is, the new system was terribly buggy and non-functional. It obviously had NOT been tested, as it presented obvious and visible errors. And it would not let one transfer one's current phone number to the 'upgraded' system. The website had bugs and flaws; its JavaScript had visible coding errors. I engaged with customer support about my problems, and they were unable to fix the issues, so in the end I went to another provider. Throughout, I had a constant sense of incompetence at managerial levels. Now, if things were that badly broken, a subsequent claim of being hacked could cover a lot of guilt.
I could smell this coming under the previous CEO. I was approached by an agency for a contract position and interviewed with a team. I detected signs of stupid things in the project and its clearly offshored H-1B team, and backed away. I later wondered whether the same signs would exist in the company's other projects. Now I am more sure they did.
Let me illustrate exactly how ATS software is flawed and discriminatory.
Suppose an HR department uses externally supplied software (some service providing it) that rejects an experienced doctor because he has had too many patients. Or a lawyer because he's won too many cases.
Sounds absurd? Well, what has been happening is that companies who put up job solicitations for contractors REJECT experienced contractors for having too many projects on their resume. The flawed reasoning used by the ATS systems is that if someone has had too many jobs, they are automatically classified as a job-jumper and unreliable. And so the applicant gets rejected, no matter how qualified or how good a fit they are. The problem comes about because ATS software companies wrongly apply criteria suited to judging salaried employees, using them instead to judge contract technologists. The HR department receives data with the applicant scored low because of the wrong criteria, and so a needed, qualified candidate is rejected.
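To show concretely how that misapplied criterion plays out, here is a minimal Python sketch. The field names, threshold, and penalty are my own inventions for illustration, not any actual vendor's scoring logic:

```python
# Toy illustration of the flawed "job-jumper" heuristic described above.
# All field names and thresholds are invented; no real ATS logic is quoted.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    engagements: int          # distinct jobs/projects on the resume
    years_experience: float
    is_contractor: bool       # contractors naturally rack up many engagements

def naive_ats_score(a: Applicant) -> float:
    """Salaried-employee logic applied to everyone: many jobs == job-jumper."""
    score = min(a.years_experience / 20.0, 1.0)   # reward experience, capped
    if a.engagements > 8:                         # arbitrary "too many jobs" cutoff
        score -= 0.6                              # heavy job-jumper penalty
    return max(score, 0.0)

def fairer_ats_score(a: Applicant) -> float:
    """Same logic, except the penalty is simply not applied to contract roles."""
    score = min(a.years_experience / 20.0, 1.0)
    if not a.is_contractor and a.engagements > 8:
        score -= 0.6
    return max(score, 0.0)

veteran = Applicant("20-year contractor", engagements=35,
                    years_experience=20, is_contractor=True)

print(naive_ats_score(veteran))   # 0.4 -> filtered out as a "job-jumper"
print(fairer_ats_score(veteran))  # 1.0 -> passes, as it should
```

One conditional is all it takes to throw away the most experienced candidate in the pile.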
My personal experience with this has been incredibly frustrating, and it shows the need for legislative action. I've actually had several cases where I was provably the only one in the industry who matched certain criteria (because I was the one, the originator, who created the knowledge area they were looking for, and there are no other experts on it yet!) but I was rejected on the grounds of too much experience (too many jobs). Frustrating.
There are certain ATS software suppliers whose product is garbage but they have strong sales departments. These companies need to be reined in, legislatively.
There is a fundamental flaw in GPT-3 (one which Google has always had).
It is that languages provide base word sets, but these are not culture- or sub-culture-differentiated in AI training. In other words, within English, for example, there are many users in distinct cultures, each having their own specialized meanings and even separate words. The AI language engine must have a means of maintaining separate dictionaries while joining them when it is suitable to do so. For example, there should be a general-population dictionary but then also maybe a doctor's, medical, dictionary, and a lawyer's, and a chef's, and a gangster's, etc. GPT-3 can't do that. Therefore there IS a need for someone(s) to step up and build augmented, trained vocabularies. However there is no need for training for words like 'shiznit', potrzebie, or sploodge, though they might be used in banking and finance, in my nightmares.
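As a rough sketch of what 'separate but joinable dictionaries' could look like, here is a toy example; the domains, words, and senses are invented for illustration, and GPT-3 itself offers no such mechanism:

```python
# Toy sketch of the "separate but joinable dictionaries" idea: one general
# vocabulary plus per-subculture overlays, consulted in priority order when
# the context calls for it. Word senses here are invented examples.

GENERAL = {"bar":  "a place that serves drinks"}
LEGAL   = {"bar":  "the legal profession; admission to practice"}
MEDICAL = {"stat": "immediately (clinical usage)"}

DOMAINS = {"general": GENERAL, "legal": LEGAL, "medical": MEDICAL}

def lookup(word: str, context_domains: list[str]) -> str:
    """Consult domain dictionaries in priority order, then fall back to general."""
    for domain in context_domains + ["general"]:
        sense = DOMAINS.get(domain, {}).get(word)
        if sense is not None:
            return sense
    return "<unknown>"

print(lookup("bar", ["legal"]))     # the lawyer's sense wins in a legal context
print(lookup("bar", []))            # plain general-population sense
print(lookup("stat", ["medical"]))  # subculture-only word still resolves
```

The join rule is the interesting part: which overlay wins is decided by the detected context, not baked into a single undifferentiated vocabulary.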
Dr Who has suffered from the fact that RTD does not understand the difference between science fiction and science fantasy. Science fiction is imaginative but tries to stay logical; it may extend science, but it never should really contradict it. Science fantasy on the other hand keeps the appearance, language and trappings of science but throws logic away and replaces it with inconsistent make-believe. It is a shoddy second cousin to real science fiction.
The Doctor on a rooftop, hit with a massive lightning strike? No problem, not a scorch. Why wasn't he vaporized? You see, he's an alien, that explains it away. The Doctor frozen cryogenically but he doesn't shatter? No problem. He's alien, his molecules don't have to follow the laws of physics. A human transformed into an Ood through his daily beverage? No genetic problem there! The Doctor's hand gets cut off, and a new one just pops back out in a twinkling! Well, we don't have to be logical here, we're producing for 12-year-olds, it seems. Nothing wrong with cringe-worthy pseudoscience! If something's impossible, let's resort to alien technology, or as we sometimes call it, 'magic'. But certainly not science. Fantasy, not SF.
In the RTD formula, glossy special effects too are a panacea that trumps logic. Dug yourself into a plot hole? No problem, sonic screwdriver as deus ex machina.
Goodbye, RTD. Perhaps the viewers can stop cringing now.