Re: Bollocks
Ironically, the LLM company most frequently guilty of anthropomorphizing its models is... Anthropic.
I'll take Things That Never Happened for $1000, Ken.
I suppose it's possible that with sufficient nudging and the right series of queries somebody managed to get an LLM to do something remotely resembling what is described here. But anybody with the slightest knowledge of what LLMs are and what they do knows this is complete bollocks, just like every similar claim has turned out to be.
This is a massive HIPAA violation. Blue Shield customers should sue it into oblivion.
And then move on to all the other US health care "providers".
I put "providers" in scare quotes, because the entire business model of health insurance is not to provide insurance, but to deny it.
In addition to all the other flaws mentioned upthread, this is entirely the wrong way to do automation. A much better approach would be to provide APIs to the underlying functionality that can be directly invoked by an automation agent - preferably an agent that does not purport to be AI - and ideally, upon which the human-centric UI is also built.
Driving the UI directly is the software equivalent of creating a self-driving car by taking an ordinary car and replacing the driver with a humanoid robot.
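To make that concrete, here's a minimal sketch of the API-first shape (all names and the "orders" domain are hypothetical, not anyone's real API): the business logic lives behind one function, and both the human UI and the automation agent are thin clients of it.

```python
# Minimal sketch of the API-first idea. All names are hypothetical.

def create_order(customer_id: str, items: list[str]) -> dict:
    """The core business logic: the single source of truth."""
    # ... validation, persistence, auditing, etc. ...
    return {"customer": customer_id, "items": items, "status": "created"}

def ui_submit_handler(form_data: dict) -> dict:
    """The human-centric UI is just one thin client of the API."""
    return create_order(form_data["customer"], form_data["items"])

def automation_agent(task: dict) -> dict:
    """An automation agent is another thin client: no screen-scraping,
    no simulated clicks, no guessing where the buttons moved to."""
    return create_order(task["customer"], task["items"])
```

Either client can change or break without affecting the other, which is the whole point.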
That's because (a) power (together with cooling) is the limiting factor in a datacenter, and (b) how much compute will be available is not readily predictable.
The computing capacity is neither a fixed quantity nor even a readily forecastable one. If you start building a datacenter today, you don't really know which chips will be available when it finally comes online. And even once it does, the CPUs, GPUs, and whatever other PUs are in there are going to evolve constantly. They will also change depending on what mix of regular compute, AI compute, storage, etc. the datacenter is running, and that mix will itself evolve over time.
So for all those reasons "compute capacity" is not a stable enough number to be useful.
"Alphabet attributed the growth to enthusiasm for Google Search and YouTube ads" must be one of the most marketing sentences ever written. If Mr. Claburn managed to write that while maintaining a straight face, he is a stronger man than I.
However, I am genuinely surprised that Alphabet didn't attribute the growth to enthusiasm for mediocre LLMs, aka slop machines.
My recollection (possibly faulty, it was a long time ago and a lot of beer has flowed under the bridge since then) was that a couple of years before all this kicked off, Ellison confidently predicted that the ERP and CRM space was inevitably going to consolidate, and that when the dust settled, Oracle would be one of the big winners.
When the market steadfastly refused to go along with Ellison's opinion, he set about making it happen himself with a series of acquisitions of decidedly questionable business sense. Oracle's actions triggered SAP to follow suit, thus "proving" Ellison's foresight.
BTW, in my experience one of the unalterable laws of software is that any project, product, or portfolio named Fusion is a hodgepodge of incompatible technologies that get along like a sack of cats and that Marketing is trying to throw a blanket over.
The most recent price increase for Prime combined with the lack of good, findable content caused me to drop it, and I'm far from alone.
The fact that they keep larding the bundle with additional features that have no value to me but presumably are factored into the price is not helping.
If it were possible to buy just the delivery service at a reasonable price I might go for that because I still occasionally order stuff I can't find elsewhere.
...but exactly what problem are "smart homes" solving?
In my <number> years on this planet, I cannot remember a single fixture or fitting in my house of which I have ever thought, "if only I could turn on/turn off/adjust $X without getting up from the sofa."
The only possible exception to that is the "oh, shit, did I close the garage door?" moment when leaving on a long trip, and even that only requires remote control of that one piece of equipment, not synchronized color-changing light bulbs.
...how criteria like being too expensive, not a "digital native", too old to be flexible and understand emerging technologies, etc. never apply to C-level positions.
I'll bet there are tons of people in India who could do their jobs just as well for far less money. They probably bought their MBAs from the same American universities too.
In fairness, the US constitution was originally formulated to cement the power of land-holding white men and entrench slavery.
Any good that has come from it is a result of very hard work, an extremely bloody civil war, and decades of civil unrest to amend it or attempt to get it to apply equally. And even that is a constant battle.
LLMs are the party trick of the AI world. The fact that the trick got bigger and more elaborate doesn't change that. The purpose of an LLM is quite literally to *create the illusion of a thoughtful answer* by brute force.
Believing that an LLM is "reasoning" is like believing that a stage magician is doing real magic... even after he shows you how the trick was done.
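For anyone who wants to see the skeleton of the trick, here's a toy version using nothing but the standard library: a word-level Markov chain that emits whatever words tended to follow one another in its tiny "training" corpus. Scale it up by many orders of magnitude, add attention, and you get fluent-sounding output; understanding never enters into it.

```python
import random
from collections import defaultdict

# Toy version of the trick: learn which word follows which,
# then emit statistically plausible successors. No meaning involved.
corpus = "the cat sat on the mat the cat ate the rat".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    candidates = follows[word]
    if not candidates:        # dead end: the corpus never saw a successor
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))       # surface-plausible, meaning-free
```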
A grammar school friend of mine had a TRS-80 and the rest of us were jealous, obviously. The main thing I remember us doing with it was playing some 'conquer the galaxy' game, possibly based on Star Trek, but this was all decades ago.
Mind you, his family lived in Solihull so obviously they were much better off than mine!
Bollocks.
Defending offensive behavior as "neurodivergent traits" is offensive, demeaning, and damaging to people who are neurodivergent, especially when the people being excused/excusing themselves very rarely have any diagnosis to support their claims. It's become the go-to excuse for assholes and dinosaurs defending bad behavior.
Maybe you should look more closely. In surveys of historians and political analysts, he is generally rated the worst or second worst (after Buchanan).
Others here have given a long list of his corruption, nepotism, bloviating, demonization of all opposition, cozying up to every flavor of Nationalist and the violent extreme right, etc. etc. so I won't repeat that here.
IBM could probably lay off as much as 50% of its employees without losing anything worthwhile. The company has a huge number of flunkies, goons, duct tapers, box tickers, and taskmasters.*
Unfortunately, IBM is, in large part, laying off the wrong 50%.
*Definitions here: https://a4kids.org/wp-content/uploads/2021/05/100.-5-Types-of-Bullshit-Job-Graeber.pdf
The ads seem to be inserted automatically rather than by a human editor choosing an appropriate slot or even matching the ad break timing originally designed for linear TV. As a result they routinely break scenes 5 to 10 seconds too early, even mid-sentence in some cases.
FWIW other ad-supported streamers are no better.
If they're going to monetize us, they could at least be less shoddy about it.
I have exactly the same positive experience. I bought a really cheap ($100-ish) small Lenovo Chromebook as an experiment (I had quit work, returned my work laptops, and needed something for away from home). I was so happy with it that I bought a second one (Acer) with a larger screen and more powerful processor. It's ideal for travel e.g. for finding local restaurants, checking opening times for attractions, and all sorts of online stuff. I also use it for writing in cafés without having to be paranoid about it getting swiped (there's no personal info on the local drive) or having coffee spilled on it.
Also, I never have to worry about updates or driver compatibility or any of that crap. It just works nearly all the time. And if anything does go badly wrong, a reset is quick and easy. Basically, it's the admin ease of a phone with the screen and keyboard and capability of a laptop.
And for the tech purists, it's Linux under the covers. It's kind of weird to see the Reg crowd defending Windows and Mac over a true Linux machine!
The only thing I don't use it for is running a development environment (although reportedly plenty of people do), as I need Windows for my test environment; a desktop makes sense for that.
I was relocated by IBM to Raleigh so that I could "have more contact with my colleagues". I was a product manager, and my dev team was mostly in Poughkeepsie, with other labs in Germany, India, and China. My product management colleagues were mostly in Austin and California. The colleagues, sellers, and customers I worked closely with were everywhere on the planet.
Even my immediate manager was in Pittsburgh, although he did come to Raleigh a couple of days a month. His manager was in New York.
At first I went into the office pretty regularly (although I always left around 2:30 to pick up my son from school and worked from home the rest of the day). By the end I was going in only on the days my manager was in town. And even then I would spend maybe two hours a day with him, and the rest in my windowless office with the door closed, working online or on conference calls.
The whole premise is BS.
If there is any bias - and I doubt that there is; I think the problem here is the definition of "bias" - it is, as many have said, in the training data.
The creators of LLMs try to train them on "high quality" data, e.g. Wikipedia not Conservapedia. And let's be honest, 99.9% of right-leaning content - far in excess of Sturgeon's Law - is crap. And much of it is intentionally crap, designed to mislead and undermine known scientific facts.
In today's America, things that are factual are dubbed "left" regardless of their content. Twenty years ago, Karl Rove referred to opponents of the Bush administration as "the reality-based community" - and he meant it derisively. Twenty years later, here we are...
Beer icon because I need one.
"a simple inspection of the source code would reveal this"
I very much doubt it would. One of the challenges of LLMs / generative AIs is that the way they arrive at their results is utterly opaque even to their creators. It's hard enough to spot subtle biases in entirely traditional scientific computational algorithms. Unless there is some absolutely obvious smoking gun Lean Left filter in the code, there could very easily be bias, accidental or intentional, that even an expert would find hard to identify.
If there is any bias - and I doubt that there is; I think the real problem is the definition of "bias" - it is, as you say, in the training data. The creators of LLMs try to train them on "high quality" data (Wikipedia, not Conservapedia), and let's be honest, 99.9% of right-leaning content is crap.
Acknowledging the reality of climate change, vaccine effectiveness, the history of slavery, police racism, etc. etc. etc. is not left-wing bias, even though these facts are supported by the left and denied by the right.
Just because the Right has taken a Lemming's Leap off the derp [sic] end, it does not mean that we have to pretend that factual neutrality still lies midway between the two main parties.
Anybody interested in more on this should google* "Overton window".
*Other brands of search engine are available.
...is LaH10 at 170 GPa with a critical temperature of 250K (roughly -23C).
Ambient pressure is approximately 100kPa.
So achieving a reduction in the required pressure of several orders of magnitude would be beyond astonishing.
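Putting rough numbers on "several orders of magnitude" (back-of-envelope, using the figures above):

```python
import math

required = 170e9   # LaH10: ~170 GPa, in pascals
ambient = 100e3    # ambient pressure: ~100 kPa, in pascals

ratio = required / ambient
print(f"{ratio:.2g}x")       # ~1.7e+06
print(math.log10(ratio))     # ~6.2 orders of magnitude
```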
It will be wonderful if this replicates, but it falls squarely into the "extraordinary claims/extraordinary proof" category of skepticism.
So your assertion is that the only two possibilities are "understanding of our physical world" or "random construct of words and letters"? No wonder you are confused.
The key to LLMs is that their output is a PROBABILISTIC output of words and letters, derived from digesting a massive corpus of human-generated content. They very literally have no conception of a physical world; only word likelihoods.
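For the curious, here's roughly what that probabilistic step looks like at the very end, with made-up scores standing in for a real model's per-token logits:

```python
import math, random

# Made-up scores for a handful of candidate next words; a real model
# scores every token in a vocabulary of tens of thousands.
scores = {"floor": 2.1, "sky": 0.3, "moon": -1.0, "idea": -2.5}

# Softmax: turn raw scores into a probability distribution.
total = sum(math.exp(s) for s in scores.values())
probs = {tok: math.exp(s) / total for tok, s in scores.items()}

# The "answer" is a weighted draw from that distribution: no physics,
# no world model, only relative word likelihoods.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```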