Re: What ever happened to the future where we pick items off of shelf ...
Amazon closed them when the fancy AI tech turned out to be Actually Indians watching lots of cameras.
> "Users do not want disconnected tools," OpenAI said
Clearly they have never heard of the Unix philosophy.
I realise I'm a non-standard user, but I want good tools that do their jobs well, not a Frankenstein Swiss Army knife that does everything badly.
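That philosophy in one line: small, single-purpose tools composed with pipes. A minimal sketch (the log file name and contents are hypothetical, just for illustration):

```shell
# Each tool does one job: grep filters, sort orders, uniq -c counts,
# sort -rn ranks by count. Composed, they summarise errors in a log.
printf 'ERROR disk\nINFO ok\nERROR disk\nERROR net\n' > app.log
grep ERROR app.log | sort | uniq -c | sort -rn
```

Swap any stage for a better tool and the rest of the pipeline neither knows nor cares, which is rather the point.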
> The world doesn't just want to build things. It demonstrably also wants to blow them up. OpenAI's long list of investors and cloud providers might now wonder if they are standing in the blast zone.
Not to worry, Jensen Huang supports war. He thinks we should have bombed Iran years ago and there's money to be made selling AI to the Iranians once the US bombs them back to the stone age.
> GNOME 50 itself still contains the XWayland X server, so you can still start and use X11 apps, the same as ever.
Sounds like backwards compatibility to me, and it's in just the second paragraph, so you didn't have to read far. There's also the later paragraph about old Nvidia cards.
Elon has already said Grok will do whatever the warmongers want.
And OpenAI signed a contract with the Pentagon on Friday.
I read that this whole debacle was sparked because someone from Palantir told someone at the Pentagon that they overheard an Anthropic staffer complain that Claude was used in the illegal kidnapping of the Venezuelan President. So why do Palantir want Anthropic out of the US government? Could it be because they have invested in OpenAI? Or maybe they just think Anthropic will get in the way of their whole "let's be evil" plan?
I'm a developer, I deploy code in 2026 and I very much do care where it is deployed. I need to ensure it will run on that architecture securely for a start.
Netlify or Vercel? Did you sneeze? Do whatever the LLM tells you? Are you a slave to OpenAI, Anthropic, etc or did LLM use rot your brain?
This article is bollocks if you know and care what you are doing.
> machine intelligence is a threat to the human species," the site explains. "In response to this threat we want to inflict damage on machine intelligence systems."
If a machine intelligence existed, sure, but LLMs are not intelligent.
> "Poisoning attacks compromise the cognitive integrity of the model," our source said.
No they don't. Cognition requires understanding and LLMs do not understand anything.
So no Browse button or drop-down list? What is it with the current crop of big tech devs that they always replace a working feature with a new, less capable and not fit for purpose version?
I'd have given this such a scathing code review that the incompetent snowflake responsible would need therapy!
“For whatever reason, because we had certain people in our company, we won't go into the names, who didn't want to build it and that building that database that drives it, well, already we're selling product and really doing a phenomenal job there,”
Is it just me or does Benioff sound like Trump? Maybe they both have the same brain disease?
Whatever, I wouldn't trust any product from this security nightmare of a company and its terrible CEO.
So good ethics would be to do no security research and let Meta continue to shit all over their users and crooks make off with their data?
Downloading one or two records would not have convinced Meta; they didn't even speak to the researchers until informed they were about to publish. The data was deleted afterwards - that is ethics.
> no such thing as an immutable general purpose operating system
I'd love to hear what he considers a general purpose OS. SteamOS, for example, might be designed for gaming but it is perfectly capable of school work, office work or running the lights for a concert.
How much more general purpose can an OS be?