>employees who rate this sort of language as insightful are more likely to struggle with analytical thinking
I reckon they rate this sort of language as insightful precisely because they struggle with analytical thinking.
Supposing we can hit an incoming asteroid 10 years before it impacts Earth (which is already rather optimistic), that change in speed accumulates to a deflection of a few kilometres. It's something, but nowhere near enough. Of course, this was a demonstrator, but actually deflecting an incoming asteroid is going to need a few orders of magnitude more punch.
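Back-of-envelope, assuming we're talking about DART and using rounded public figures (so treat every number as rough):

```python
# Rough DART/Didymos figures; beta is the momentum enhancement from
# ejecta, estimated somewhere around 2-5. All values approximate.
m_craft  = 580.0        # kg, spacecraft mass
v_impact = 6.1e3        # m/s, impact speed
beta     = 3.0          # momentum enhancement factor (assumed)
M_system = 5.3e11       # kg, Didymos system mass

dv = beta * m_craft * v_impact / M_system   # heliocentric speed change
t  = 10 * 365.25 * 24 * 3600                # 10 years, in seconds

print(f"dv ~ {dv * 1000:.3f} mm/s")             # ~0.020 mm/s
print(f"deflection ~ {dv * t / 1000:.1f} km")   # ~6 km after 10 years
```

Moving the impact point by even one Earth radius (~6,400 km) needs roughly a thousand times that, hence the orders of magnitude.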
I was just about to post this. It's been a while, but if I picture in my mind the act of closing the hole on a 3.5" floppy, I get a feeling of "danger, data at risk". The opposite if I imagine opening it. I absolutely can't remember reading this anywhere, but it seems to have become a low-level instinct.
I don't think you've picked a good example. The feudal economy regularly started projects with completion dates 100+ years in the future, often managed to see them through, and built them to material standards good enough to last for centuries if nobody knocks them down. Good luck doing that under capitalism.
Of course, the feudal economy also featured extreme inequality, and lots of those megaprojects were vanity, political or military affairs that didn't actually make things better for the general population, except very indirectly. Which is closer to today's capitalism than I'd feel comfortable admitting.
On that time scale, media degradation is unlikely to be the thing that gets your data. If you're serious about storing data that long, degradation is the least of your concerns, because you'll have multiple backups anyway, periodically checked against each other, with any errors that appear getting repaired. And even setting that up won't be the main problem.
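The checking part is the easy bit - a minimal sketch, with hypothetical paths and an assumed three replicas:

```python
# Hash every replica, majority-vote, rewrite the odd one out.
import hashlib
from collections import Counter
from pathlib import Path

REPLICAS = [Path("/mnt/a/archive.bin"),   # hypothetical locations
            Path("/mnt/b/archive.bin"),
            Path("/mnt/c/archive.bin")]

def digest(p: Path) -> str:
    return hashlib.sha256(p.read_bytes()).hexdigest()

def scrub(replicas):
    hashes = {p: digest(p) for p in replicas}
    majority, votes = Counter(hashes.values()).most_common(1)[0]
    if votes == len(replicas):
        return                            # all copies agree
    good = next(p for p, h in hashes.items() if h == majority)
    for p, h in hashes.items():
        if h != majority:
            p.write_bytes(good.read_bytes())   # repair from a good copy

scrub(REPLICAS)
```

The hard bit is everything around it: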
You'd need to keep your backups in multiple nations, have some kind of organization that can transcode them as data formats change, shift them around as nations destabilize over history, have nearly-unbreakable financial support for it... basically, you need a religion.
>The fact that some billionaires float the idea of orbital datacenters to the public gives us a good impression of how stupid they think the public is.
I wouldn't mind it too much if it didn't work - for them, at least. Hyping idiocy wins investment funds and pumps stocks, and it's easy to cash out by the time reality hits. I could better tolerate being surrounded by idiots if I could at least rely on those idiots not managing my pension fund.
>He said it was unreasonable to focus on "how much energy it takes to train an AI model, relative to how much it costs a human to do one inference query. It takes like 20 years of life and all of the food you eat during that time before you get smart," he said. "And not only that, it took the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you."
So, he's comparing the energy cost of running ONE inference query to the energy cost of literally the entire history of life itself?
Wow.
I... have no words.
If someone attempted this in a serious debate, I would consider it equivalent to conceding the point to the other guy.
Look, there's a reason passwords endure. That's always the case when a system endures for decades despite people decrying its faults and offering alternatives (e.g. IPv4). It's never because people are stupid; there are always good reasons. Maybe not the reasons you like, but good reasons.
In the case of passwords, it's because they are easily understood as a concept, can be easily changed if compromised, can easily work entirely offline, and don't have any single point of failure except for the user's brain. There is no other system that has the same properties. 2FA locks you out if your phone has any hiccup. Biometrics - ah, the credentials that you literally leave everywhere you go and can never change, great idea.
I'm not saying that passwords are the one true system and everything else is bad, far from it, but let's not be naive here. There are good reasons for passwords, and pretending there aren't won't help anyone.
This is all-around horrible.
First of all, hazing is abuse. Self-perpetuating abuse, as hazers need newcomers to become hazers too (lest they be forced to admit to themselves that they're abusers). It's not fun, it doesn't create bonding, it's just harmful. If something like that happened to me, I'd quit the job instantly, and might even consider legal action.
Secondly, the inmates are human beings, not the monkeys in your personal circus. Spend years basically telling them to their faces that you don't want them in civilized society, and then don't act so surprised when they get out and immediately turn back to crime.
>It actually just means "follows a set of defined rules"
I don't think I've ever heard that definition for "deterministic". Usually, in computer science at least, it's "given a system state, there is only one possible next system state".
That's technically true for LLMs only because RNGs are not really random, but when discussing determinism in computer science we usually pretend RNGs are random (otherwise, everything is deterministic and, while technically correct, that's not very useful).
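To make that concrete - a toy sketch, nothing to do with any real model:

```python
import random

# A toy "LLM sampler": pick the next token from a fixed distribution.
VOCAB   = ["yes", "no", "maybe"]
WEIGHTS = [0.5, 0.3, 0.2]

def sample_next_token(seed):
    rng = random.Random(seed)             # pseudo-random generator
    return rng.choices(VOCAB, WEIGHTS)[0]

# Same seed, same state -> same next token, every single time.
print(sample_next_token(42) == sample_next_token(42))   # True

# Pretend the RNG is genuinely random (unseeded) and the output is
# no longer a function of the visible system state.
print(random.choices(VOCAB, WEIGHTS)[0])                # varies per run
```

Seeded, the whole pipeline has exactly one possible next state - deterministic in the technically-correct, not-very-useful sense.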
Like a lot of these LLM projects, the LLM is good at almost, but not quite, solving the problem. Which means the hype merchants have an easy game of saying "look, it's nearly there, just give me another billion and we'll do it".
Except that it doesn't work that way. It's not nearly there. Not at all. Another billion, or ten or a hundred, won't cover that gap. And, in most fields, you need to solve the problem before you get ROI. Not "almost, but not quite" solve it.
It isn't about personality, it's about prejudice. People have been hiring and promoting based on looks since forever. The model is trained on real-world data, and it has embedded the same prejudice, so it will give the same results.
This is an excuse to keep using poor hiring practices while hiding behind the computer. They are trying to convince us that LLMs work off some kind of pure logic, just because they run on computers. They don't. They have the same flaws as humans, because they were trained on human output - and a whole batch of additional flaws on top of those (like hallucinations).
Okay, but Switzerland is also insanely expensive. I've little first-hand experience beyond a holiday there a couple of years ago, but a sandwich from a street-food van cost 14€, a short bus ride 5€, a decent dinner out no less than 50€ per person and more likely 80€... you get the picture.
Ooh, right. ORMs are a lot like LLMs, in that they look like they'll solve your problem - but, actually, that's only true for trivial problems. Every time I've tried to use an ORM, it either turned out to just be replacing a bunch of one-line SELECTs, or I ended up wasting more time wrangling the ORM than it would've taken me to hand-craft the correct SQL.
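To illustrate the trivial-vs-nontrivial line - a minimal sketch with a made-up schema, using SQLAlchemy purely as the example ORM:

```python
from sqlalchemy import create_engine, select, text
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

engine = create_engine("sqlite://")        # throwaway in-memory DB
Base.metadata.create_all(engine)

with Session(engine) as session:
    # The trivial case: the ORM is a one-line SELECT with extra steps.
    users = session.scalars(select(User).where(User.name == "alice")).all()

    # Past trivial (window functions, vendor-specific SQL, odd joins),
    # you end up hand-writing the SQL anyway and feeding it through.
    rows = session.execute(text(
        "SELECT name, COUNT(*) OVER () AS total FROM users"
    )).all()
```

In my experience, everything I actually needed the database for lived in that second category.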
>Though you could argue that a lot of the worst excesses are reactions to holes we've already dug ourselves. React as a response to the crude mashup of JavaScript and HTML, Containers as a reaction to dependency hell of Python-like environments, Agile as a response to unmanageable requirements and estimation demands etc. etc.
Proof, if any was needed, that papering over fundamental problems doesn't fix them.
Maybe, just maybe, this could have something to do with the fact that regular search has been turning more and more into shit lately?
Finding the thing I want on Amazon search is now damn near impossible. Items that match the query are scattered among dozens of items that have little to do with what I want, but are "sponsored" or "most purchased" or whatever. When I search with the full exact name of the item I want, I have to actually scroll before I find it. Google isn't much better.
Given all that, is it surprising that I might opt to just tell ChatGPT to find a damn espresso machine, so I don't have to wade through a hundred American-style machines, coffee pots, bags of roasted beans, mugs, freakin' coffee-themed pillows, and all the other insanity that pops up when I type "espresso machine" into Amazon?
You make a good point, but "freemium" models only work where the marginal per-user cost of the service is extremely low - mostly software that's costly to develop but really cheap to run. If each user costs you a penny per month in hosting, it's fine if almost none of them pay.
LLM users very much do not cost pennies per month. It's more like dollars, or even tens of dollars. Those inference runs are really expensive. There is no way that model works for LLMs.
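Back-of-envelope, with numbers I've made up but which are in the right ballpark:

```python
free_users   = 1_000_000
conversion   = 0.02          # 2% of users pay (assumed)
subscription = 10.0          # $/month per paying user (assumed)

revenue = free_users * conversion * subscription   # $200,000/mo

for cost_per_user in (0.01, 5.0):   # classic SaaS vs. LLM inference
    total_cost = free_users * cost_per_user
    print(f"${cost_per_user}/user/mo -> margin "
          f"${revenue - total_cost:,.0f}/mo")
# $0.01/user: ~$190,000/mo profit. $5/user: ~$4,800,000/mo loss.
```

The freemium sums only close when the free riders round to zero cost.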
Detecting radiation is trivially easy; even back then, Geiger counters weren't terribly hard to get. A cover-up, once public interest is raised, would be exceedingly difficult to maintain for any length of time - and covering up anything going on today would be impossible outside a totalitarian state, when the gizmo anyone needs to blow the cover is 50 bucks on Amazon. It seems far more likely, though admittedly less interesting, that there's no cover-up to begin with.
>Don't make this a Trump thing, not everything is about that clown.
Nobody (who matters) is "making this a Trump thing". If this were seen as a Trump thing, the EU would just grit its collective teeth and wait until 2028; it's not like anyone can build an independent digital infrastructure in 2 years anyway. No, this is very much a USA thing. The reasoning going around is that we can't be certain there won't be another lunatic, if not right after Trump, then two, three or ten Presidents down the line.
>Instead, the team developed a trio of domain-specific models based on well-established machine-learning algorithms.
In other words, this isn't the tech that launched the current AI hype. This is old and trusted tech; similar methods have been used successfully for decades. Calling it "AI" is probably a trick to grab the spotlight and the money - and I say, good for them.
I get the spirit, and I don't think it's a catastrophic idea. However...
This fails to address the root of the problem. Like another poster said, if someone is dumping toxic sludge in a lake, the fix isn't forbidding kids from swimming in that lake. The fix is forbidding people from dumping toxic sludge in lakes.
In the case of social media, this has the added complexity of having freedom of speech mixed in.
I would rather work on the economic incentives that lead to toxicity. Regulate advertising, especially ad brokerage, and ban tracking altogether. Make it so that user attention is difficult to monetise.
>If a country's economic model is reliant upon a growing population, then it's a Ponzi scheme really, isn't it?
The problem isn't the absolute number of people, but the ratio of workers to retirees. We don't have to import immigrants because we need more people; we have to import them because we need to raise that ratio. You are correct that a good economic model should be able to work at steady state; unfortunately, we are very much not at steady state, as that ratio keeps falling.
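Toy numbers, purely illustrative:

```python
# Per 100 people (the rest are children); the split is what matters,
# not the headcount.
workers, retirees = 60, 20
print(workers / retirees)    # 3.0 workers per retiree

# Same 100 people after the population ages a bit.
workers, retirees = 50, 30
print(workers / retirees)    # ~1.7 - each pension now spread
                             # over far fewer pay packets
```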
>The only ways for rich countries to avoid this are to either accept significant immigration, or to prevent people retiring.
Ah, if only we had hordes of working-age people who would literally risk death to come and work for us for peanuts... oh, wait.
I am constantly amazed and dismayed by how the West has been shooting itself in the foot on immigration for decades now, and seems to be aiming higher and higher with every election. I wonder if we'll stop before we get to the heart.
Sorry for the title, but that's about it. The physical location of servers is not really relevant to data "sovereignty".
Is there at least one person who lives in the USA and has (or can easily get) admin credentials for this so-called Euro cloud? Yes? Then it's not secure. It really is that simple.
"Hey, we've made this tool that can generate plain language text based on a prompt. Unfortunately, its output is intrinsically non-deterministic, so it should never be used for applications where you need to be able to rely on its answers. Also, it cannot be properly secured, so it can't be used in applications where a user may be hostile. For the same reasons, it can produce non-kid-friendly results, although, frankly, that goes for the Internet in general. We are aware that this means its usefulness is limited. It's still pretty cool for party tricks, though, and there might be some interesting applications in non-critical sectors."
There, I've done better.
LLMs don't work that way. If you take a prompt like "and then the baby told me that two and two is twentytwo, and we all laughed out loud, and the poor thing got upset..." what exactly is a "parse calculations" step going to do? Even identifying that there's a calculation in there is non-trivial, and assuming that you could do it, sending it to a calculator would be the wrong action anyway. There's not even a question in there. In order to figure out when you have to send something to a calculator, you need to decipher the meaning of the prompt - but we have no way to do that except for the LLM itself.
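If you doubt it, try writing the pre-step. A naive sketch in Python, just to show where it falls over:

```python
import re

def route(prompt: str) -> str:
    # Digits and an operator? "two and two is twentytwo" has neither,
    # so a digit-based detector never even fires.
    if re.search(r"\d+\s*[-+*/]\s*\d+", prompt):
        return "calculator"
    # Add word-numbers and it promptly misfires: the prompt is an
    # anecdote, not a question, and the right response isn't "4".
    if re.search(r"\b(one|two|three|four)\b.*\b(plus|and|times)\b", prompt):
        return "calculator"
    return "llm"

print(route("and then the baby told me that two and two is twentytwo, "
            "and we all laughed out loud"))    # -> "calculator", wrongly
```

You can keep piling on rules, but deciding what the text means is the whole job, and the only tool on offer for that is the LLM itself.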
And you'll find the same problem if you try to sanitize input in any fashion. How do you detect a malicious input? Why, you pass it to an LLM... and round and round we go.
Because of all of that, making an LLM "safe" is fundamentally impossible; band-aids are all they can apply. The vendors are not idiots, but they are con men.
So, instead of doing drudge work, the same number of workers will spend the same amount of time or more reviewing AI-generated drudge work.
But... if that's the outcome, why the hell would an organization deploy AI in those roles? What's the ROI, where's the benefit?
Besides satisfying investors' AI-FOMO, I mean. The capitalism algorithm is meant to work with rational actors...
I applaud the comments suggesting individual actions. However, I suspect the privacy problem is a lot like climate change: individual action has very limited impact on the overall issue, and all the good solutions are political. You can bet the bad guys aren't acting as individuals; they're lobbying, funding parties and bribing. By all means set up a firewall - I'm certainly doing it - but publicly supporting strong privacy regulation is what might actually save us. Remember, the idea of the personal carbon footprint was invented by an oil company.
I make industrial automation software. These programs communicate with things that do stuff in the real world, and usually can't wait on the software except in limited, predefined moments. Their own programming also tends to be, ah, not terribly resilient when other bits have hiccups. Because of this, usually, the software I make is not supposed to be stopped except via a well-defined procedure, to be performed only in well-defined circumstances. Otherwise, all kinds of interesting fuckups can happen. If it happens, I'll deal with it, but I do my best to minimize such cases; for example, you can't close the program except via task manager/services or with a password.
Sometimes, I get a user who thinks himself smart because he knows that all kinds of IT problems can go away with a reboot. This kind of user reboots the PC at the first problem, which just compounds the problem instead of fixing it, and then calls me.
And then there was that one user who did so repeatedly, even after being told never to reboot without calling me. In one case, the program was showing him an alert stating that a sequence wasn't starting, and the SCADA next to it was telling him that a solenoid valve was stuck - and guess what his solution was? On the plus side, I swore at him a lot, and he finally learned not to reboot without calling me.