Re: Ding Ding Ding !!!
Snail mail has no intrinsic error correction for packet loss. If a human user develops a process that mitigates the packet loss, the packet loss is still evidence of downtime.
> Airline pilots these days, they have GPS and collision avoidance radar and autopilots bringing them everywhere. Some of them don't know how to hand prop their engine and refuse to fly with their head out in the open air.
Funny you should mention pilots, because there's a lesson there that society should have learned.
When autopilot came in, the company bigwigs said "switch on the autopilot, then only do something if the autopilot can't". The pilots (and their unions) pushed back with "if we don't pilot the plane under normal conditions, how the hell are we going to be safe to pilot them in adverse conditions that the autopilot can't handle?!?"
So now pilots fly the plane even where an autopilot would do a perfectly good job, specifically to ensure that they have the piloting skill to take over if and when an autopilot can't handle it, and so that they're confident to say "oh fuck no!" and take the controls back when the autopilot is about to do something rather stupid.
Pilots were able to push back against this sort of short-sighted thinking, but computer programmers can't really do the same.
When I was at school, we weren't allowed to use calculators most of the time. Why? Because we needed to learn how to do mental arithmetic so we would know how to do it if the calculator broke, and so that even if we did use a calculator, we'd recognise when it was saying something ridiculous and stop to look for our mistake.
I was pretty stunned when studying Computer Science at university that classmates would keep pulling out calculators and I'd end up working out the correct answer before they'd even switched them on. And for a long time they didn't trust me on it and still checked.
So then along comes the notion of an "information age" and the idea that we don't need to remember anything, because we can always just google the information. This was actually believed in schools, so they were already training people to trust google unconditionally. Then along comes AI -- a tech designed to imitate human failings -- and suddenly everyone's trusting it, and because it's on the front page of google, it can do immense harm.
>sigh<
Yes, but the plan hasn't been thought out. Trump won't always be in the White House, and conflict-of-interest issues hang around, and even if the president can pardon Musk for them (not sure whether he can or not, this is a very theoretical "even if"), a criminal pardon won't affect civil actions. I can totally imagine the US government nationalising SpaceX because they've profited from things that should absolutely never have happened.
Ah, thanks for that.
However, I'd imagine that the researchers know this anyway, and if the comparison was black-on-white vs white-on-black as the article's illustration implies, that's a matter of zero occlusion vs 100% occlusion, so it should be sufficiently comparable perceptually to max and min brightness on an OLED to carry over.
And this is an initial study where they're experimenting to see if there is a statistically significant effect that would be worth investigating further. It's not bad science, just bad journalism, because it absolutely does suggest that it's worth doing the research, but until the research has been done, it's not "news".
But what we see is the relative difference between colours -- one contrasting against the other. Why should white text on a black background be harder than black text on a white background?
This research really starts pointing to some very deep things about human perception.
Do you call Eastenders, Coronation Street etc "TV novels"...? No, you call them soap operas. I would call most comics "soapy paper", personally.
Something like Watchmen might deserve the label "graphic novel", but it still started life as a serial before the trade paperback was published, so I still say it doesn't count.
Shocker! They know that. They are talking about the turning up of brightness counteracting the saving of switching the bright pixels off, and that means that they have to be talking about OLEDs, precisely because LCDs are only switched on to block light, and so dark colours can't save energy by definition.
And the reason that C++ is far from perfect is that it was designed to be backwards compatible with C.
This is where we hit a problem: you can't move programming on without getting programmers on-side, but having programmers on-side means not making the improvements that are badly needed.
C was written for single-processor architectures. I know you can multithread in C, but I've never actually done it, and I imagine it's a bit hacky. You certainly can't write for a SIMD architecture in plain C, and I suspect it's C and its successors that have stopped the move away from x86 and ARM to SIMD processors for most tasks. I can imagine a world where conventional CPU development stopped, CPUs only had to concern themselves with GUIs, and all the big stuff was data processing....
Yes, I heard about the classic version through Louis Rossman's channel on YouTube. He was saying it was exactly the same price as Office 365 was before the addition of the "free" AI services.
The question is whether Microsoft have dropped the price because they want to claim it is genuinely sub-par, or if it's just a matter that the UK pricing was recalculated after changes in the exchange rate...
This kind of harks back to the overreliance on finger-prints that led to some pretty serious legal bothers a while back.
Fingerprinting was never as accurate as claimed in the early days, and its apparent accuracy was a result of high rates of recidivism: if your fingerprint looked like a match in the database, it probably was a match, because that person was likely to commit a repeat offence.
But then we started keeping fingerprints of people who were innocent on file, including witnesses who only gave their fingerprints to rule them out if their fingerprint came up in the current crime investigation.
Then when you looked in the national database for a match, there was a much greater chance that the fingerprint that looked similar belonged not to a career criminal, but to someone who was once an innocent bystander, or worse: the victim of a crime. Imagine that: you've been beaten up by a drug dealer for simply being in the wrong place at the wrong time, and just as you're getting over that, the police start questioning you about a burglary because there was a print found that looked sort of similar to yours....
Entitlement?
That's hardly fair.
He's just thinking about different priorities from the Raspberry folks, that's all.
Raspberry Pis have been cheap because they're looking to serve the widest base market, and that's the SD card crowd.
However, many hardcore Pi users are now the sort of people who add NVME. An NVME adapter is always an added aftermarket cost, and fitting it in the base model would be cheaper as it would leverage economies of scale. But it would make it more expensive for the core market, even if only marginally. Raspberry don't want to do this.
They have three audience groups now:
The kids' toy crowd
The maker crowd
The not-really-maker crowd who want to have a device that they can do interesting stuff on without fighting with either hardware or software.
I suspect that last one isn't particularly profitable....
But this is kind of similar to the massive off-shoring of entry-level jobs a decade or two ago (I was working in IT in the UK).
The talk then was that only the "low value" jobs were being off-shored, and all the high value, highly skilled work was still here, managing and supervising.
But didn't everyone in a high-value job start in an entry-level job...? Where does the next generation of high-value workers come from if we off-shore the entry-level jobs that are a vital part of staff development? In the case of off-shoring, the IT and service industries are getting more and more high-value jobs offshored, because now there are more offshore staff looking to climb the career ladder than onshore. But with AI, you can't just expect GitHub Copilot to get so good at its job that it can review the next generation of AIs to get them through their low-experience years.
People need to be really good programmers to perform code reviews, and as programming gets shifted to AI, how are we going to be able to get people doing enough programming to learn to do reviews?
Positive discrimination is simply correcting for normal human bias. People trust people who "look right" for the job they're hiring for. People who "look right" are the people most similar to the people you've seen doing the job before. The people you've seen doing the job before are probably white males. This automatically biases people to hire white males.
There are two philosophies in positive discrimination.
The first is just about visibility -- the more people who aren't white men that do a job, the more people who aren't white men will be hired to do the same job in the future. Over time the exposure to people who aren't white men will undermine the unconscious bias that white men are better at the job. This philosophy says that it's OK to sacrifice short term productivity in order to guarantee better productivity in the long term.
The second is about pure statistics -- we know that people are x% more likely to hire a white male than anyone else, so we correct for that by making it harder to hire a white man and easier to hire someone who's not a white man. This not only has the long-term effect of reducing the bias towards white men, but it actually means you're more likely to get the most talented individual now, so there are both long-term and short-term productivity gains.
> If you're looking at what somebody is called, you're doing it wrong (and in some places there are laws about that sort of thing).
Yes, and the unspoken problem is that the AI has found a way to intuit the bias from hidden patterns, so it can still be racist/xenophobic/misogynist/generally bigoted by spotting patterns that humans wouldn't. The people pulling the levers the AI is working on are in a panic that it's acting like a bigot, so they're actively trying to stop it preferentially treating the people who traditionally always get preferential treatment, and the only way they can do that is by getting the AI to be biased against them. And if they're doing that, they absolutely cannot claim to be acting fairly and objectively. Equal opportunities has always meant acting preferentially towards the traditionally disadvantaged demographics, but that usually involves quotas and other such explicit policies -- this is just opaque beyond belief.
Most likely you have never heard the phrase "computer says no".
Here, the Hammerbarn are trying to identify people who have been banned from their shops. Even if it's not an issue now, there will absolutely come a time when the security are confident enough in it that they'll assume it's correct by default.
Now, note that I said "Even if it's not an issue now", because... well why did the authorities start investigating? There is nothing in the story to tell us why, but presumably someone complained, but simply hadn't tweeted about it or spoken to local papers.
We can only speculate about what happened, but I personally think it's a reasonable assumption that a customer was stopped by a security guard and would have been expected to provide proof that he wasn't the person the computer thought he was. i.e. he would have had to provide sensitive personal information to even just go in and buy a roll of masking tape, which doesn't really seem particularly fair, because no-one else does. If I have to prove I'm not someone who bears a passing resemblance to me, that's basically presumed guilty until proven innocent.
People may talk harshly about public bodies sticking their noses in where it's not necessary, but public bodies generally don't do anything until and unless asked to. We don't know what the complaint was, and I think my speculation probably describes a likely scenario that is definitely far from the worst case scenario. The fact that the authorities didn't give out details shows that they consider the complainant's data protection rights to be important -- more so than the retailer does!
But on a tangent... we are, as a culture, too quick to judge people who make decisions we do not immediately agree with. As a result, we're developing a situation where people are more and more having to justify their stance, and while that sounds good in theory, when we start having to publish the public complaints that started it off, we're only going to start discouraging the complaints in the first place.
Which is all well and good, but it's not about whether capturing the personal data (your image) using a security camera is reasonable, but rather the processing of that data. In the UK, we have the GDPR that talks about processing of personal data, and I'm not personally sure where this sits in regards to that.
If a security camera captures someone committing a crime, processing the images to identify the person is fair game, but is analysing everyone to verify they're not a known trouble-maker legitimate here...? This is stuff testing the borders of laws....
Ah no, but that's where you're wrong. It will outgrow the 1-month stage, just not in the way the human does. It won't get better at doing its job, but will get better at looking like it's doing its job. The errors will get harder and harder to spot, so the improvements in AI will lead to us simply having to do more work to find the errors that no human could ever have made.
As a human, looking for human errors is easy as we know how humans think, what with being one ourselves. We cannot understand AI, so we cannot anticipate its errors by reflecting on its thought processes.
Then you're not the target market. They're clearly looking to get a handful of Alien fans excited about completing their VHS collection. Not to mention all the relatively cheap marketing: how much would they have had to pay for as much advertising space as the column inches this stunt has got them...?
Sounds like another case of parents stepping in when their kid has to face the consequences of their own actions. If this student admitted to using AI for a project, they knew it wasn’t fully their work, and getting a slap on the wrist with detention doesn’t seem harsh at all.
What’s the real harm here? They were marked down, not expelled. The idea that this minor consequence will ruin their elite college dreams seems like a stretch. It feels more like parents overreacting and trying to game the system rather than teaching the kid accountability. At the end of the day, AI tools aren’t going anywhere, and kids need to learn how to use them ethically, not hide behind their parents when they get caught.
And if TheRegister tries to ban me for using AI to generate a comment, I'll sue!
I don't think you're being particularly fair to single the EU out for that -- this is a problem with all public sector regulation, really. No government body is really able to react quickly enough to deal with changes in the market, and that was all fine 40 years ago, but the speed with which markets appear, evolve and mutate into unrecognisable forms is now just blisteringly fast.
Fair point, but then again, there's still a matter of training vs running. Training an AI is a lot more work than running one, and anything you could train on a Pi is not going to need the full power of the Pi, and that's whether or not you have this card in your Pi.
We're back to the days where the devs need a far more powerful computer than the users. To get a model that uses the Hailo to its fullest, you'll need a more powerful machine to train the model, probably with an NVidia 40-series card in it.
Well, with even MS Office now available as a web app, the days of the dedicated PC are numbered. The Hailo kit pushes this further, because if the only local beef needed is for AI, then users don't really need much of a computer at all.
However, the Pi will never be good for training AIs, so developers are going to need PCs still....
More than that, I was actually kind of worried that the Pi Foundation were missing the very real possibility that homebrew AI was going to be a new part of the "makerspace" that Raspberry Pi shoots for. The first Pi led the curve for makers by putting a computer at a price point where it was competitive with microcontrollers like Arduino at small scale, but not at large scale. The Pi was a useful stepping stone for embedded, because it was easy to use standard tools and just throw extra processing at it to avoid the need to optimise firmware early on.
Now that things are becoming clearer, I can see that AI is still going to need a PC... for training. I think the Foundation are really onto something here, because no-one uses the Pi as their only computer. This means we're going to get geeks using their NVidia megabeast video card for model training, and instead of only being able to use the models on that PC, they'll be able to run them on a much smaller thing.
I see no mention of this in the Bootnote itself, but the picture of Doctor Who in a Starfleet uniform is absolutely fascinating, because what I'm seeing is an actor who appears to be a mutual cousin of Peter Capaldi and David Tennant (or maybe as if Tennant had a kid with Capaldi's daughter). It's just mind-boggling that the AI could generate family resemblances that way. But logical, I suppose -- other than the fact that I don't see Matt Smith or Jodie Whittaker in the mix....
Every single one of "his" ideas is something I read in old mid-20th-century adventure comic annuals I bought at 2nd-hand fayres in the 80s. Vacuum tubes, self-driving cars, big one-piece rockets, rockets for international Earth travel... it was *all* in those books. He must have just been reading the same stuff, and he internalised it as "the future". And now he's mistaking things he saw presented as "the future" as a child for his own ideas and vision of a great future.
But there's a reason those things never got beyond kids comics...
No, that's not relative purity, really... it seems a lot like absolute purity to me, because you're doing a one-dimensional measure of "how many impurities does it have", with no concern for what they are.
Water that's swimming with bacteria is relatively purer than water containing nano-plastics from one point of view: the consumer can just boil the bacteria-laden water before they drink it, but there is no consumer tech to remove nano-plastics.
Companies have not necessarily reduced overall risk -- they have eliminated known problems that have had known solutions for generations, and they've introduced new problems with no known solution.