* Posts by Elongated Muskrat

2395 publicly visible posts • joined 14 Nov 2022

Ireland joins regulator smackdown after X's Grok AI accused of undressing people

Elongated Muskrat Silver badge

Re: Am I the only one ...

This is a fair point, and whilst there is perfectly understandable outrage about the CSAM aspect, this shouldn't lessen the outrage around using this sort of system to remove the clothing from images of adults.

Quite frankly, if this software is being used to do things that humans can do, such as synthesising images of people without their clothes, and without their consent, it should be subject to the same rules and regulations that a human would be.

It is a ridiculous situation where a person would (rightly) be hauled up in court for creating a fake nude of another person without their consent and publishing it, but a machine can churn out tens of thousands of such images in the same time and nobody faces any consequences for it.

Put simply, if the purpose, by design, of Grok isn't to produce explicit pictures of real people without their consent, why has it been trained on the data that allows it to do so?

The CSAM angle is a little different, because not only is it illegal to produce such images of real people, it is also illegal (in the UK at least) to produce such images of entirely fictional children. If safeguards that are absolutely reliable and cannot be circumvented cannot be put in place, and there are good reasons to believe that this is actually impossible (analogous to the halting problem), then there is an argument that the entire technology is flawed, or at the very least, that its operators should be held responsible for its use in a much more structured manner.

Oddly enough, though, I can't see a world where Elon Musk is held personally responsible for every illegal image his software produces, even though his grubby hands are all over rigging its algorithms.

Elongated Muskrat Silver badge

Not just the parents, the po-po might like to have a quiet word with you down at the station, whilst seizing all of your computing devices for forensic examination.

Elongated Muskrat Silver badge

Re: a cynical attempt to shutdown the last truly open forum on the Internet

I'm starting to think that about 90% of the AC posts here are the same person. There certainly do seem to be a lot of very deranged posts made behind the shroud of anonymity, which, again, is especially cowardly given that a user's handle hardly reveals their true identity.

Elongated Muskrat Silver badge
Trollface

Re: a cynical attempt to shutdown the last truly open forum on the Internet

Cool story, bro.

Elongated Muskrat Silver badge

Re: Am I the only one ...

I think the CSAM ingestion accusation is a bit of a stretch. Most diffusion models can generate fairly photorealistic images of clearly impossible things by bodging together known concepts (e.g. a pink elephant in a rice paddy, or a Ferrari drawn in the style of Da Vinci).

To do these things, the model would have to have been given training data that allowed it to identify an elephant and determine what colour pink is, and to have had source material of rice paddies and Ferraris, as well as at least a representative sample of the corpus of work by Da Vinci.

Similarly, to render a naked person, it needs to have had at least some source material representing what people look like under their clothes, or it's not going to know what genitalia look like.

I've not seen the supposed output of Grok (I've never even used Grok, since I won't touch Elmo's stuff on principle), so I don't know how realistic or convincing the output is, but the reporting would suggest that it is accurate enough to cause concerns, which implies that it has been fed accurate training data.

Once again, a statistical model cannot produce output that does not follow the patterns of its input. In simple terms, it can either provide an "average" of the data it is provided, interpolate between two data points, or extrapolate from a series. If it hasn't been trained on images of naked people, then this would be a case of extrapolation, and I simply do not believe that an unthinking piece of computer software would correctly extrapolate these images. The conclusion, then, is that it is either "averaging" from a large number of images of naked people, or interpolating between, say, the image of one person clothed, and another naked. You cannot interpolate between two points without knowing both points.

This is, of course, a simplification, but the point here is that "AI" cannot create anything new, and anyone who tells you it can is swindling you.
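To illustrate the interpolation versus extrapolation point, here's a minimal numpy sketch, with a polynomial fit standing in for a vastly bigger statistical model (the data is made up):

import numpy as np

# Toy "model": fit a degree-5 polynomial to noisy samples of sin(x) on [0, 3].
rng = np.random.default_rng(0)
x_train = np.linspace(0, 3, 30)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)
coeffs = np.polyfit(x_train, y_train, 5)

# Inside the training range, the fit interpolates sensibly...
print(np.polyval(coeffs, 1.5), np.sin(1.5))   # close agreement
# ...outside it, extrapolation diverges wildly from reality.
print(np.polyval(coeffs, 6.0), np.sin(6.0))   # nothing alike

The model only "knows" the region its training data covers; ask it for anything outside that, and you get confident rubbish.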

Elongated Muskrat Silver badge

Re: Am I the only one ...

I don't know about others here, but I just don't get the point of AI photo editing. Perhaps it's because, when I take a picture of something, I want to record what I can see is actually there and not some machine-edited version of it. Who is going round taking pictures and then thinking, "I want to show this to someone, or post it on my 'socials', to show what I've been up to, but rather than showing what I've really been doing, I want to lie instead"?

I'm left thinking of the TV advert that is doing the rounds (I think it's for a Google phone) which asks the viewer whether they think they should allow their phone to reveal anything it "knows" about a person to their friends. I would have thought the answer to this, from most people, would be a resounding "Hell, no." I'll be the arbiter of what I tell my friends, thank you very much, not an "AI" on my phone which probably relies on sending all my personal data for processing to a data centre in the least regulated part of the US.

As for using "AI" to "touch-up" photographs to completely remove people's clothes, since an "AI" is just a statistical model based entirely on its training data, it stands to reason that it wouldn't have this capability unless such images were included in its training data. If it can be used to generate pornographic images of children, then I think there should be some very pointed questions being asked about how and why it is able to do this, and whether the owners of the model should be investigated more closely. Who thought that feeding images of naked people into image generation software that isn't specifically intended to generate other images of naked people was a good idea?

How AI could eat itself: Competitors can probe models to steal their secrets and clone them

Elongated Muskrat Silver badge

You joke, but it's not exactly uncommon practice for university professors to claim the credit for work done by their postgraduates and postdoctorates, especially when it comes to publishing academic papers. This counts double if the person who did the actual work was female. It's a genuine problem in academia.

UK unveils telecoms charter to curb mid-contract bill shocks

Elongated Muskrat Silver badge

Re: Not enough

Legitimate, honest, fair landlords are entirely unaffected by the provisions in said bill. All those people who are trying to concentrate their wealth by buying second and third properties to rent out to people in insecure situations, at exorbitant rent, and without performing any maintenance, are parasites, and if they don't want to play by the rules, they should release that housing stock. Actually, scratch that, they should have that housing stock confiscated by the state and handed over to local authorities as much-needed social housing. It's about time we undid some of the social harms that Thatcher caused.

Edit - I will just add the observation that much stricter rules apply in many places, and, shock horror, the outcome which you postulate has not materialised.

Elongated Muskrat Silver badge

Re: Not enough

Starlink is not an option, because there is absolutely zero chance that I would ever buy anything from a company owned by Elon Musk. I would have thought that this was pretty clear. It's also quite expensive.

I looked into 5G; in my area, it's patchy, and more expensive than VM, so again, not a viable option.

Should I consider dial-up, or a piece of wetted string and two tin cans as well?

As for the "service is £100, reduced to £50" thing, that's not how VM advertise their services, and you know it. A quick check on availability, using a neighbouring property's address, tells me that their basic service is offered at £21.99 a month, increasing to £25.99 a month from April 2027. No mention whatsoever of it then getting a hefty increase after the 24-month contract term, and I'm willing to bet that you wouldn't find that out until you are presented with 15 pages of small print to read through when accepting the contract. This, my friend, is misleading advertising at its best.

Elongated Muskrat Silver badge

Re: Not enough

That works both ways though. If you come to the end of say a 24 month contract then by definition you no longer have to pay £50/month. However you cannot reasonably expect to keep the service indefinitely at the same price otherwise people would just never move.

Meanwhile, that's exactly how renting a residential property works, and yet people still move house.

I'd suggest that, on the termination of a fixed contract, they should be compelled to keep you on the same tariff on a rolling basis, whilst informing you that your contract term has expired, and giving you all the available options.

Imagine if your landlord decided to double your rent because you'd been living there for six months already? Incidentally, unscrupulous landlords who might do this are exactly the reason the government has introduced the Renters' Rights Bill. One of the few things the current government has actually got right, albeit too slowly, and without going nearly far enough to protect renters from exploitation.

Elongated Muskrat Silver badge

Believe me, if the alternative to Vermin Media was anything other than ADSL through a BT line, and I could get FTTH, then I'd have already jumped.

The random dropouts and short disconnects their service is periodically afflicted with play havoc with the RDP connections to the remote virtual machines I have to work on. It can go for months at a time without any issues, and then I have to suffer for days or weeks from slowdowns and disconnects, whilst their status page cheerily informs me that there are absolutely definitely no problems in my area.

Elongated Muskrat Silver badge

Details, please, of this automated line and number to press. The ****ers are putting my price up by £3.50 a month in April. If this is as simple as you suggest, I will look at finding a way to automate it to counter the greedy oiks.

Elongated Muskrat Silver badge

Not enough

Utilities, including telecoms, water, gas, etc., etc., need to be regulated properly, including not allowing price rises beyond RPI inflation, as well as limiting these rises to clear points within the contract's lifetime. They also need to be prevented from price hikes at the expiry of a contract, instead being compelled to roll the contract on, on a per-billing-period basis, at the current rate.

Telcos are particularly bad at this: they sign you up to an 18-month or 24-month contract at, say, £50 a month, and then, when the contract expires, up this to £100 a month unless you sign up for a new fixed-term contract. This behaviour should be banned; it has no legitimate business justification and is, in effect, a profit-gouging tax on the inattentive.

OK, so Anthropic's AI built a C compiler. That don't impress me much

Elongated Muskrat Silver badge

Re: Paging Ken Thompson, Ken Thompson

Certainly not if its training material contains "Reflections on Trusting Trust". Although, I'm sure that if you specifically asked it to compile an invisible backdoor into the Unix login command, you could then be certain that it would fail to do so.

Elongated Muskrat Silver badge

Re: Spotify

Their best programmers are tied up finding all the errors in the pull requests put in by the AI garbage generators.

Elongated Muskrat Silver badge
Pirate

Re: If I read all the code in the world .....

Let's feed it Befunge and watch it melt.

Elongated Muskrat Silver badge

Re: "That don't impress me much"

95% of humans can't create a C compiler, even if they were given a year of time on it.

At the risk of sounding glib here, that's hardly a meaningful bar: 95% of humans couldn't write a working C program, let alone a compiler for one, simply because far fewer than 5% of the population have any familiarity with C whatsoever. Oddly enough, most people aren't programmers.

Elongated Muskrat Silver badge
Boffin

Re: "That don't impress me much"

If you want good pattern recognition, you don't need to spend millions on data centres, you need to pay a decent salary to one autistic person. This is how computer science has progressed for the last century or so, and how it will continue to do so.

The fact that most people who work in the field of software engineering are neurodivergent in one way or another, and are pretty good at pattern recognition already, means that most of us are deeply cynical about the current "AI" fad, because we can all see that it's one huge bubble. It's also a great litmus test for bullshitters in the field, because anyone buying wholeheartedly into the hype sticks out like a five-mile-high sore thumb painted fluorescent green with a flashing sign above his head saying, "bullshitter".

The statistical models that marketing bullshitters are referring to as "AI" have their uses, when statistical analysis is required. They can't create anything any more than I can make your head explode from across the room à la "Scanners".

Elongated Muskrat Silver badge

Re: "That don't impress me much"

Taking a million humans, and munging them all up doesn't give you a superhuman, it gives you a gooey mess.

The belief that a sufficiently obfuscated model will somehow gain the ability to create something that isn't a variation of its inputs is pure magical thinking.

There's an old adage in computing: GIGO; garbage in, garbage out. Except with "AI", it's (supposedly) carefully curated data in, garbage out. Just because you make the machine too complicated to understand doesn't mean you are imparting it with any abilities that the component parts don't have, and to create something new requires consciousness, which machines do not possess.

Elongated Muskrat Silver badge

Re: Whistling past the graveyard

Absolutely. When people talk about "AI" "creating" or "thinking", or "understanding", these are all category errors.

To put it in the language of data science, a principal components plot doesn't create the data it models.

Elongated Muskrat Silver badge

My real worry remains that AI-crazy companies will fire developers willy-nilly and replace them with AI to make their bottom line look better.

It might make their AI-hallucinated bottom line look better, but that won't make a lot of difference when the company goes bust, because the quality of whatever product they are selling goes through the floor, and all their customers flee to their competitors who weren't silly enough to bet the farm on magic beans.

The UK government isn't spending much taxpayer cash on X

Elongated Muskrat Silver badge

Re: Those figures cannot be right

I know people who still use it, because of communities that have built up on there, which have not (yet) found another home. Some are moving (slowly) onto Bluesky, but as an alternative it still doesn't really have much uptake. However, none of these people would be daft enough to consider a curated and targeted social media bullhorn as a legitimate source of reliable news. They're using it purely as a communications medium between themselves.

As far as I can tell, most of the people who really are dumb enough to do so, are the sort of people who consider conspiracy theory rants on YouTube as an authoritative source of information, and who are so enlightened by it, that they then feel the need to stand in your nearest town centre with a display made of old cardboard boxes and laminated print-outs ranting about how the Earth is flat.

Elongated Muskrat Silver badge

10.8 million families use X as their main news source

Assuming "family" means two parents and the average 2.4 children, that works out as 47,520,000 people out of a current population of 69,931,500, or nigh on 68% of Brits using NaziNet as their principal source of information. That's a hairy load of old horseshit, if you don't mind me saying.

GitHub appears to be struggling with measly three nines availability

Elongated Muskrat Silver badge

If you follow the link to the "missing status page" you can see for yourself that the time period was October, with a measly 89.9% uptime. A total of 14h 37m of major outages, plus a whole load more of degraded service, according to those figures. Although, to be fair, the way those figures are aggregated, they include outages in Copilot, which nobody in their right mind would care about. Automated AI reviews of PRs can be marginally useful, but they're hardly essential.
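For scale, a quick back-of-the-envelope sketch of what the various "nines" actually buy you (assuming a 30-day month; October is a shade longer):

# Allowed downtime per month for a given availability target.
def allowed_downtime_minutes(nines: int, days: int = 30) -> float:
    availability = 1 - 10 ** -nines    # e.g. 3 nines -> 0.999
    return days * 24 * 60 * (1 - availability)

for n in (2, 3, 4):
    print(f"{n} nines: {allowed_downtime_minutes(n):.1f} minutes/month")
# 2 nines: 432.0, 3 nines: 43.2, 4 nines: 4.3 minutes/month

On those figures, 14h 37m of major outages is roughly twenty times the entire three-nines budget on its own.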

Elongated Muskrat Silver badge

Re: It's simple...

Indeed, and note that this says nothing about the architecture once hosted on Azure; Microsoft will happily sell you a hosted Linux instance just as readily as a Windows one.

Elongated Muskrat Silver badge

Re: It's simple...

I'd be very surprised if GitHub's git servers weren't actually running Linux. The OS isn't the problem, it's the platform (Azure) that appears to have some problems with flakiness.

Elongated Muskrat Silver badge

Microsoft might not be offering two nines as their SLA, but that's what they're delivering according to the "missing" status page. Not one single component of their service is currently meeting three nines.

I couldn't care less if their Copilot is at 0 nines, but their outage on Monday meant that it took several attempts to perform basic git operations. The website showed the chintzy unicorn when trying to view branches in the repo I was working on, and simple git operations, such as pulling the latest version of a branch down to my development environment in Visual Studio (from an Enterprise subscription to GitHub), were getting various HTTP errors (mostly 500s). How you manage to break something as basic as a git server in this way is beyond me, and for a company the size of Microsoft to drop the ball this badly should be raising several pointed questions about why their service is so bad.

AI video company arouses fury by boasting about replacing creative jobs

Elongated Muskrat Silver badge

This is my point, though. If you have a critical task that you need done properly, you want a deterministic outcome. You want the same results every time, given the same inputs.

If you have a non-deterministic task that nevertheless has to follow applicable rules, regulations, and laws, and which shouldn't produce wildly inaccurate results, you need a human to do it.

If you have a job where it doesn't matter if the output is mush or wildly wrong, then generative AI is the tool for you. It's just that a human can produce sloppy bullshit too, and in the long run is likely to cost less, and not come up with outputs so wildly inaccurate that they represent a risk to your business. The only industry I can think of where this sort of thing might be applicable is sales and marketing, and people are already turned off by adverts that are obvious slop. Well, to be honest, most people are turned off by adverts full stop, but there's so much money circulating in that industry that it's taking a while for the increasing backlash against intrusive advertising to be noticed.

Elongated Muskrat Silver badge

It isn't good enough for most tasks, though.

The whole point of having a computer do something rather than a human, is that the computer will rigidly follow a set of rules and produce deterministic results. A human will get tired or bored, especially with repetitive tasks, and make mistakes, so certain tasks are better automated with such a rigid, rules-based system.

Tasks that are left to humans, such as those that involve weighing up lots of information and making decisions, ones that require in-depth knowledge, or creative tasks (i.e. any task that actually required thinking) can't be done by LLMs, because they are nondeterministic. This can be seen when foolish lawyers use "AI" to cite illusory cases and get found in contempt of court, or when "AI" image generators produce errors all over the place (impossible limbs, too many fingers, lettering that looks like Klingon, etc. etc.) This is because "AI" is a massive misnomer; these are complex statistical models with a random element, there is no intelligence involved whatsoever.
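To put some flesh on "a random element": generation ends in a weighted random draw over candidate tokens, which is exactly why the same prompt can produce a different answer every run. A toy sketch (the distribution below is entirely made up):

import random

# Made-up next-token distribution for the prompt "the cat sat on the ..."
weights = {"mat": 0.6, "sofa": 0.25, "roof": 0.1, "lawnmower": 0.05}

def next_token() -> str:
    # Sample in proportion to weight: same prompt, different output each time.
    return random.choices(list(weights), weights=weights.values())[0]

print([next_token() for _ in range(5)])   # e.g. ['mat', 'sofa', 'mat', 'mat', 'roof']

Turn the sampling off and you get determinism back, but then every prompt yields the single most statistically likely answer, which is not the same thing as the correct one.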

So, what are the use cases? Well, LLMs do have non-generative uses which aren't terrible. Because they are statistical models based on large data sets, they can be used for various types of data analysis: for example, reviewing large numbers of cell cultures and flagging those showing signs of abnormal cells for review by a human, which is potentially useful for cancer screening, or analysing an ageing code-base with poor documentation to provide summaries and code analysis. Again, though, these still need review by a human.

"AI" is demonstrably counterproductive for legal uses, might be useful for marketing, although people already perceive marketing materials that are obviously produced by "AI" as cheap and inferior, and this only really demonstrates that the whole field of marketing is as much bullshit as "AI" is (calling it "AI" in the first place is the fault of marketing "professionals"), and, if used to write code, produces bloated, non-performant code full of security vulnerabilities that a real programmer must waste their time correcting.

Is a formatting tool for LinkedIn actually useful? To be honest, I have never used LinkedIn, so I don't know. I'd suggest, however, that a tool for formatting your entry in what is essentially a social media site for marketing wankers, is going to be of limited usefulness. Being barely functional for a task that isn't really necessary or appreciably useful in any way isn't exactly a great boast. As for Google; well, everyone I know routinely ignores the "AI" results from Google, because nine times out of ten they are plain wrong (the other time, they are just misleading). "AI" is woefully bad when asked to demonstrate any sort of actual intelligence, simply because it has none.

Elongated Muskrat Silver badge

When AI inevitably flops (nondeterministic bullshit generators don't have that many real-world applications), we may well see some job losses on various boards. Most of the boardroom parasites will take a golden parachute and go on to exploit someone else though.

Meanwhile, companies that have laid people off and replaced them with "AI" are already reaping the benefits of increased costs, and this is during the loss-leader phase of the "AI revolution". Just think how expensive it's going to be for those companies once they have to start paying through the nose for a dependency they can't free themselves from. Perhaps I should start a sweepstake for which "AI leveraged" company is going to crash and burn first.

Tech support chap invented fake fix for non-problem and watched it spread across the office

Elongated Muskrat Silver badge

Re: Hilarious!

Well, it is solar flare season (solar maximum) at the moment, and one well-timed solar prominence could do more than just make one computer crash. All those SpaceX satellites make for a juicy target for a few billion tonnes of charged plasma.

UK justice system unplugs from ancient datacenters after five-year slog

Elongated Muskrat Silver badge

Re: Sir Brian Leveson recommends numerous uses of AI

And also very expensive to them, both in reputation, and fines, when they are found in contempt for it.

Elongated Muskrat Silver badge

Even then, I don't see how an LLM would be any better than a human at second-guessing whether a defendant is going to change their plea at the last minute, a witness not turn up to court, or determine how long a jury might take to deliberate.

GitHub ponders kill switch for pull requests to stop AI slop

Elongated Muskrat Silver badge

Re: Pull Request Hell

Surely the answer to this is firstly to automatically close all PRs that aren't properly linked to an issue, and then to be pretty harsh in rejecting anything that isn't commented and structured well enough for the maintainer to immediately see what it intends to accomplish. Copy-paste "Pull request closed; see project documentation for correct submission process" or similar. Just because it's open-source doesn't mean it should be low quality.
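As a sketch of how cheap that first triage pass could be (Python against the GitHub REST API; the repo name, and treating "properly linked" as simply "the PR body mentions an issue", are both my own assumptions):

import os, re, requests

OWNER, REPO = "example-org", "example-repo"    # hypothetical project
API = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
           "Accept": "application/vnd.github+json"}

# Fetch open PRs and close any whose description doesn't reference an issue.
for pr in requests.get(f"{API}/pulls", params={"state": "open"}, headers=HEADERS).json():
    if re.search(r"#\d+|issues/\d+", pr.get("body") or ""):
        continue   # mentions an issue; leave it for human review
    requests.post(f"{API}/issues/{pr['number']}/comments", headers=HEADERS,
                  json={"body": "Pull request closed; see project documentation "
                                "for the correct submission process."})
    requests.patch(f"{API}/pulls/{pr['number']}", headers=HEADERS,
                   json={"state": "closed"})

(Comments on a PR go through the issues endpoint; that's a quirk of the API, not a typo.)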

Elongated Muskrat Silver badge

Re: Wait, What?

I agree with all of this; and "agree on who is going to do the work" and "acceptance criteria" can be as little as a contributor informally asking the maintainer whether a particular feature would be useful. The easiest way to do this would usually be to create the issue for the feature and @ the maintainer. I'd suggest that if a contributor can't manage this, then their code is likely to be slap-dash too.

As for bugs, I can see two main cases. In the first, someone finds a bug and has no idea, or no willingness, to work out how to fix it, in which case it's a bug report for the maintainer to fix. The other case is someone finds a bug which they are competent to fix themselves, in which case it's still good manners to raise the bug report, and put the details of both the problem and the fix in it, before getting approval to proceed with the PR for it. The odds are that if it's a serious bug, the maintainer already knows about it, and if they don't, then documenting it in this way is going to be useful to them, and will make their job easier.

Elongated Muskrat Silver badge

Re: Wait, What?

My point here is that a pull request comes pretty close to the end of a proper change management process, not the start. If someone wants to contribute to a project, surely the first step would be to raise an issue and get agreement from the maintainer (or maintainers) on what that issue should cover and its acceptance criteria, and then agree on who is going to do the work, before writing a single line of code, let alone putting in a PR for it. GitHub should make it easy for that to be the default process, not the "anything goes, here's a hundred pull requests from random strangers" approach.

I don't maintain public repos (my personal projects are private, and my work ones are private to the organisation I work for), and I do understand that open-source projects might like to encourage contributors in a less formal and process-driven way than commercial ones. However, they should make doing things "properly" the default way of doing things, or at least offer the option when creating a repository for how pull requests should be managed.

What I have done in the past is to raise a bug report with a publicly maintained repository, which involved clicking a link on an error message in that application, which automatically filled in the issue details in GitHub (after logging in); this shows that this is obviously possible to do, and to automate. Even though the error in question included a stack trace, and it was clearly written in a language I am familiar with, I still wouldn't have dreamed of downloading the source, finding the bit of code in question, modifying it, and submitting a change in a PR, simply because of the number of assumptions I would have had to make about the intended function of that code. As it happens, it would have been wasted time as well, as the maintainer had already fixed the bug in a nightly build.

Elongated Muskrat Silver badge

Wait, What?

Moraes said GitHub is considering various options. These include possibly giving maintainers the option to disable pull requests entirely or to restrict pull requests to project collaborators; the ability to delete pull requests from the interface (to avoid having to look at AI slop); more granular permission settings for creating and reviewing pull requests; triage tools, possibly AI-based; and transparency/attribution mechanisms for signaling when AI tools are used.

GitHub allows pull requests from anyone, without any controls? That sounds... ungood.

I would at least expect contributions to be restricted to the repo owner by default, and for them to have to explicitly allow PRs from unknowns.

UK watchdog to rule on £246M Post Office subsidy over Horizon scandal and IR35

Elongated Muskrat Silver badge

Re: Private company, public money?

A working postal service would be nice, wouldn't it? Post Offices, however, are completely separate from the actual delivery service (Royal Mail), apart from serving as a collection point for letters and parcels, and a sales point for stamps. Both of these are rather redundant, since you can buy stamps in pretty much any supermarket or corner shop, and RM collects from post boxes (now being modified to accept larger parcels), from parcel collection lockers, and from your own home (now for a fee).

The Royal Mail themselves are an absolute shambles. I have been, for instance, awaiting delivery of a parcel using their 48™ Service (note 48™, not 48hr) which has been with them now for eight days. They're a private company now, and the results of privatisation are all too clear to see, following the same pattern as everywhere else: a diminished service at a higher price. I think I may have already made the point, elsewhere, several times, that the root of this problem is the neoliberal economic model that was popularised under Reagan and Thatcher, characterised by private profits for the richest, deregulation, and corruption.

Elongated Muskrat Silver badge

Re: Private company, public money?

Well, if private criminal prosecutions brought by the PO, sidestepping the DPP were good enough for them to convict innocent sub-postmasters, how about we make an exception, and allow them to be used to prosecute these snakes, and find them "guilty by default" in the same way? After all, if we get it wrong, they can fight it through the press, and an ITV drama, and finally get the government's notice to get some redress 20 years later, then we can think about the taxpayer subsidising them.

Yes, I know this is entirely unreasonable, and unlikely, but it does irk me that those in positions of power can abuse them to both cause great injustices such as this, and evade justice themselves so effectively.

Elongated Muskrat Silver badge

Private company, public money?

Remind me again why the taxpayer should be subsidising the compensation, and not the people responsible?

X marks the raid: French cops swoop on Musk's Paris ops

Elongated Muskrat Silver badge

Re: Enough of this charade ..

I like to maintain an air of mystique; a heady musk if you will.

Elongated Muskrat Silver badge

Re: Enough of this charade ..

Have you considered upgrading to one of the premium versions of reality, rather than the basic one that is littered with targeted propaganda?

Elongated Muskrat Silver badge

Re: LOL

...and it turns out (allegedly) that he couldn't even manage to get himself a party invite to Paedo island, because he was too cringe for the convicted child rapist who was happy to invite the Andrew formerly known as Prince and Mr "grab 'em by the pussy" along.

Elongated Muskrat Silver badge

Re: Enough of this charade ..

You do know you could use a handle that doesn't identify you, rather than always posting AC, don't you, Elon?

For example, in real life, I am neither elongated, nor a muskrat. Believe it or not, I'm not even a rodent of any sort!

Elongated Muskrat Silver badge

Re: Thanks for the French!

Twitter has ignored requests for comment, not even a poop emoji, which is surprising because they must be replete with those

They're currently needed for the Donald's (tame) press conferences.

Notepad++ update service hijacked in targeted state-linked attack

Elongated Muskrat Silver badge

Re: ... a late comment on NP++

If the author controls the signing keys, and changes to the auto-updater require the update to be signed with the private key, then the hosting provider can't just slip in an (unsigned) replacement.

It seems this was down to a weakness in the supply chain, where signatures were not being checked by the updater (this is my understanding, correct me if I'm wrong). Unless someone manages to get hold of the author's private key to sign a bogus executable, they can no longer implement a MITM attack.
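For the avoidance of doubt, the check the updater should have been doing is cheap. A minimal sketch, assuming an Ed25519 release key (whether NP++ actually uses Ed25519 is an assumption on my part):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# The author's public key ships baked into the updater; only the author holds
# the matching private key used to sign each release.
PUBLIC_KEY_BYTES = b"\x00" * 32    # placeholder for the real embedded key

def update_is_genuine(update: bytes, signature: bytes) -> bool:
    key = Ed25519PublicKey.from_public_bytes(PUBLIC_KEY_BYTES)
    try:
        key.verify(signature, update)   # raises if payload or signature was tampered with
        return True
    except InvalidSignature:
        return False                    # refuse to install: possible MITM

A hijacked host can serve whatever bytes it likes, but without the private key it cannot forge a signature that this check will accept.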

Firefox makes AI optional, like it probably should have been all along

Elongated Muskrat Silver badge

Wrong way round

The browser maker this week announced a new set of AI controls for Firefox, headlined by a global kill switch that lets users disable every current and future AI feature in one go.

If included in Firefox at all (and there is a strong argument that if it exists at all, then it should be an optional extension and not included in the main product, for reasons of bloat), then it should be turned off by default, and strictly opt-in.

Still, using Firefox is a million times better than letting Chrome hoover up all your personal info and send it home to Alphabet.

Techie's one ring brought darkness by shorting a server

Elongated Muskrat Silver badge

Re: No Capes, No Scarves, No Ties, No Necklaces, No Watches, No Bracelets, No Rings

"Motorcycle not allowed in it."

Autonomous cars, drones cheerfully obey prompt injection by road sign

Elongated Muskrat Silver badge

Re: Teenage boys will be salivating...

If the trolley problem applied to self-driving cars, you'd have a bigger problem: the "trolley-car" in the trolley problem is a runaway with no brakes.

The default position for a self-driving car (and for a real human driver) when presented with a potential hazard should be to slow down and / or stop. If something is about to collide with you, then usually the best course of action is not to speed up to try to avoid it, creating further hazards, but to brace for impact.

Cops put Microsoft Copilot in holding cell after controversial hallucination

Elongated Muskrat Silver badge

The problem really is that "AI" is not a search tool, for exactly the reasons you state - it composes "likely answers" based on its training data, many of which turn out to be wild hallucinations.

Quite frankly, the fact that it is not only being used as a search tool, which is pretty harmful in terms of propagating misinformation and disinformation, but actively promoted as a search tool by the likes of Google, is the very root of the problem. It's not only misleading, but can be actively harmful if taken at face value.

It's like if you had a theoretical automated "smart" tool box that would hand you the tool most suitable for the job you are performing, which it gets mostly right most of the time, but it sometimes randomly offers you a hammer instead. Sure, a hammer might do the job in some cases, but if you're about to do a spot of soldering, hitting a delicate piece of electronics with a hammer instead, after being assured that this is how to fix it, may not be ideal.

Personally, I think the answer here is regulation. Search sites should not be permitted to present incorrect "AI" output in the form of a "correct" answer. Even with heavy disclaimers (which I'm sure they already have), people will take the first answer served up to them as correct, and my opinion is that these "novel uses for AI" should be turned off or hidden by default, rather than pushed into our faces as a fait accompli.