The Register Home Page

* Posts by that one in the corner

5065 publicly visible posts • joined 9 Nov 2021

Google AI red team lead says this is how criminals will likely use ML for evil

that one in the corner Silver badge

Not having a canonical copy of the dataset is just Bad Science

From just the abstract of the paper linked to by the article under the description

> "Data poisoning has become more and more interesting," Fabian said, pointing to recent research

>> Our first attack, split-view poisoning, exploits the mutable nature of internet content to ensure a dataset annotator's initial view of the dataset differs from the view downloaded by subsequent clients. By exploiting specific invalid trust assumptions, we show how we could have poisoned 0.01% of the LAION-400M or COYO-700M datasets for just $60 USD.

So they don't actually have a dataset carefully stored away? Instead they are just reading a page from the web and saving annotations about what that page says *today* and are then horrified to learn that tomorrow it may say something totally different! And somehow this fundamental feature of the web is now a Bad Thing and indicative of naughty people deliberately poisoning their precious dataset!

Heck, even the worst of the "scribble it down, don't edit, just chuck it onto the blog, never update it again" pages can be changed day to day by the content of the comments at the bottom.

Do you think that the last sentence in the abstract means they told the dataset collectors to save a copy of the page before bothering to annotate it:

>> In light of both attacks, we notify the maintainers of each affected dataset and recommended several low-overhead defenses.

Ah, no, they wanted "low overhead" so just doing the science properly (as in, with repeatability) is probably going to be ignored. And no chance at all that they'd ever think to help fund the Internet Archive (or even hosting a mirror!) and only annotating unchanging content from there!

Oh, it would be so good, when I read the whole paper (soon, not tonight), to find that the above is all just misplaced cynicism.
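For what it's worth, the "repeatability" fix is cheap to sketch: record a cryptographic digest of the exact bytes at annotation time, so later downloaders can tell whether the page has shifted underneath them. A minimal sketch in Python (the record format and function names are mine, not the paper's):

```python
import hashlib

def annotate(url: str, content: bytes, label: str) -> dict:
    # Store the annotation together with a digest of the exact bytes
    # the annotator actually looked at (hypothetical record format).
    return {
        "url": url,
        "label": label,
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify(record: dict, downloaded: bytes) -> bool:
    # A later client re-downloads the URL and checks the content still
    # matches what the annotator saw; if not, drop it from the dataset.
    return hashlib.sha256(downloaded).hexdigest() == record["sha256"]

record = annotate("http://example.com/cat.jpg", b"original bytes", "cat")
print(verify(record, b"original bytes"))   # unchanged page: keep
print(verify(record, b"poisoned bytes"))   # changed since annotation: reject
```

A hash is not a mirror, of course - you still can't re-train from it - but it does stop tomorrow's edits from silently entering today's dataset.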

that one in the corner Silver badge

The Defenders of the LLM against - who, precisely?

> Anyone can publish stuff on the internet, including attackers, and they can put their poison data out there. So we as defenders need to find ways to identify which data has potentially been poisoned in some way

You mean, like not blindly shovelling up every last piece of trash you find on the Internet!

Or even, you know, paying to get data from known good sources!

Do you think he has ever considered exactly who he is defending the model from - and maybe realising that the worst problems are coming from inside his own organisation, the bean counters, the management - and the brash young turks on the cutting edge, all of whom see the 'Net as a source of free stuff they are entitled to?

Infosys launches 'sonic identity' – an aural logo to 'reinforce brand purpose'

that one in the corner Silver badge

Re: Ernest Fiddler

> jingle warfare

With the session gorilla from Cadbury's "In the Air Tonight"

that one in the corner Silver badge

Re: Sound jingle

Ah, but you have missed the genius of the Infosys sound.

I bet that, if you heard it now, you would be able to identify your company's jingle very quickly. As you should be able to with any decent ident: "Bing Bong - Avon Calling", or the Intel Irritant.

But this noise? It does have a repeat on the piano, which you might learn to recognise, but it is buried until around the 29 second mark! Which is well after the point you've got bored by the plinky: I subjected myself to the tune thrice[1] before I actually noticed that tag, as I'd tuned out by then.

[1] I was checking[2] before slagging the tune off as having absolutely nothing recognisable in it

[2] due diligence can really take its toll on your soul, gimme some rock'n'roll to heal this troll

that one in the corner Silver badge

Next week: scratch'n'sniff for the Cisco Corporate Smell

"As you run your hand over the richly textured router shell and release the gentle aroma of our signature perfume 'PacketLoss'"

Or the haptic approach:

"Open the magnesium alloy laptop and feel the Tingle of Excitement, tuned to Your Environment" (aka our power supply has a dodgy connection and you get a 50Hz buzz in the UK or a 60Hz in the US).

DARPA tells AI world: Make a model that secures software, there's $25M in it for you

that one in the corner Silver badge

<nelson>Ha Ha</nelson>

Looks like someone didn't click the right button when trying to reply to RichardEM

Pope goes fire and brimstone on the dangers of AI

that one in the corner Silver badge

> Did any of his predecessors have anything similar to say about ...

The Church was not terribly happy about the printing press; except when they were using it, of course (and it doesn't really matter *which* Church you look at - except, perhaps, for the Quakers). Gutenberg himself was deemed Ok (printing Catholic Bibles helped; The Big G. was quite shrewd) but then everyone else got involved, including The Minions Of Satan (aka whichever sect you weren't in).

that one in the corner Silver badge

Re: It won't happen

Ed Sheeran's 60 years of Hollywood takes us back to the dim, dark days of 1963, not 1864!

Jules Verne passed away in 1905; his "From the Earth to the Moon" came in 1864, "Around the Moon" in 1870. Georges Méliès "Le Voyage dans la Lune" came out in 1902 and probably wasn't trying to be scientifically accurate. Compare that to "Destination Moon" in 1950, which is a famously pretty good depiction of space (the worst offence being the single-stage-to-Lunar-Landing-and-return rocket).

Okay, we do then end up with "Gravity" which makes claims that it really, really does not live up to. But the vast, vast majority of space travel depicted in Hollywood is even more unrealistic and deliberately so: FTL drives, artificial gravity everywhere, acting like space only has two dimensions (and other "Space is really just the Ocean" tropes).

Now, AI in films from 1963 to the present day, as Mr Sheeran referenced?

Well, of course we have "2001: A Space Odyssey" in 1968, showing what happens when you deliberately tell an AI to lie. And, according to some reports, HAL was a lot more polite than ChatGPT or Bing has been.

"Colossus: The Forbin Project": a chat bot is given access to the world's network and then demands to be connected to his counterpart? No weird "emotions" portrayed, just the machine doing what it wanted to and we were stupid enough to connect it to All The Things (like missiles); the use of a pen-plotter was good. Just too many flashing lights and weird modules in the hardware (but if you tried to tell Hollywood that AI actually needs to be built out of Video Game Hardware, back in 1970, who'd believe you?).

"Silent Running", 1972, showed that the robots had to be reprogrammed to do something outside of their normal duties, which seems reasonable. As humans, we read emotion into them (e.g. looking up into deep space in a soulful fashion) but all the anthropomorphism was actually coming from Bruce Dern's character.

"Dark Star", 1974, discussed the disconnect between the AI and reality - hmm, seems like we've been having that conversation lately.

Then we have AI (in terms of what we are currently being sold as "AI") just in the background, not the focus of the main story or protagonists. Self-driving cars aplenty (e.g. "Minority Report" in 2002 - or the 1966 Batmobile, which had a working "leave parking spot and come to me" bat-function).

Frankly, from this Hollywood has done a far better job of presenting AI than it ever has done presenting Space!

Google, you're not unleashing 'unproven' AI medical bots on hospital patients, yeah?

that one in the corner Silver badge

Re: "Med-PaLM 2 is not a chatbot ..."

PS

> he acted literally as described, Mr. Spock could not actually cope with the real world, and if you watch the first series attentively, you find that he does exhibit emotional responses all the time, albeit (because of his culture) with attempted restraint.

Vulcans were *always* described as being emotional - too much so, to the detriment of their society, so they practised restraint simply to stop themselves from constantly punching holes in walls (or anyone who happens to be around)[1]. All versions of Spock, TOS onwards, have been portrayed this way, as have other vulcans[2].

[1] beware of flying Plomeek soup during the Pon Farr.

[2] Spock's dad was an emotional mess during his last diplomatic mission, which was saved by dumping his mess onto Picard. Poor Jean-Luc.

that one in the corner Silver badge

Re: "Med-PaLM 2 is not a chatbot ..."

Are we concerned about creating a "general AI" or just a workable system for medical use, i.e. something better than the MediBot that Google is trying to foist on us?

Personally, right now, I'm only thinking of the latter and whether that can be done usefully.

Philosophical questions about "can AI have emotions" are rather out of scope here, IMO.

that one in the corner Silver badge

Re: "Med-PaLM 2 is not a chatbot ..."

> using the brain to examine the brain will...

Replicating how the brain works is getting into the wild and woolly realm of what is now referred to as "General AI" and even then only one view of it (another being that it doesn't have to replicate how *we* do it...).

BUT that is NOT what we want from a medical system. We want something that can be verified and validated, something that can EXPLAIN how it reached a conclusion and how it rejected other paths.

We even train doctors to be able to explain their lines of reasoning, from the quizzing at rounds to the formal procedures of the regular mortality board enquiries.

> that's something we still don't understand the mechanisms of, so we can't build it into any model

We have had successes with explicitly creating models of reasoning chains, such as Expert Systems (XPs). Which are fundamentally capable of explaining how they reached a conclusion AND, far more importantly, how they have NOT been able to reach a conclusion, even including whether or not they have the ability to complete the task (but only if you can provide data on X and Y) or it is just outside their range. Unlike what we have seen from LLMs where they will gleefully reach any old conclusion, even if it is a pack of lies.
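That explain-or-admit-defeat behaviour is easy to illustrate with a toy forward-chaining rule engine (rule and fact names invented for the sketch; a real XP shell is considerably richer):

```python
# Toy forward-chainer: each rule is (required facts, conclusion).
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(initial_facts):
    facts, trace = set(initial_facts), []
    changed = True
    while changed:
        changed = False
        for needed, conclusion in RULES:
            if needed <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(needed)} => {conclusion}")
                changed = True
    return facts, trace

# With sufficient data it reaches a conclusion AND can show its working:
facts, trace = infer({"fever", "cough", "short_of_breath"})
print(trace)

# With insufficient data it reaches no conclusion - rather than
# gleefully inventing one:
facts, trace = infer({"cough"})
print("no conclusion" if "refer_to_doctor" not in facts else "referred")
```

The `trace` is the whole point: every conclusion arrives with the chain of rules that produced it, and an empty trace is an honest "I don't know".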

There are plenty of reasons why XPs and their ilk are not being used here (but that is a rant for when I'm not using a tablet's touch keyboard - ouchie). Suffice it to say that training LLMs is seen as easy (all you need is a shovel, CPU time and naivety) and there is no-one who has to stand up and take responsibility for the end result (there is no domain expert who wasn't as good as everyone thought, no Knowledge Engineer who clearly failed to translate his words into code).

Money + easy shovelling + zero personal responsibility = kerching!

that one in the corner Silver badge

Mostly good questions, but:

> What is the frequency with which Google fully or partially re-trains Med-PaLM 2? Does Google ensure that licensees use only the most up-to-date model version?

Even more so than with other programs, with anything medical you shouldn't be asking for the latest version but the version with the least problems.

Especially when, as with any non-trivial neural net, you do not know what is actually going on inside and can not directly show that the newly-trained beast is actually better than the previous one: ChatGPT study suggests its LLMs are getting dumber at some tasks

One weekend's TwitX chaos brings threats from Japan; indemnity promises for users; prominent account seizures

that one in the corner Silver badge

Re: "I imagine it'll either be nursery school hair pulling"

> no one tunes in a TV set these days[*], and probably hasn't in over 50 years

FWIW, only 24 years ago, I was using a b&w portable with a tuning knob. Not that there was that much on.

We'd pay good money to see... oh dear, Elon Musk 'needs an MRI scan'

that one in the corner Silver badge

With the word wrap on this screen (and postprandial doziness) misread that last line as

> That horse sized duck fly.

Now wondering what a duckfly is and whether a horse-sized one is worse than a duck-sized horsefly.

And yet that still makes more sense than the actual topic of the article!

Boffins say they can turn typing sounds into text with 95% accuracy

that one in the corner Silver badge

Re: Bach, Beethoven or Mozart?

Spotify reports an unexpected upsurge of interest in Leroy Anderson's music, one piece in particular.

that one in the corner Silver badge

Given that coffee-powered key bashing is SOP for many, and the advice given is to "just change the style of your typing", maybe we'd be better off with a few minutes of tai-chi and zen before typing in the password for the morning log-on.

Or on a morning after the night before, log in and then down the alka-seltzer: the hiss will also help mask the clicks.

that one in the corner Silver badge

Re: Practical attack?

The paper explicitly states that they are not using this sort of n-gram approach - there is a section discussing the fact; look for the phrase "Hidden Markov Models".

Of course, there is the chance that the model itself is learning to look for this sort of pattern, somewhere within its black box of nadans.

However, as they (and the Reg's article) point out, there is concern about using this attack against passwords, which - hopefully - aren't subject to that form of analysis in the first place (even the longer pass-phrases used in some setups are short enough to confound basic "etaoin shrdlu" attacks).
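For the curious, the reason "etaoin shrdlu"-style analysis gets no grip on a decent password is easy to demonstrate - a rough sketch (the frequency ordering and the 12-letter cutoff are my own crude choices, not anything from the paper):

```python
# Rough "etaoin shrdlu" demonstration: prose leans heavily on a dozen
# letters, a well-chosen password doesn't.
ENGLISH_BY_FREQUENCY = "etaoinshrdlucmfwypvbgkjqxz"

def common_letter_fraction(text: str) -> float:
    # Fraction of the text's letters drawn from the twelve most
    # common English letters.
    letters = [c for c in text.lower() if c.isalpha()]
    if not letters:
        return 0.0
    common = set(ENGLISH_BY_FREQUENCY[:12])
    return sum(1 for c in letters if c in common) / len(letters)

print(common_letter_fraction("the quick brown fox jumps over the lazy dog"))
print(common_letter_fraction("Xq7#Vz!9Kw"))  # nothing for a frequency prior to grip
```

Even a pangram, deliberately built to use the whole alphabet, still draws most of its letters from the common dozen; the password draws none.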

that one in the corner Silver badge

Re: Yes, very practical.

> a non-trivial and non-obvious way

I'll certainly grant the non-obvious part, as it is almost axiomatic that neural nets tend to spot oddities in the data - and then are incapable of explaining what they're actually doing, so even if that would fall under "obvious once you know the trick" you will never be allowed to peep behind the curtain.

that one in the corner Silver badge

Re: Yes, very practical.

No, no, no, that won't do at all.

You aren't playing the Modern Game here, with all this discussion of basic knowledge of character use in language and (probably) old-fashioned signal processing to separate the individual clicks. You can't go around suggesting that this is something that could have been done for decades.

> without relying on a language model. Instead, they used deep learning and self-attention transformer layers to capture the sounds of typing and translate it into data

> Those were then analyzed by a deep learning model, which fed them into convolution and attention networks to guess which particular key, or sequence of keys, was pressed.

There, you see: the Proper Modern Way is to just feed raw data into a "deep learning" system and various convoluted mechanisms and just Trust The Magic Machine to work it all out for you.

THAT is how you get grant money!

that one in the corner Silver badge

Welcome back to the Typing Pool

> playing fake keystroke sounds to mask the real ones

Why bother with fake when we can just bring back a faithful institution?

But how will you get your text to the Pool without being intercepted? If you dictate then that can obviously be recorded just as easily! The answer is obvious: just go back to that other old institution, perching the secretary[1] on one's knee and whispering into the shell-like.

Side benefits include getting "important" people to give up their PCs for the prestige of being seen to be dealing with Secure Data: the savings in IT costs from removing those over-specced and under-used machines from service, along with all the strange problems they seem to have (see many an On-Call), will pay for the Pool.

[1] note: no sexism here; secretaries can be any gender these days, as can the owner of the knee. We're being selective about which old institutions are revivable in the name of secure data.

RIP Bram Moolenaar: Coding world mourns Vim creator

that one in the corner Silver badge

Thanks for Vim on a Fish Disk

My A1000 was often running Vim, from the first time I spotted it on one of Fred's disks: great to be able to continue using my carefully reinforced Vi cheat-sheet from the days of the Uni lab machines - and then it did even more Good Stuff!

Nowadays, of course, Vim is on my random set of Linux machines and on the Windows PATH as well.

I won't attempt to claim that it is my most-used editor or my immediate go-to for day-to-day editing (it could well have been, if not for...), but it is certainly the one that has been in use across the greatest number of years, reliably present and working when in a tight spot.

He has left a lasting legacy, both for us privileged computer users and for the children in Uganda. Long may his legacy last.

R.I.P.

Tesla hackers turn to voltage glitching to unlock paywalled features

that one in the corner Silver badge

Re: soft locks on optional, but installed, features

More likely to be chain or line printers than dot matrix - the big beasts that you actually got IBM techs in to service.

Was also told, back in Uni, that IBM played the "move the rubber band" trick for their big mechanical calculators (not the desktop ones!) and before then the tabulators.

Allegedly, the service techs would get rid of the customer's techie by asking for a cuppa after theatrically taking off panels (and probably lots of sucking of teeth: "you been doing a lot of division then, guv?").

Techie's quick cure for a curious conflict caused a huge headache

that one in the corner Silver badge

Re: "Ever done a little thing that made a big mess"

Two quick downvotes on tales that admit causing problems with extra routers and DHCP.

I wonder if there is someone here with a bad conscience about doing the same thing but never admitting it; now they're worried that you'll give the game away by describing the symptoms, making a colleague think "Hey, that sounds just like what happened last month when Joe plugged in that router, which he swore he knew how to configure".

Playing instruments, musical talent? Psh, this is the 2020s – Meta has models for that now

that one in the corner Silver badge

Re: Just... No!

> I don't envisage AI ever getting anywhere near such an emotional performance.

Not from the drinkers, but you'll find your phone batteries are completely drained by the performance.

Oh, those poor, sad, zeroes, left alone on the streets while we stay safely backed up in The Cloud.

that one in the corner Silver badge

Re: AI Sh!t

> If what you produce makes it easier to find cancer tumours, or improves a production process or makes a self-driving car safer then kudos to you, but we don't seem to hear much about those.

Well, no, we wouldn't hear much about those (except perhaps the safer self-driving, if that ever comes to fruition).

Aside from the old "if it works, it isn't AI anymore" aspect, which accounts for all the useful spinoffs quietly working away as "just normal stuff we use to get the job done", none of the rest are things that you would hear being shouted about in the general media (or even on a tech site, to be honest).

Why?

Mainly, because they work and don't need to be throwing tantrums in order to get some attention or any money from a sugar daddy flashing the cash and hoping to impress his pals with the stunning platinum blonde[1] AI hanging on his arm.

The production process was improved by 9% and costs will be recouped in 18 months? There was a notice in the relevant glossy announcing a price freeze to customers (whilst competitors' prices went up), using the savings to improve retention: a staid business, nobody trusts flash adverts about "revolutionary new methods".

Helping find tumours? Well, there was a report in July just gone: New AI tool can help treat brain tumors more quickly and accurately, study finds but that is too boring for any more shouting: medical stuff like this needs years of testing and approvals, and in the end it will "only" improve outcomes, it won't raise Lazarus. No immediate monetary gains? No immediate screaming from the rooftops by marketing.

[1] pssst: platinum blonde my backside! Rumour has it that model was trained using software intended for a different field. They reformatted the data to trick the program into reading it: the AI equivalent of dyeing your hair.

that one in the corner Silver badge

Re: Says who?

Have to admit, I am now intrigued.

FIA, this particular "thing" apparently exists already, so can you point us towards the reported customer need for an "AI" that can make a noise like a siren or some rather dreary music?

Preferably, a massive pent up demand which has been filling the relevant trade papers for months, if not years, with calls to replace existing foley artists, sound effects samples - and all those annoying session musicians. One that justifies the costs of creating and running a neural net on 20,000 hours of samples (presumably at a 44.1kHz sample rate).

And not just "me too" calls for the wonders of AI, this Miracle Of The Modern Age, to be applied to this industry sector just because all the other sectors are getting to play, we want to as well.

that one in the corner Silver badge

Re: It doesn't kown about Thelonious Monk

> More experimentation would be needed to check it does not perform ChatGPT-levels of plagiarism.

Ah, there is the clever step in this particular approach (not that I approve of what I'm about to describe; let's swap "crafty" for "clever").

By training the model on audio samples, they can (and reportedly have) restricted themselves to material they own the copyright to. When they recorded the music, for which they (may) have had to pay performance rights for modern pieces, they get reproduction rights over that recording.

If the model starts spitting out untransformed audio waveforms then those are simply reproductions of their own recordings. There is no performance going on, just pseudo-random reproduction of materials that they own the rights to reproduce. No plagiarism is possible!

Now, if you want to argue against that interpretation, then you are going to weaken the argument against language-oriented models (i.e. LLMs and the ongoing copyright suits).

So, crafty: they'll have us either coming or going. The sods.

that one in the corner Silver badge

> I wonder if anyone has tried training one of these things on sheet music rather than audio samples? There are centuries of that, mostly out of copyright, that could be used for training.

I believe they have (although don't press me for a citation right now): there is a long history of analysing music properly in maths and computer science (including all shades of AI). Given the stochastic performance of these neural net models you could probably get away with including the Musikalisches Würfelspiel dice-throwing parlour game from the 18th century (using the very best AI tech they had to hand).

Although music is a "serious business", especially if you use old pieces (classical, baroque et al), and the "pile it in with a shovel" approach has been fine for a student project (like the singing dog, we are not so much impressed by the quality but...), serious people take a serious approach and like to hand-craft their rules or, at least, have a result that can explain to people how the piece works (which, of course, neural nets generally can not). That wasn't meant to be (too) snide, by the way: I thoroughly approve of musical analysis, without the shovel.
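The Würfelspiel itself is trivially re-creatable: two dice per bar, and the total picks a pre-composed measure from a lookup table. A sketch (the measure labels are placeholders, not the historical tables):

```python
import random

# 18th-century dice-game composition: throw two dice per bar and let
# the total (2-12) select a pre-composed measure from a lookup table.
TABLE = {total: [f"measure_{total}_{bar}" for bar in range(16)]
         for total in range(2, 13)}

def compose(bars=16, seed=None):
    rng = random.Random(seed)
    return [TABLE[rng.randint(1, 6) + rng.randint(1, 6)][bar]
            for bar in range(bars)]

print(compose(bars=4, seed=42))
```

All the "generative" behaviour lives in two dice; everything that actually sounds good was hand-composed in advance - which is rather the point.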

that one in the corner Silver badge

> That said, the model weights are not open source. They are shared under a Creative Commons license that specifically forbids commercial use. As we saw with Llama 2, whenever Meta talks about open sourcing stuff, check the fine print.

(Haven't checked reality - the repo - yet, but just going from the above sentence and taking it as gospel)

> check the fine print

You've just said that all the software tools are under the MIT licence. Complaining that the weights are not similarly licenced is like praising an oddball programming language's compiler for being open source, then damning it because they haven't applied the same licence to all of the programs written in that language!

Although that is not a strict analogy, of course, as the weights are less program and more of a data dump. Which at least makes the use of a Creative Commons licence a sensible usage (CC licences are not a good match for program source code but are fine for data dumps, whether that be text, images, any old pile of nadans, your database of top-ten hits since 1972, ...).

There is a difference between being supportive of open source and trying to stoke the flame wars (and I'm aware of the irony that I'm probably doing exactly that).

Japanese supermarket watches you shop so AI can suggest more stuff to buy

that one in the corner Silver badge

In praise of the Old Ways of the Tech-and-related-arts Community

In days gone by the community had a sense of humour and, instead of talking about "punching robots in the face", would have seen this as an opportunity to Hack The Planet (one small step at a time).

Hofu Hackers, now is your Time To Play!

Gather your friends and family to create a steady stream of people through the supermarket, all behaving in the same not-quite-as-expected fashion. Can you convince the AI Assistant to send everyone looking at rubber household gloves over to the olive oil ("Customers found it made the experience smoother")? Does it seem to be judging people by appearance? Let's find out: everyone wear something that looks like the supermarket uniform and see what happens when you all buy one bottle of antiperspirant and nothing else. Is it looking at how people act around the shelves? Have half the group hopping down the cereal aisle and picking up branded corn flakes, the other half skipping and picking up discount own-brand cornflakes.

Ah, for the heady days of the Improv Everywhere group and classics such as Best Buy uniform prank

Blue Origin tells staff to catch next rocket back to their desks

that one in the corner Silver badge

Employees with a named[1] desk

"I name this desk 'HMSO Red Stapler' and wish godspeed to all who toil on her"

A mug of lukewarm canteen tea is majestically (how else?) smashed over the desk's secretarial return, that magnificent structure that marks this as a Design Classic. A hoarse cheer rings out and is echoed across the watching crowd as the chair majestically rolls down the unloading ramp and docks into place.

As dignitaries leave their platform, small children and other urchins gawp up at the long, smooth sides of the modesty panel, until shooed away by gruff yet friendly working men carrying hessian-covered panels. These are carefully arranged around the shiny new desk to form the easily recognisable shape of a cubicle that appears to be a quiet and efficient place to work and yet, by the careful selection of a too-low divider, is neither.

Their jobs now formally over, the men file out of the yard and head back up the steep cobbled streets for home. As the last of them leave, the heavy gates close slowly behind them, shutting with a final ringing clang.

A cold and foggy quiet now falls over the rows of terraces that line the grey streets. Streets that all lead down to the great lumber yards and chipboard pressing sheds that form the heart of the community. Yards that now stand silent, a community that stands silent, silent to ensure they'll not miss the call, the call they all pray to hear yet fear may not come again:

"They're takin' on men, they're takin' on men down Staverton yard again. Hunts and Hawk have opened the gates, for contracts to Staples and Equipu came in and they're takin' on men."

[1] oh, you just meant employees with an assigned desk, not actually a named one!

Baidu builds AI into cars so you can distract the kids with text-to-image tools

that one in the corner Silver badge

Don't mock the assisted drawing

or the car makers will take it away but *still* insist on having "AI enabled" *something* (just for market differentiation, as the quote in the article notes).

At least pictures of flying pandas are safe. Encourage this idea in cars across the world (ok, it'll be pictures of eagles and/or wolves in other countries, or flying teapots in certain other regions-with-obvious-stereotypes).

Otherwise we're going to get "enhanced" wipers (trained in California weather, confuse Manchester for the car wash and refuse to come out of hiding); headlights (that "helpfully blink to warn the car turning across your lane that you are approaching fast, to make sure that they have seen you"); windows (that roll down every time you crawl past the Bradford and Bingley "drive in bank" during rush hour in the snow).

Or worse, some maniac is going to connect the AI to the steering!

We're in the OWASP-makes-list-of-security-bug-types phase with LLM chatbots

that one in the corner Silver badge

The following Training Data Poisoning scenario is proposed

> "A malicious actor, or a competitor brand intentionally creates inaccurate or malicious documents which are targeted at a model’s training data. The victim model trains using falsified information which is reflected in outputs of generative AI prompts to its consumers."

Which neatly puts all the blame onto that mean old competitor brand - when *all* the blame would lie on the shoulders of the idiots who just sucked up every random bit of garbage they could find to use in their training set.

In comparison, suppose we heard that the FBI and CIA announce that, acting upon information they received from a bound manuscript found lying on a park bench, they are creating a major joint taskforce to hunt down a "Mr Scaramanga"; this individual is described as an internationally wanted assassin who is believed to use a custom weapon assembled from a gold pen and a gold cigarette lighter. Would we blame Ian Fleming for deliberately misleading the Forces of Law and Order or should the finger be pointed at whoever picked up a discarded paperback and dropped it into the case files?

Addendum: the PDF does mention "training the model on unverified data" *but* that is treated as a separate example from the situation above.

IBM to build biometrics system for UK cops and immigration services

that one in the corner Silver badge

A service interface used by external subsystems

Will that interface be a commonly understood API, say a connection to an SQL database that will allow flexibility to the consumers, allowing them to quickly respond to new circumstances?

Or will it be a strict set of queries, one per paragraph of the requirements spec, each with its own unique style of signature (this one uses JSON, this one XML, this one is in binary and only half the fields are in network byte order; this one - no idea, but that is not a problem because you have a full set of examples in portable COBOL)? Here is our phone number in case you want to buy some support time, some more examples in another language or, for the really brave, a new query added to the set.
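The difference matters because the SQL option lets a consumer answer questions nobody wrote a requirements paragraph for - a minimal sketch with an invented schema:

```python
import sqlite3

# The "commonly understood API" option: hand consumers plain SQL and
# they can form new queries as circumstances change.
# Schema and rows invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (id INTEGER, category TEXT, added TEXT)")
db.executemany("INSERT INTO records VALUES (?, ?, ?)",
               [(1, "fingerprint", "2023-01-05"),
                (2, "face", "2023-02-11"),
                (3, "fingerprint", "2023-03-19")])

# A question nobody anticipated at design time, answered without a
# phone call to the vendor:
rows = db.execute(
    "SELECT category, COUNT(*) FROM records GROUP BY category"
).fetchall()
print(dict(rows))
```

With the bespoke-signature alternative, that ad-hoc `GROUP BY` would be "a new query added to the set" - at support-contract rates.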

After fears that Europe's space scope was toast, its first images look mighty fine

that one in the corner Silver badge

Sunlight streaming into the spacecraft through a tiny gap.

ESA remembered to send the heavy wooden tripod, got everything nicely focussed on the ground-glass screen and removed the dark slide, but the heavy black cloth had failed to unfurl properly.

Luckily the older galaxies have stopped the young ones in the front row from fidgeting so they can take the shot again.

If we could just have the First XV cluster together now, very nice: hold it, hold it...

Voyager 2 found! Deep Space Network hears it chattering in space

that one in the corner Silver badge

October 16th: Data slowly starts arriving again.

October 17th: Jackie, the junior researcher, who nobody ever listens to, notices anomalies from the magnetometer.

October 28th: Europa occluded. Jackie knocks excitedly on door to gamma ray observatory

November 20th: Xeelee Nightfighter, wings furled, settles into Lunar orbit. Pilot apologises for bumping into Voyager, supplies NASA with insurance details

November 22nd: first attempts to sue Xeelee pilot for whiplash arrive at JPL.

December 9th: Xeelee insurance company responds to the claims.

Date unknown: remnants of humanity stare up at gaps in the dark, dark clouds, thinning at last, as the first stars are seen to shine through once more.

Fed-up Torvalds suggests disabling AMD’s 'stupid' performance-killing fTPM RNG

that one in the corner Silver badge

Re: Generally....

What "duff kit"?

Do you have any evidence that the fTPM is not correctly functioning as a TPM?

Did you understand that the 'f' in 'fTPM' stands for firmware? I.e. this is a bit of code (in the BIOS AFAIK) that provides, basically, a cheap'n'cheerful implementation of TPM.

Ryzen motherboards can (and mine certainly does)[1] provide a connector for a hardware TPM - but even that is unlikely to be a speed demon (although, as a separate piece of hardware, it will run in parallel with the CPU, so its slowness won't block everything and cause this stuttering - probably). Still, calling into a TPM more frequently than you really need to seems like it'd slow down your kernel's RNG.

[1] had to check the manual, I know I haven't bothered to buy a h/w TPM.
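A toy sketch of why this matters (the 10 ms delay is an invented figure standing in for a slow firmware TPM read, not a measured fTPM latency):

```python
import os
import time

def fast_rng(n):
    """Stand-in for the kernel's in-memory CSPRNG: effectively instant."""
    return os.urandom(n)

def slow_ftpm_rng(n):
    """Stand-in for a firmware TPM read: each call stalls the caller."""
    time.sleep(0.01)  # ~10 ms per call, purely illustrative
    return os.urandom(n)

def drain(rng, calls=20):
    """Time how long servicing `calls` requests for 16 bytes takes."""
    start = time.perf_counter()
    for _ in range(calls):
        rng(16)
    return time.perf_counter() - start

t_fast = drain(fast_rng)
t_slow = drain(slow_ftpm_rng)
print(f"fast RNG: {t_fast:.4f}s for 20 calls; 'fTPM' RNG: {t_slow:.4f}s")
```

If something in the kernel polls the slow source on a hot path, every caller queues up behind those stalls - hence the stuttering, and hence Linus's irritation.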

that one in the corner Silver badge

As far as I can tell, baseline Linux has never claimed to be FIPS-140 capable: Linus removing a performance hog won't affect that.

For anyone who *does* need to certify themselves against FIPS, they can get hold of a replacement RNG and either use it to replace the kernel routine or just use it within their compliant code.

Or buy the required mods & support from a third-party.

that one in the corner Silver badge

> so i am not sure why someone would access the random number generator continually.

There is a long history of squabbling over the RNG in Linux (and quite probably in other places) about what "must" be included in the RNG in order to "obviously make it better". This is not the first time that something has been added at one time and then pulled out later.

Without checking the commits (which anyone can do) we don't know if the too-high polling was in there at the start of the fTPM support, or if someone just saw a call that could read the value, without understanding it would be slow, and whacked it into (what we now know to be) the wrong place.

I don't believe that Linus has ever claimed to be a mathematician able to judge those issues in the RNG; he (and his lieutenants) check the code for programming errors and have to accept the originators' claims of usefulness. Then real life usage shows up problems that *do* fall into Linus's wheelhouse, and he speaks out...

Astronomers testing next-gen asteroid-hunting algorithm discover potentially hazardous object

that one in the corner Silver badge

Simonyi Survey Telescope

Good to know Charles will be remembered for something more than Hungarian Notation.

China bans export of drones some countries have already banned anyway

that one in the corner Silver badge

Yet still the complaints will come

> Beijing's ban may therefore already have been adopted voluntarily by some of the biggest potential buyers of Chinese autonomous aircraft

That won't stop the blowhards from demanding retaliation against these new restrictions.

What would sustainable security even look like?

that one in the corner Silver badge

Re: What would sustainable security look like?

If you try to break in it'll grass you up.

that one in the corner Silver badge

To start, I admit I've done access control in the past that I now know wasn't great (and am glad that those installs don't get connected to the Internet).

Nowadays, I'd quite like (it isn't necessary, but would be nice) to access some of my LAN boxes from the Internet - and I've got loads of material saying how to set it all up, thanks, but don't trust it enough to take the risk (for the perceived gains).

> You need to address expertise

Gaining expertise, particularly as an autodidact[1], in online security seems strangely hard, compared to pretty much any other area of ops (for a small LAN) and/or programming.

First is a lack of confidence in proving that the methods being proposed actually work, and against what: basically, you have to be able to demonstrate that you can break into the "unprotected but otherwise correctly set up and working" system first, then show that the added protection fixes the issue. I.e. turn a claim (or even just a vague worry) about a security issue into a testable issue in the bug tracker. To do that, I first need to be able to break into the online system like a real Bad Guy - and for some reason the books[2] on securing your Apache server have recipes for setting permissions but none for smashing down the door in the first place!

Second, to be blunt, is a dismissive tone about the subject in forums where you'd hope to see better. Even in Register forums, there tends to be many replies that basically boil down to "well, I do better than that"[3] and no pointers to practical sources of learning. Compare that to other subjects (h/w and s/w) where you can often get useable tips and tricks.

To be frank, the end result is that I have very little confidence in any of the "how to do online securely" claims :-(

[1] I'm not in a position to just be put onto an expensive course at company expense

[2] tutorials etc; unless you have some references to better materials.

[3] comments on a Reg story last week (URL) even had someone else pointing out this attitude

Florida man accused of hoarding America's secrets faces fresh charges

that one in the corner Silver badge

Re: This is not a joke. This is not a drill. This is the messiah for a whole bunch of idiots.

> "canis est in cucina"

That would be dog-latin, yes?

LLMs appear to reason by analogy, a cornerstone of human thinking

that one in the corner Silver badge

Re: Haha tricked ChatGPT yet again

And for all of those solutions, the way the OP phrased the puzzle, the final mile west has the person walking in a very, very tight circle around the North Pole. In fact, spinning on one toe, the other toe pushing until it has completed the mile.

In which case, the bear can be any damn colour it wants to be, just stop the world spinning! Heave, yuuurrrrk: oh, look, the bear is multicoloured, with chunks of carrot.

that one in the corner Silver badge

Wrong methodology entirely for examining how some software works

LLMs are amenable to direct analysis of their internals, just on the basis that they are software (even though most of their bulk is, to us, an undifferentiated mass of numbers).

However, it would be very costly to perform such an analysis and be able to then predict what paths the next run will go down[1], especially as the LLMs are knowingly built without anything akin to helpful debugging aids (the AI-speak for this is that the models have no explanatory power). Some of that analysis has been undertaken but that is fragile and very limited (it doesn't trace any of the dynamic behaviour of the model).

So, instead of actually *looking* at what the LLM does (or simply admitting that they don't have access to its internals) they are in great danger of doing exactly what we hate users doing when they try to describe how any other program works: anthropomorphising it and ascribing it some complex behaviour.

An LLM *may* well have something inside it that has discovered a pattern in its inputs that matches what we could describe as reasoning by analogy. But we ought to be examining these things as software, including building them with the intent of formally analysing their behaviour.[1, again]

Aside from anything else, if the training process *has* set up as useful an ability as analogising, wouldn't it be jolly useful to be able to re-use and refine that for a new model, rather than just keeping fingers crossed that the training manages to recreate it?

[1] after taking control of the random number sequence

'Weird numerological coincidence' found during work on Linux kernel 6.5

that one in the corner Silver badge

Re: The what?

One might think that I'd been waiting for years to have an opportunity to use that last line, but that would just be silly, wouldn't it.

Now, is there a Reg story that refers to a radish and a bloater fish? There is a tweaked line from Abba just begging to be used...

that one in the corner Silver badge

Re: The what?

The responses from Twitter were all gathered into a suitable container and are now used to predict the future of X.com (so far, with startling results).

One senior editor did ask if they were sure that it was safe to do this, as the guzunder is getting seriously full, but was told, as the song has it, "it's my potty and I'll scry if I want to".

What does Twitter's new logo really represent?

that one in the corner Silver badge

Re: Let's hope it stops the hate comments

Well, to be fair, you did forget to mention the story in TV21 where they modified TB2 to go into space as well. IIRC they also (of course) packed Pod 4 and took a modified TB4 to do the actual rescue in an icy "sea" (mind is telling me it was Titan, but given the existence of Rock Snakes on Mars it may have been an ocean of Venus).

Not sure if that story counts as canon, but maybe we have a *very* avid reader here.

Aliens crash landed on Earth – and Uncle Sam is covering it up, this guy tells Congress

that one in the corner Silver badge

Re: Absence of evidence is not evidence of absence

Well, if we take the 8 light minutes (quoted a few messages back) from the Sun to the disc, add in the flat Earth claim that the Sun is somewhere from 30 to 37 miles above the surface, I believe that gives us a speed of light around 215 miles per hour.

If the Sun also takes a mere 24 hours to circle the Equator then, um, something about the light not reaching the other side of the disc until the Sun has itself already got there (it might have gone around 1 1/2 times, the maths gets a little - odd).

If the light from "night" reaches the other side when the Sun is also there, it must be "day" light after all, not "night" light - so the nights *are* dark because the Sun isn't there and neither is the light.

Or something like that.
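For anyone who wants to check the silliness themselves, the arithmetic above can be sketched out (using the post's own figures of 8 light-minutes and a 30-37 mile altitude; the original "around 215 mph" was evidently a rough number in the same ballpark):

```python
# Implied "flat Earth" light speed: light takes 8 minutes to arrive,
# yet the Sun is supposedly only 30-37 miles above the disc.
travel_minutes = 8

for altitude_miles in (30, 37):
    mph = altitude_miles / travel_minutes * 60  # miles per hour
    print(f"Sun at {altitude_miles} mi -> light speed ~{mph:.0f} mph")
```

Which puts the flat-Earth speed of light somewhere between a motorway cruise and a light aircraft - and the Sun, circling the Equator in 24 hours, comfortably outrunning its own light.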