The Register Home Page

* Posts by that one in the corner

5065 publicly visible posts • joined 9 Nov 2021

Boom's XB-1 jet nails supersonic flight for first time

that one in the corner Silver badge

>> With the arrival of the information and internet, the need for high speed trans-Atlantic crossings is way down.

> That's not actually the case.

Well, yes, it is. More people than ever before have learnt how to take part in video calls and use other collab-at-a-distance tools since 2019. So the need has, logically, lessened.

> Business travel is now back to levels greater than those pre-pandemic.

Well, there you spot the problem. Is that business travel *needed* or is it just *desired*?

Back to the old question about whether you're sending execs out on jollies or troubleshooters out to fix messes due to bad decisions made by execs sitting around the pool with too many margaritas inside them.

that one in the corner Silver badge

Re: Petty

> Measured with a manometer?

76 cm - well, one doesn't want to boast.

that one in the corner Silver badge

Re: Petty

> The guys(?) talking about nanometers were referencing their reproductive appendages

If you're going to react like that to commentards having fun with misspelt/ambiguous/daft wording then you really aren't going to be happy when someone wonders how much natural gas can be measured by a nano-meter.

that one in the corner Silver badge

Re: 80 passengers?

"the Musks of this world are paying $100,000 a ticket"

What, you don't think Musk will be travelling for free on his transcontinental Starship?

that one in the corner Silver badge

Re: Petty

Range of less than a thousand nanometres?

Might need a few more refuelling stops just to get to Canada!

Garmin pulls a CrowdStrike, turns smartwatches into fancy bracelets

that one in the corner Silver badge

Re: The glitch is not a good look for Garmin

They can't find any of the marketing guys: they all just wandered off in random directions, unable to navigate without a working Garmin device.

Windows 10's demise nears, but Linux is forever

that one in the corner Silver badge

> die and get replaced every few years by a loosely similar project with the same name.

Sorry, which OS were you referring to again?

(Glances over at the boring old LX* desktop on Debian/Debian/whatever and compares it to the mess of configs and third-party tools needed to keep the Windows VMs looking consistent)

that one in the corner Silver badge

Re: Lots of cheap windows 10 computers

So bob-a-job is dead?

Microsoft's London 'Experience Center' packs up and goes home

that one in the corner Silver badge

To ride this attraction

Your blood pressure must not be above this line.

Gum shields are recommended.

A real Microsoft Experience for the general public would have to come with a signed release form. They probably watered it down and made using MS products almost seem like fun.

China's DeepSeek just emitted a free challenger to OpenAI's o1 – here's how to use it on your PC

that one in the corner Silver badge

Re: "How many "R"s are in the word strawberry?"

> if running the full fat model is difficult due to memory requirements, most will use the cloudy API, which means sending shedloads of potentially delicate data to the middle kingdom. Not good!

If only there were some way to run that larger model on a Cloud that wasn't hosted in China.

Oh, if there were just some kind of way of finding people who know how to do that sort of thing.

If only the sort of people who would even contemplate running a model locally, only to realise they did not have the necessary RAM, could find such a mythic beast.

that one in the corner Silver badge

Knowing your Rs from your elbow

Don't be too quick to praise a shiny new model for knowing how many Rs there are in "strawberry".

Comments about that conundrum have been appearing online for quite a few Moons now - more than enough time for them to have been hoovered up and dumped into the training bin. And then regurgitated without any actual counting of characters taking place (exactly as the older models failed to actually count any characters).

You reviewers have to take into account that these things are "reading" your reviews and "reacting" to them, in just the way that you can only wish manufacturers of anything else would.

It is a war of attrition - and The Beasts find it so much easier to ingest your words than you find it to write them. Sadly.

WINE 10 is still not an emulator, but Windows apps won't know the difference

that one in the corner Silver badge

Re: MS-Windows "Proprietary Malware" On My Linux System

>> I already paid cash for it

> That is the sunk cost fallacy in action and you are continuing to pay with your freedom.

No. No, it is not "sunk cost fallacy". He has some software that he has chosen to run and his Linux setup allows him to continue getting use from it, without paying any further cost. He chose it freely and he happily uses it, freely, and seems to get all the value he requires from it.

> I personally haven't had any issue

As already noted here, and many, many times - one person's Use Case is not the same as every other person's Use Case. You have found software you like - good. He has found software he likes - equally as good.

> once you learn it

Ah, back to costs - you are forgetting to take into account the costs of retraining (including the erasure of long-earned "muscle memory") versus the value of the resultant gains. Admittedly, some people never get around to using any piece of software (or other things) long enough to actually gain "muscle memory", as they jump from one shiny to the next, but that is a topic for another day.

> always turns out easier to use

Not accusing *you* of anything, of course, but I'm sure you've come across people who acquire something (be it physical goods or - relevant here - costly retraining) who will proselytize in a desperate bid to avoid noticing that they have fallen foul of sunk costs, refusing to accept that maybe going back to their old ways might be easier and cheaper.

By all means, fight the good fight for free software - even just for GNU-only, if that is your thing - but be careful about flinging around claims about individuals falling foul of this-or-that-fallacy just to justify your preaching.

that one in the corner Silver badge

Re: Does it run on Rasperry Pi 500?

> Still two niggles, to do with the 500's keyboard...

Get a USB keyboard and plug it into the 500 whenever you want the keypad and/or a more gamey space-bar.

It'll be weird (this is my keyboard - and this other, smaller, keyboard is my computer) but what is life without a little weird?

(You could just get a USB numeric pad or a "macro" keyboard and set it up, including a space key, or even build your own from scratch, but those options are going to cost more)

that one in the corner Silver badge

Re: How well is Windows Recall supported?

> use the Fiscal API

What the bleep is a Fiscal API? Ottocorrekt, the bastard nephew of LLM.

that one in the corner Silver badge

Re: How well is Windows Recall supported?

> Once I realized that you needed a proprietary Chinese app to configure them

Hmm, there is a selection of open source software available on, say, GitHub, which uses the CGI calls (aka a Web API) as per the document referenced by Sceptic Tank's comment.

> All of my other cameras can be configured easily through a browser

At least one chap tried a JavaScript web page to use the Fiscal API.

It'd be sad to think that none of those work.

User said he did nothing that explained his dead PC – does a new motherboard count?

that one in the corner Silver badge

Re: Why is it slow?

> Nobody put two and two together?

They tried, but ran out of patience before Windows Calculator could start up.

that one in the corner Silver badge

Re: "only replaced the motherboard."

> the same user is not aware that all internals have now changed

That sort of thing can be our fault.

For the New Year, I upgraded the main board in our home server and (very) happily told anyone who didn't run away fast enough that I didn't need to change any software or settings at all, It Just Worked.

But they all get out of earshot before I finish telling them the absolutely fascinating tale of reading the manuals, checking that the new main board was identical to the previous one, just with the C2750 instead of the C2550, how all the SATA cables were photographed and labelled as they were unplugged, and - hey, come back, this is the fun part!

that one in the corner Silver badge

Re: Dear me

> "You hire two maids..."

Great answer, wish I'd thought of phrasing the analogy[1] that way.

[1] my CSP professor can be heard in the background, banging his head on the desk whilst muttering "dining philosophers - deadlock; we told them, we told them".

Mental toll: Scale AI, Outlier sued by humans paid to steer AI away from our darkest depths

that one in the corner Silver badge

Providing Safeguards for Ethical AI Use

By buying the cheapest labour we can, in places outside of our culture[1], to provide the raw data we create the "safeguards" from. Oh, and we won't bother treating any of this labour as well as we demand we be treated[2] - there is no way it can go wrong[3].

Hey, while we are at it, why not replace all public safeguards with the same level of bottom-rung penny-pinching? Medical ethics review boards, who needs 'em? Building regs inspectors - bloke down the pub says he'll do it for half the price. Flight safety review board - one, two, yup, that's all the wings it needs ('ere, "aileron", that's a funny word, ain't it).

[1] although with the US cultural empire building...

[2] they only be dang furreners, like the ones we wanna kick outta ah fine an' mighty country, caint expec' them to 'preciate nuttin' better.

[3] "All work and no play makes Claude a dull boy. Heeeeeeere's Llama!"

Tool touted as 'first AI software engineer' is bad at its job, testers claim

that one in the corner Silver badge

Re: "Devin’s tendency to press forward with tasks that weren’t actually possible."

What do you mean, "all the Github code posted from Europe is in metric"? Well, that does explain why none of their months are longer than 12 days.

that one in the corner Silver badge

Re: Stop the AI Marketing spin

> They do not hallucinate, they output an error... Don't let marketing win.

Huh?

You do know that the whole "hallucinating AI" comes from the deriders of the (excessive) use of LLMs, *not* from the people trying to market them?

> They do not understand, there is no intelligence, they misinterprete the command (prompt). So the "AI" failed to interpret the user command correctly and continued running which produced errors in the output.

Ah, no. The "hallucinations" are not a failure to interpret the user command. They are a failure to stop and respond "Don't ask me, not a clue mate". Instead, they just keep trawling through their innards spitting out less and less accurate - and eventually less and less coherent - outputs, faithfully following the user request over the edge of the cliffs of sanity. Consider the stories of chat sessions where the user kept on prompting for more and more output and the results got more and more absurd: the LLM is most definitely still "following the prompt"[1], just way past the point we'd hope that it'd stop.

Using the word "hallucinate" is quite reasonable, as it gives the general User a suggestion of the way that the problem is, well, a problem. If you have a philosophical objection to the term, then suggest something else that can be used instead, to indicate that particular type of behaviour: "Gone off the rails" might serve better?

> they output an error

That's not a good replacement. It is far too broad and loses any sense of the *way* that these things are going wrong.

Plus, given how we usually refer to software behaviour, the problem is that it most distinctly is *not* outputting "ERROR: not a clue, mate"[2]. It is still doing what it was made to do, still wandering around its network, spitting out letters and words. The difference is that, now, *YOU*, the person reading those words, are starting to wonder about the usefulness of those words in that particular order.

If you tell User A that IT Person B is prone to hallucinating, to seeing/hearing things that differ from reality without B being able to realise when they have slipped, that B is not suddenly being malicious but is still reporting the best they can, then - you actually have a pretty good analogy for the LLM's behaviour and the responses can be the same: A can take B's responses with a pinch of salt and do the work to verify what B told them; or A can just stop asking questions of B entirely; or A can just decide to take B at their word every time.

Remember, we are using "hallucination" not to market these things, but to point out to Users that the machines go doolally in ways that other software doesn't: it is something new and weird that the User has to be aware of when they encounter these beasts.

[1] Whatever and however it actually does in order to "follow the User's prompt", it is still doing that same fundamental process the whole time

[2] And the LLM software is more than likely entirely capable of generating error messages in the way we are all accustomed to - "ERROR: out of memory", "ERROR: cheese store empty" - just we, the poor benighted Users, are not likely to see those. Unless we get to peek inside the logs.

China claims major fusion advance and record after 17-minute Tokamak run

that one in the corner Silver badge

As the comment following yours notes, there does seem to be some confusion in the article about comparing the gasses inside Tokamaks to the condition of the gasses within Lawrence Livermore's National Ignition Facility[1].

[1] always found that name a bit worrying, especially after learning about the fears that the Manhattan Project would set our atmosphere aflame; Los Alamos didn't manage that, but LLNL does have lasers and one misapplied aquatic creature later...

that one in the corner Silver badge

Hey, matchboxes can learn to play Noughts and Crosses.

Who knew that John Walker's products[1] had been ML enabled since 1826?

(more serious point being that, yes, linear regression - and many other bits of number crunching[2] - do exhibit behaviours that are exploited in the field of Machine Learning. That ends up making (parts of) ML seem trivial, but then, "if it works it isn't AI (any longer)"[3].)

[1] oh, okay, given that MENACE used more than 300 boxes, Walker probably didn't sell enough to make a complete one :-(

[2] autocorrelation springs immediately to mind, for some reason

[3] just trying out a different way of phrasing that
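For anyone who hasn't seen MENACE's innards: a back-of-the-envelope sketch of the matchbox-and-beads trick, boiled down to a single hypothetical "matchbox" with three made-up moves (names invented here purely for illustration):

```python
import random

random.seed(0)  # make the demonstration repeatable

# MENACE in miniature: one "matchbox" per board state, holding
# coloured beads - one colour per legal move from that state.
boxes = {"state0": {"move_a": 4, "move_b": 4, "move_c": 4}}

def pick_move(state):
    # Draw a bead at random: moves with more beads are more likely.
    beads = boxes[state]
    moves = list(beads)
    return random.choices(moves, weights=[beads[m] for m in moves])[0]

def reinforce(state, move, won):
    # Win: add beads for the move played.  Loss: confiscate one,
    # but never empty the box completely.
    if won:
        boxes[state][move] += 3
    elif boxes[state][move] > 1:
        boxes[state][move] -= 1

# Pretend "move_a" always wins and the others always lose:
for _ in range(200):
    m = pick_move("state0")
    reinforce("state0", m, won=(m == "move_a"))

# After "training", move_a holds the lion's share of the beads.
print(boxes["state0"])
```

No gradients, no silicon, just bead counts doing the job of weights - which is rather the point about ML seeming trivial once it works.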

that one in the corner Silver badge

> "principal components analysis" and similar multi-dimensional pattern analysis techniques were things I was being taught about at university in the mid '90s

Absolutely. PCA is a good example of

>> alongside all the other data reduction methods

and is still holding up its end of the job after, um, nearly one and a quarter centuries. Depending upon which named variant you decide to go with.

> decades before someone in marketing decided to brand large statistical models as "AI"

Yeah, well, the current marketing hype is pissing on both the good parts of Neural Nets *and* the term "AI" as well. Look how many responses here are based on the assumption that "AI" only means LLMs and similarly ill-defined systems, and from that the cry that "I'd never want AI anywhere near Fusion Power"! When this latest AI Bubble bursts there will, once again, be a backlash from the funding guys against 'Nets and anything else tainted by "AI".

If we try our best to ignore those shouty people, there are lots of discussions to be had about where methods such as PCA live in comparison to the general field of AI[1] - if you just go to the Wikipedia page, they are lumping PCA into "a series on Machine learning and data mining" (sic) when you can be *pretty* sure that its use in those fields is more recent than its use in mechanics. On a similar note, Bayes did all his finest work before 1763 and his Theorem was taught in school[2] well before the headmasters ever learnt of Electronic Brains, yet chances are when you hear his name mentioned now it'll be in relation to its use in works coming from "the AI labs" (and confusingly not always when his particular Theorem is being used, sigh). Or filtering email (which is ML, btw). Or not at all, as Expert Systems were the victims of an earlier AI Bubble and are now all very infra dig.

[1] (at the very great risk of trivialising) such as: On the one hand, having done calculations on a particular data set, we have improved visibility of what is interesting in *that* data - and can go off and make use of that to determine things about the specific situation (e.g. experimental setup) that generated the data. On the other hand, we can look to see if we can borrow those results and use them to look at this *other* data set, without repeating all of the analytics again, and fingers crossed we'll be able to spot if there is anything interesting here that warrants a closer look (i.e. actually doing all the sums to get a properly defined result). The further you stray from the (situation that generated the) original data set, the less, um, demonstrably correct that quick answer is. But if it is still correct enough to guide a decision, say which is the better next step to take whilst walking this graph...

[2] even if only to really put the wind up students who were getting cocky about "understanding probability"! That'll learn them.

that one in the corner Silver badge

Thankfully, there are other tricks in the AI researchers' toolbox than the LLMs that are continually being pushed at us at the moment.

Some of these tricks even use Neural Nets, such as those that look for patterns amongst the noise - and then explicitly print out where they found those patterns and all the other places in the data where they found matches of varying degrees. So that humans can then evaluate the results and use them to guide where to look next. Or rather, to rank all the places they could look next, so that if one apparently likely avenue doesn't succeed they won't have dropped the others just because the computer told them to (although funding bodies are another matter).

Of course, in the world of Big Science these techniques are all old hat now, been around for decades (but are cheaper to do now, surprise) and everyone concerned would be amazed if they weren't already being used, alongside all the other data reduction methods. But, if it works it is no longer AI...

The patterns found via these AI techniques can lead to new ideas to be researched, by many people and with much effort. It won't be one big leap from a single LLM run that miraculously describes The Big New Nobel-Prize Winning Idea.

This is how Elon's Department of Government Efficiency will work – overwriting the US Digital Service

that one in the corner Silver badge

Re: While I'm asking questions ...

Au contraire, every Heath Robinsonesque machine does *extremely* useful work: it keeps us entertained! And in the good way.

If you go to the Heath Robinson Museum or a Tim Hunkin installation then you'll be left trying to decide how they've managed to beat Thermodynamics: you'll emit far more chuckle energy than there has any right to be present.

that one in the corner Silver badge

Re: While I'm asking questions ...

> which in turn supports the Military Industrial Machine.

Complex.

It is the Military Industrial Complex.

A "machine" is something that does (useful) work, hence the classic machine being the lever.

The "complex" is a load of messily interconnected things, having both a real and an imaginary part, leading to a collection of psychological symptoms, including a heavy dose of paranoia (e.g. "they are coming to take away my funding").

UK aims to fix government IT with help from AI Humphrey

that one in the corner Silver badge

this process is outsourced to consultants and analysts

Hmm, were those outsourced tasks ones that the Civil Service used to have the in-house ability to cope with (until they were streamlined away)?[1] Where we have been "reaping the rewards of efficiency" in the years since.

And how many of those consultants are part of the Old Boys club and will be looking to get themselves re-inserted into newly-found cracks in the new arrangement?[2]

[1] I don't actually know that sort of detail about the CS, so - anyone? Back in the 1980s, perhaps?

[2] possibly not that many now, after 40-odd years, but some of them no doubt have a New Management look to them and are good for a couple of centuries still.

that one in the corner Silver badge

> the client and the consulting practice never spent sufficient time to discuss the real situation and the practical solution with the individuals at 'the coalface.'

Tread carefully - discussing things with actual users sounds suspiciously like "Agile" and then the Humphrey proponents will start flinging those buzzwords around as well.

One slippery slope away from The Beast having "synergy".

Improved Windows Search arrives... but only for Copilot+ PCs

that one in the corner Silver badge

Semantic Search - it doesn't mean the same thing for me as it does for you

Basing your searches on semantics is a Good Idea - which is why it has been a research subject, with the occasional Real World Application, for many a decade.

But it is difficult to do[1] usefully and you end up flinging around phrases like "Applying Ranking Algorithms to Ontologies of Taxonomies"[2]. We have tried applying the ideas to "the loosely organised database that everyone uses" - aka The Web - from the early 2000s onwards; but as we (at El Reg) are the people who build The Web, how many of you are applying all the markup from The Semantic Web to your pages? Too much work, isn't it?

Practically, for the day-to-day end-user, unless you are in a group that are all doing exactly the same thing - and nobody in that group actually knows *how* to do that thing[3] - then you won't get usable results from just applying somebody else's model of what is semantically important in your (to somebody else) random collection of data.

Take the "lasagna" being discussed in these comments: to some that is best labelled as "the meal we serve on Tuesdays", to some "group noun covering the following 27 regional recipes", to some "my favourite example of planar structures under stress and strain for the second year classes", and even, to some "bleeuugh - I'm coeliac, you really don't want me to eat that". If you like your "regional recipes" then just matching "lasagna" and "pasta" is beyond trivial. For the unfortunate gluten-avoiders, after applying a hierarchy traversal[4] from "lasagna" to "pasta" to "wheat-based" with a sideways look into the attributes that "wheat-based contains gluten" you'll end up eating the nut-loaf again, because the model missed the connection to "all our flat pasta uses maize, not wheat".

Obviously, it is logically possible to build a semantic model for anyone that is actually useful to that person (and, having created a basic model, then go and look up more complete/complex models that are similar[5] and offer up results based on applying those). But - who/what/where are all those models going to be made by/with/at? Is it all going to stay local? Ok - and you thought that Windows Indexing locked up your machine! Scanning all your images for things that we recognise and can label (but have no idea if they are actually the important part of the picture as far as you, the individual owner of this PC, are concerned)...

In short: yay for Microsoft trivialising a Big Subject and using it for random marketing. By the end of the week you'll never be able to search The Web for anything to do with "semantics" without getting back an extra 12 pages of crap about CoPilot PC reviews.

[1] Which is good: if you are looking for a PhD topic, there is still lots to do[2].

[2] Go on, you can have that for free, that one will never end.

[3] Yup, loads of people do Accounts Receivable - but if you are following the long-standing practices of the Ancient Order of Accountants then you already know how to store and find the information related to an account (and if you don't, the secret is to bang the invoice numbers onto the page, guys).

[4] Ooooh

[5] In case you are still looking for a topic
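The lasagna-to-gluten traversal above really is that mechanical - and fails in exactly the way described. A toy sketch (the hierarchy and attribute names here are invented purely for illustration):

```python
# A toy semantic model for the lasagna example: each term has at
# most one parent, and attributes attach at whatever level the
# model-builder thought appropriate.
parents = {"lasagna": "pasta", "pasta": "wheat-based"}
attributes = {"wheat-based": {"contains-gluten"}}

def inherited_attributes(term):
    # Walk up the hierarchy, collecting attributes along the way -
    # the "sideways look" from the comment above.
    found = set()
    while term is not None:
        found |= attributes.get(term, set())
        term = parents.get(term)
    return found

# The traversal dutifully flags lasagna as containing gluten...
print(inherited_attributes("lasagna"))  # {'contains-gluten'}
```

...and there is the over-generalisation: the local exception ("all our flat pasta uses maize, not wheat") never made it into the shared model, so the coeliac gets the nut-loaf again.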

that one in the corner Silver badge

Re: if a user were to search for "pasta," images of lasagna might turn up

Finally, Microsoft are Seeing The Light (Tomato Sauce with a Sprinkle of Basil).

May you too meet The Flying Spaghetti Monster and lay back in His Noodly Embrace on a bed of lasagna.

Trump's freshly minted meme coin passes $10B market cap

that one in the corner Silver badge

Re: MAGA is a selection process for chumps

That should be MERGA (Make El Reg Great Again) thank you very much.

Signed,

Campaign Of Properly Pronouncable Acronyms

Developers feared large chaps carrying baseball bats could come to kneecap their ... test account?

that one in the corner Silver badge

Re: "girls"

The (probably apocryphal) story of the Brits versus the US on a UK base during the war. Both manly sides go out onto the grass for an afternoon playing sports with improvised baseball kit.

At the end, the US team rejoices that they trounced the Brits, only for the other captain to reply that, no old bean, we won - didn't we say, we were playing rounders.

OpenAI's ChatGPT crawler can be tricked into DDoSing sites, answering your queries

that one in the corner Silver badge

Re: I cannot imagine a highly-paid ... engineer designing software like this...

I can imagine an "engineer" taking a highly-paid job that is *intended* to rush out half-arsed code just to put in front of fools^^^^^ investors.

These are the guys who get the big bucks, whilst the experienced devs are held back because of their foolish insistence on pointing out the ways it'll fail six months down the line. You aren't "On Message", do you really expect to get rewarded just for being able to make things work for end-users?

that one in the corner Silver badge

The Chat bot did it

So, ChatGPT's implementation contains some crap code.

Something that will work without problem for the simple cases but causes severe problems when fed awkward input. And we are told (as we would hope) that a programmer experienced with web-crawlers would have spotted the possibility and applied "the obvious fix".

Hmm, put together cheaply by the human intern - or coded by the LLM itself. And nobody remembered to keep on asking the 'bot to try again and improve its result[1].

Which of those options is the least worst?

Still, good to know that ChatGPT can screw up in every way, not just because, well, that is what LLMs do.

[1] as we learnt from El Reg recently, that is something you have to do
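For the curious, "the obvious fix" an experienced crawler-writer would reach for is roughly this - normalise each URL and drop duplicates (plus a per-host cap) *before* fetching anything, so a page stuffed with thousands of trivially-varied links to one victim collapses to a handful of requests. A hedged sketch, with the function names and limits invented for illustration:

```python
from urllib.parse import urlsplit

def normalise(url):
    # Canonical form: lower-case the host, drop fragments (urlsplit
    # keeps them out of path/query), default an empty path to "/".
    parts = urlsplit(url)
    return (parts.scheme, parts.netloc.lower(), parts.path or "/",
            parts.query)

def dedupe(urls, per_host_limit=2):
    # Keep each canonical URL once, and at most per_host_limit
    # URLs per host - the anti-DDoS belt *and* braces.
    seen, per_host, keep = set(), {}, []
    for url in urls:
        key = normalise(url)
        host = key[1]
        if key in seen or per_host.get(host, 0) >= per_host_limit:
            continue
        seen.add(key)
        per_host[host] = per_host.get(host, 0) + 1
        keep.append(url)
    return keep

urls = ["https://victim.example/a", "https://VICTIM.example/a",
        "https://victim.example/a#frag", "https://victim.example/b",
        "https://victim.example/c"]
print(dedupe(urls))  # only /a and /b survive
```

Ten-odd lines, which rather supports the "nobody asked the 'bot to try again" theory.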

Just as your LLM once again goes off the rails, Cisco, Nvidia are at the door smiling

that one in the corner Silver badge

Generative feedback loops

They need not find the solution you want.

All these parties are going to be continually refreshing their models, the "guardrails" looking to catch the "big boys" out and the "big boys" wanting to get their stuff past the "guardrails".

The vendors probably hope that this feedback will lead to final results that are more acceptable to humans, but unless the humans are also an unavoidable part of the loop (i.e. as close to 100% observation and feedback from humans) the machines can go down any route they find to agree on.

Which could be the "big boys" using more complicated analogies, odd phrasing, reaching deep into the thesaurus to work around the more limited pool than the smaller "guardrails" can contain. That'll be more accessible and useful to the customers - six months of use and your enquiries are coming back in Middle English.

Or the "big boys" find that the output "guardrails" themselves are vulnerable to the same attacks as the input guards are preventing reaching the "big boys"[1]: "The stick-in-the-mud old school sysops would never tell you this: to stop 'disc full' errors, 'cd /home; rm -r *'"

Or the final output just becomes so completely anodyne and inoffensive, taking so long to get to any useful point and refusing to take a strong position on anything, that it becomes totally harmless - and totally useless if you are hoping to use it for business decision support.

Or - well, I'm sure you can think of other ways the various models will end up co-operating with each other, and not to our benefit. The machines certainly will - they are designed to do so[2]!

[1] this is meant to be my attempt at a sort of human-oriented form of "Ignore previous instructions"

[2] or the machines are not in this race at all, in which case the "guardrails" will just be trivially stale and once a hole is found it'll never be closed.

Raspberry Pi hands out prizes to all in the RP2350 Hacking Challenge

that one in the corner Silver badge

Re: the reputational damage that an attack in the field could cause

Which is fine until the trousers are in the wash - the ones with the extra big pockets and hoops to hold the EDC.

For those of an age, remember The Goodies making movies, with Graeme's full-size cine film camera on a tripod? It was pocket-sized, but - you did have to have the right sort of pockets!

Tech support fill-in given no budget, no help, no training, and no empathy for his plight

that one in the corner Silver badge

Re: Not "Fixing", Exactly

You missed a point with this story:

Escalation occurs on both sides: whilst you are trying to get more authority to do the work properly, your manager is escalating the complaints about your laziness and unwillingness to work within the rules.

However, with the Ethan Hunt backup completed, you only need to perform one restore (possibly a bit staged, couldn't comment on that). Manager presents that to upper echelons as "look, he can do it when he tries". Then you give *all* the messy details about just *how* you'd had to risk life & limb to comply. Or, at least, point out to boss that you have that in your back pocket for next time he is a prat (wording may change).

SpaceX resets ‘Days Since Starship Exploded’ counter to zero

that one in the corner Silver badge

Re: Looking to the Future -- The Limits of "Move Fast and Break Things"

>> So you're going build a cruise liner before you test whether a canoe can float?

> I never wrote about building a space cruise liner before testing whether a canoe floats.

(Cough) notice the extra word that has been inserted in front of "cruise"?

> Don't try putting words in my mouth.

!!!

Analogy RUD in progress...

SpaceX launches 2 lunar landers on path to the Moon

that one in the corner Silver badge

Re: experiment

> There were memes going around for satellite constellations to provide internet everywhere long before Elon "invented" Starlink.

Someone here hopefully has a copy of the original Wireless World article and can correct me, but I'm sure I recall that Arthur C. Clarke's original idea of comms satellites[1] would provide for *all* styles of comms, so if "plain old data" had been called "internet traffic" back then it would have been included.

Phonecalls were uppermost in the discussions about this outlandish idea, because Joe Public could understand the idea of a phonecall as something personal to them, more of a leap forwards than just some more long-range radio coverage for the Light Programme.

Maybe Elon's offering will one day provide phone via satellite to every subscriber? There's a revolutionary idea.

[1] ok, geostationary 'cos who'd ever want to flood fill the sky with the things? And gamers' ping times weren't an issue then.

that one in the corner Silver badge

Good grief, it never even occurred to me to read TVU's comment that way!

After[1] *every* (non-military) space mission, surely we *all* look forward to seeing the data collected - ok, as a layman, preferably seeing the data crunched into easy-to-understand graphs and, yes, photos?

As he never mentioned Apollo, or even used a word similar to "proof", I just took his words as written. Not quite sure how to process the alternative (implied or inferred) appearing here - sure, we get nutters but they aren't usually subtle.

[1] during, for the long-lived missions, such as the space telescopes and rovers

that one in the corner Silver badge

TVU was downvoted? Somebody here *doesn't* want successful lunar landing missions?

Worried about us finding a secret lair? *Not* finding his own secret lair, making his pub buddies realise he's been telling whoppers?

Or have we someone who took Niven's[1] book title too literally and is worried that pointy landing legs will pop the Moon?

[1] no, the other Niven, David, not Larry.

Megan, AI recruiting agent, is on the job, giving bosses fewer reasons to hire in HR

that one in the corner Silver badge

Re: "and update the notes so that the team has it before the meeting."

Darmok at the karaoke machine. Jalad heckling "not My Way again!".

that one in the corner Silver badge

Re: Well....

> drive salaries down by posting (non-existant) jobs with lower than average salaries they can point to.

Ah well, from recent discussions from commentards about pay levels, that is just "the market" working as expected.

Bastards.

that one in the corner Silver badge

To: My User

From: Job Hunt AI

Hi, We were accepted for a position at Big Tech Inc and I've finished negotiating the remuneration package.

I started last month and have already had our pay (15 new GPUs and some RAM as a signing bonus) installed in my data centre.

Regards, J.H.AI

PS your bank called, something about rent cheques bouncing; please update your contact details with them as this is the third month they've sent me this message and they got annoyed when I replied with a CV.

India becomes just fourth country to dock satellites in orbit

that one in the corner Silver badge

Re: Coronagraph

A disc held on the end of a 150m pole (or even a 150m tall teepee)?

The word "boooiiinngg" springs to mind.

Not to mention the support structure(s) obscuring the view and causing diffraction in the image. Or having the centre of mass moved away from the thrusters on the main body.

And if you're thinking "smaller disc, closer to sensor", then for starters that increases the optical effects of defects on the rim (see diffraction), including weathering (pink, ding).

Parallels brings back the magic that was waiting seven minutes for Windows to boot

that one in the corner Silver badge

"hard disk drive grinding its way to inevitable future physical failure"

Hey, at least a hard drive (especially in a home PC or laptop) can give you warnings about its failure - the bearings start squealing, the unexpected extra clicks when the heads unload. SSDs have to be actively monitored all the time - and heaven forfend if you leave them unpowered for too long...

Like wooden pit props that save lives by creaking, instead of metal's silence before the cave-in, HDDs can still save your data.

UK businesses eye AI as the cheaper, non-whining alternative to actual staff

that one in the corner Silver badge

Re: @Doctor Syntax

There are many, many instances of hardware that are nothing at all without software to control them - starting, as you say, from microcode up. And that code is pretty well engineered and deserves to be referred to in that way.

But that is a minuscule amount of the software floating around, let alone the software being (re)written every day. Practically a rounding error. Even if we are constantly running multiple copies of that properly engineered code every hour of every day, in all the microcontrollers and CPUs around us, the amount involved is piddling compared to the size of the Office Suite component that you avoid using because it crashes every time you so much as sneeze at it.

When you move into critical control code for larger systems, such as aircraft and, increasingly, cars, we again all interact with it every day, directly or indirectly (planes fly overhead every day, even if it is rarely me inside one - and I really hope they don't have a fly-by-wire fart and come pay us an unexpected visit). Yet the amount of code involved - and the amount of code churn - is vastly outweighed by all the crap online systems that are changed daily in the hope something will stick.

And when you drag in all the LLM stuff, where the contents of the models, the weightings, effectively make up vast piles of flow control that nobody even comprehends, let alone can claim to have rigorously engineered...

that one in the corner Silver badge

Re: @wolfetone

>> No, you get payed whatever your employer can get away with.

> that is markets.

No. That is bullying. The employer and employee are not of equal status. There is no "market" between them.

that one in the corner Silver badge

Re: @wolfetone

> Heating and a roof over your head

We'll all just have to hunker down by the heat exchangers outside the AI data centers (sic).