The Register Home Page

* Posts by that one in the corner

5065 publicly visible posts • joined 9 Nov 2021

Where's my money?! Now USA Today publisher sues Google over online advertising

that one in the corner Silver badge

Re: Feel the sympathy melt away

Yes, that is what *actually* happened, well done, we all know that.

Did you miss the bit where they *expected* a windfall and they are suing because *that* didn't happen? The quote doesn't say "we just wanted a fair return for all our hard work".

that one in the corner Silver badge

Feel the sympathy melt away

> Internet advertising, Gannett's lawyers argued in their case, should have been a windfall for publishers shifting from print to online

We are supposed to be sad that you didn't get a windfall? So you are suing because you didn't get something for nothing?

Trot on.

Microsoft rethinks death sentence for Windows Mail and Calendar apps

that one in the corner Silver badge

Got that backwards, mate

> If you can't port a UWP app to the native toolkit, it's basically an admission that nobody should ever build native windows apps

Or a clear admission that you should have stuck with the native Windows app and never written the UWP one in the first place.

Nowt wrong with Windows native[1], everything ends up invoking it right at the bottom. Plenty of additional toolkits available if you want extra fluff.

> Not even Microsoft is using their own toolkit.

Way to miss the point! They just want you on their cloudy thing, that is all.

[1] yes, yes, I know: "you are using Windows, that's what's wrong"!

Over 100,000 compromised ChatGPT accounts found for sale on dark web

that one in the corner Silver badge

Re: Is this worse than other products?

> Is there evidence that ChatGPT is worse than other products?

Well, it isn't anywhere near as tasty or nutritious as Ambrosia rice pudding. Nor does it work as well after you add Golden Syrup.

that one in the corner Silver badge

Re: Is this worse than other products?

No.

It would have been clearer if the article had pointed out that the Raccoon info-stealer it mentioned is malware that infects individual PCs and grabs anything it can.

This isn't a report that OpenAI's servers have been breached.

In fact, the references to ChatGPT are only here to manufacture a headline: all the Raccoon-infected PCs probably coughed up a lot more valuable logins than those for ChatGPT but there is nothing newsworthy about leaking bank accounts or GitHub credentials.

Another redesign on the cards for iPhone as EU rules call for removable batteries

that one in the corner Silver badge

> Apple and Google both mandate that developers target current OS versions (released within the last year).

Meanwhile, our favourite poster boy for "Why won't they learn to code!" - aka Microsoft - still allows the running of code built for Win2k (and will until they kill off the Win32 API).

The security fixes to the OS are "under the hood"; the API changes are (almost entirely) around adding new features (which you can ignore if you don't want to bother with them) and those that aren't are given a backwards compatible form.

Hmm, pretty sure Linux can manage this sort of amazing feat as well.

Strange that Google & Apple can't manage backwards compatibility - do you think there could be any ulterior motive? Can't just be because they aren't as clever as Microsoft!

PS

Lest anyone then claim that any Win2k-compatible exe must be so old that it is itself insecure and dangerous, pretty much all the Windows exes I build are still Win2k compatible[1] whilst using decently up to date OSS libs[2] (so not just my weird code).

The only real "trick" is that some IDEs/compilers generate exes that insist on checking for newer OS versions, so they get options tweaked - or just don't bother with those unless it is really necessary.

[1] yes, I still build 32-bit as well as 64: I've got perfectly functional 32 bit hardware chugging away, as well as the Win10 box. The 64-bit builds, curiously enough, usually end up XP compatible. Have to admit to not having fired up the Win98 laptop for a while.

[2] do have to sometimes put in a few #ifdef's or break a C file in two because the author didn't separate out their use of fancy new features I still don't care about from the actually useful bits of their lib, but that is simple enough.

Whose line is it anyway, GitHub? Innovation, not litigation, should answer

that one in the corner Silver badge

Re: Playing by the rules while making things better?

Interesting approach.

> Partitioning may affect the LLVMs ability to generate useful output

> If the target project is GPL, only the GPL LLVM[1] can be used

You could perhaps group these smaller LLMs that are GPL-compatible and try each member of the group in turn to see what gives a useful result. Although that requires discipline on the part of the User (or some automation in the IDE) to ensure the correct licence is applied to the correct lines of code.

Unfortunately, licences that require attribution would end up in one LLM all by themselves.

[1] if only this stuff was coming out of LLVM: those guys do good work you can use without all these arguments!

that one in the corner Silver badge

Re: Playing by the rules while making things better?

Yes. And?

I'm assuming you are positing this as a riposte to my description, but you don't say how or why that disagrees with what I described.

Although what doesn't help is that the quote said "copy code" when it ought to have said "replicate code" - subtle difference, but important.

There are two parts to that replication, which work together to both agree with Tim Davis's statement but also indicate how the LLM structure as it stands still makes it very difficult to generate attributions. What follows is all broad strokes (or we would be here all day) and considers an abstract chunk of code, not Tim's specific code (if for no other reason than that neither of us has chased it down). Also, I have absolutely no idea how much CS you (or anyone else reading this) have, so it is going to be really crude and simplistic and full of holes; apologies, but bear with me.

Part One

The learning process took Tim Davis's code (call it T) and started with basic lexing[1], breaking it into tokens. These are treated, along with all the tokens from the rest of the training data, with processes like[2] "take all the pairs of tokens and count how many times each particular pair occurs; reject all the zeroes, convert the counts to a fraction of the total and you have the probability of such pairs occurring in a sample of your data". You can now run this "in reverse"[3] to generate output text that precisely comports to those statistics but which will be total gibberish.
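That pair-counting step can be sketched in a few lines of Python - a toy illustration only, nothing to do with how Copilot actually tokenises, and the token stream is made up:

```python
from collections import Counter

def pair_probabilities(tokens):
    """Count adjacent token pairs; convert counts to fractions of the total."""
    pairs = Counter(zip(tokens, tokens[1:]))   # zero counts never appear in a Counter
    total = sum(pairs.values())
    return {pair: count / total for pair, count in pairs.items()}

# A made-up token stream standing in for lexed training data.
toks = ["int", "i", "=", "0", ";", "i", "=", "i", "+", "1", ";"]
probs = pair_probabilities(toks)
# ("i", "=") occurs 2 times out of 10 pairs, so probs[("i", "=")] == 0.2
```

Eleven tokens give ten pairs, and the fractions sum to one - which is all "probability of such pairs occurring in a sample of your data" means here.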

If only you could also find wider correlations, such as "every { must be followed by - stuff - and then a }". If you could then the generated output will look more realistic. You may decide to say "those are just the syntax rules of the language, I can put them in explicitly"[5]. But that is starting to get hard, as you really need all the semantic rules, which even the best compilers don't encapsulate: e.g. for a certain commonly occurring nested loop structure for (i=0; i<n; ++i) { for (j=i; j<n; ++j) {...} } you have to match the variables used instead of i and j, *but* not all commonly occurring nested loops start with j=i, some start j=0. The size of the problem is getting way, way out of hand to do manually: you don't have space to store all the tables that *accurately* describe the correlations and you are unlikely to even think of all the possible correlations, let alone how to express them.

So we turn to an algorithm that will (try to) find lots and lots of such correlations but, unlike the above, can't be guaranteed to find them all (it may not even find ones you consider obvious): this learning algorithm is generic, but by prefixing it with our above steps (i.e. reading text, lexing it) it magically becomes an LLM. But all it is doing is building up a massive graph with weightings along each transition which is a sparse version of the infeasibly big "n dimensional table of every correlation, accurately calculated" we tried to create, above.

Hopefully, with enough handwaving, you can see how we get to a model that generates really clever looking output *and* how that matches my earlier description of how hard it is to tag every node in the graph with the relevant attribution.

Part Two

BUT how can this ALSO replicate Tim Davis's code, apparently as a single chunk of licence-breaking text?

If we change our input reader slightly and instead of chunky lexemes we read the text literally bit-by-bit, we can obviously do all the same work and get the same result (a cod-Copilot) but one that generates its output as a bitstream (which needs to be clumped back together as characters/codepoints to be legible to us) and is even less efficient in operation than the first version. The model no more and no less "understands" the bitstream than it did a lexeme stream.

Now go back to the first table of pairs of lexemes, ordered by probability, and consider it now as a table of bit pairs, ordered by probability. As we've reduced the token space to two, we can afford the RAM to extend this from pairs to longer sequences and we can store this as a binary tree, rather than an n-dimensional array, still sorted by probability with the empty token as the root. This structure is now strangely familiar[6] as the basis of Huffman Encoding - i.e. we can see that this data could be used to simply (de)compress all of our input texts (if we fed all the original texts back in, the Huffman coding is now our lexer and if we feed its output into the generation process instead of using the random process, ta-da: lossless compression and decompression).
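For anyone who hasn't met it, the Huffman construction being waved at here is just the textbook greedy merge - sketched below in Python with invented symbol frequencies; nothing model-specific, it only shows why "sorted by probability" data doubles as a compressor:

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman tree from symbol frequencies; return the bit code per symbol."""
    # Heap entries are (weight, tiebreak, node): leaves are symbols,
    # internal nodes are (left, right) pairs. The unique tiebreak integer
    # stops heapq from ever trying to compare two nodes directly.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # repeatedly merge the two
        w2, _, right = heapq.heappop(heap)   # least probable nodes
        heapq.heappush(heap, (w1 + w2, tiebreak, (left, right)))
        tiebreak += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"      # lone-symbol edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"a": 5, "b": 2, "c": 1, "d": 1})
# The most frequent symbol gets the shortest code: "a" -> 1 bit, "c"/"d" -> 3 bits.
```

Every code is a prefix-free path through the tree, which is exactly the "walk it in the correct order to decompress" idea used in the next paragraph.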

Expanding back out (whilst wildly waving hands even more) from a neat and complete binary tree to the sparse graph we had before, we can relate to that graph as a means of compressing the inputs: if we walk it in the *correct* order, we can (almost, sort of, because it is sparse and has gaps) think of it as decompressing the original texts back out again.

It now sounds like I have contradicted myself - if it is decompressing, then it stored the original, just like any other compression system, so it did just copy Tim Davis's code into itself and spit it straight out again! So why can't it admit that and add in the attribution correctly?

Well, the "decompressed" text is a bad copy: there are all those gaps in the graph and the random numbers used each time there is a choice - we aren't following one fixed *correct* order, it will (likely) be different each time. And, just to be really weird, there is also the potential for inefficiencies in the learning process that make it probable that constructs we "see" as being the same are replicated multiple times, possibly with irrelevant (to us, and to a compiler) details: how many ways are there to write a "for" loop in C? What about use of whitespace? "for" or unrolled as a "while"? It is entirely possible that 12 out of 17 next choices from this node may all result in for loops, just not quite the same one, so most rolls of the die do the same thing (as far as we are concerned, reading the output).

How bad a copy? Good question - not helped by not seeing *precisely* what Tim Davis saw [7] and how it compared to his original: was it character perfect or did it look like it had been effectively retyped? How many runs did he try and did they all generate the same thing?

As for the attribution: The same conditions apply as in my previous comment (each node in the graph is derived from all the inputs - precisely the same way that each node in a Huffman tree is derived from all the inputs). And remember all the variations on a loop? They came from different places...

Shoot Self In Foot, Destroy Own Argument

Having said all that, there *is* the possibility that one or more traversals never actually hit a choice point and will always generate identical outputs. And one or more of those *could* have formed from a weirdly unique input text, which has now been memorised and can be said to have been copied verbatim and, as it is the only thing that could be generated once that node has been reached, it could even be attributed.

Unlikely, but not impossible. Um, could even be due to a bug (i.e. the learning process isn't actually doing learning, it *is* doing compression).[8]

Footnotes

[1] intriguingly, so far I've not read if the lexing was adjusted from that used with the natural language models into one that includes the lexing rules we apply to programming languages (i.e. the lexers in all our compilers/interpreters) - citations gratefully accepted.

[2] or totally unlike! Sorry, it gets messy being crude, and one problem is that you can not only do the crude reward/punishment process to slowly nudge the weightings until they (sort of) match the correlations, you can also pre-process the data by using faster (aka sensible) methods to generate the early, simple (and explicitly accurate) correlations between, say, token pairs and triplets - but you don't go very deep with this because, duh, the combination space grows in size and you run out of memory. Even the initial lexing step is one such pre-process.

[3] one array per token, including the empty token, fill it with the pairs starting with that token; sort by probability (randomly ordering equally probable items) then tag each entry with the cumulative probability from the start of the array, beginning with zero - each entry in the array is now in the bucket between its cumulative prob and that of the next entry. Start with the empty token, loop: generate random number 0.0 to 1.0, find the corresponding bucket, output the token in that bucket, set current token to that value, stop when you hit empty token again.[4]
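The bucket-walking generator described in [3], as a Python toy (the pair counts below are invented; this is the classic Markov-chain rubbish generator of [4], nothing grander):

```python
import random
from bisect import bisect_right

def build_buckets(pair_probs):
    """Per token: successors sorted by probability, tagged with cumulative prob."""
    successors = {}
    for (tok, nxt), p in pair_probs.items():
        successors.setdefault(tok, []).append((nxt, p))
    tables = {}
    for tok, entries in successors.items():
        entries.sort(key=lambda e: e[1], reverse=True)   # sort by probability
        total = sum(p for _, p in entries)
        cum, starts, nexts = 0.0, [], []
        for nxt, p in entries:
            starts.append(cum)            # bucket runs from cum to cum + p/total
            nexts.append(nxt)
            cum += p / total              # normalise per preceding token
        tables[tok] = (starts, nexts)
    return tables

def generate(tables, empty="", rng=random.random):
    """Start at the empty token; roll, find the bucket, emit; stop at empty again."""
    tok, out = empty, []
    while True:
        starts, nexts = tables[tok]
        tok = nexts[bisect_right(starts, rng()) - 1]
        if tok == empty:
            return out
        out.append(tok)

# Raw pair counts work too, since build_buckets normalises per token.
tables = build_buckets({("", "the"): 2, ("", "a"): 1,
                        ("the", "cat"): 3, ("a", "dog"): 1,
                        ("cat", ""): 1, ("dog", ""): 1})
sentence = generate(tables)   # ['the', 'cat'] or ['a', 'dog'], chosen at random
```

Output that "precisely comports to those statistics but which will be total gibberish", as promised.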

[4] at this point we have the old programming exercise of the "rubbish generator", which done properly uses a decent Markov Chain, of course.

[5] you have also spotted that a parser's "follow sets" for a (programming) language can also be put directly into the generator portion of your Markov Chain assignment to produce "sort-of-recognisable program sources".

[6] still waving my hands, remember!

[7] if you can provide citations for that detail, please, please do

[8] In which case it isn't generating a proper LLM thingie in the first place and as I've only been rabbiting on about LLMs then all my arguments still stand. Phew.

that one in the corner Silver badge

Re: Co-pilot

> Microsoft has plenty of proprietary code, yet they used none in the training, why?

They did use proprietary code (and other text), that is what the other half of the argument is about.

Oh, you meant proprietary code owned by Microsoft?

Just how awful do you want the results from Copilot to be?[1]

[1] Although using different style guidelines in every dialogue box it generates for you would show that Copilot is being creative[2]

[2] he says, clearly angling for that job in marketing as there aren't quotes around the last two words there.

that one in the corner Silver badge

Re: Consensus reached by whom?

Indeed. The Magic Word is:

CITATIONS!

that one in the corner Silver badge

> why are they putting it on GitHub instead of a PRIVATE repository somewhere?

The extra tools provided above and beyond the use of git (if you really feel you need to use git - but that is a whole other discussion) - plus making git bearable in the first place (are my opinions showing yet?).

GitHub non-public repositories are supposed to be as good as any other private third-party managed repo hosting...[2] and lots of places (however we may regard them for doing so) use third-party repo hosting.

> The whole point of GitHub is to make the code available

Even from the beginning, making money was the goal[1] (although only in order to be able to host OSS, of course (?)) and, well, Microsoft owning it...

[1] and to get onto and promote the whole fad of using git for absolutely everything, of course (ref whole other discussion, above).

[2] until they are accidentally read by processes run by the admin, of course (Copilot).

that one in the corner Silver badge

Re: Playing by the rules while making things better?

> it would be maybe a day's work to write functions that can parse them, strip out the author credits and add them to the generated code.

Adding them to the generated code? Nope, not a chance. Definitely not in a day.

The LLMs do not keep large chunks of recognisable code that could be tagged with an appropriate licence. The information[1] from any particular input is smeared across thousands of the nadan[2] values and gunged together with all of the other inputs. At best you could manage "this character of output came from a% triggering nadan(2006) and b% nadan(7361) and ...; nadan(2006) is x% this licence, y% that licence...": multiply together and tag the character. On to the next character[3].

It is almost as if the people unleashing these models on us knew perfectly well that attribution is impossible (and, oops, forgot to even hint to His Honour Judge Joe Bloggs that this is only one class of algorithms they could use[4]).

[1] which information is mostly correlations between correlations between correlations - the connection to specific lexemes gets lost pretty quickly as the layers are progressed.

[2] nadan == near as dammit arbitrary number

[3] yes, yes, they work not in characters but lexemes, some of which are whole words, some are single characters - or codepoints, mister fussy Unicode :-)

[4]

that one in the corner Silver badge

Re: The problem is simple: Copyright.

> Copyright as it is still implemented, comes from a time when the means of producing AND distributing content were scarce.

Not really - they came as the means of copying and distribution were growing, very quickly (sometimes faster than the ability to consume, which helped push literacy - another topic).

> That time ended decades ago ... thanks to technology, scarcity in the means of distribution ceased to exist

More than decades ago - the end of scarcity was in sight closer to a century ago. There is a big difference between scarcity and the outright flooding that is possible now.

> And instead of changing the laws to reflect reality, instead we invented new laws

Which is how legislation proceeds: the new laws supersede the old. The 1662 law affected the 1556 company charter, 1710 supplanted 1662, ... up to 1988 (UK btw).

> courts have to deal with absurd questions like: is temporarily storing information in RAM

Absurd to the cognoscenti but still valid concerns for law: you can keep a copy in RAM for years, with care. Do you want courts to understand what is *actually* going on or do you want them to work on the basis of "it is all black magic"? Asking questions in court is all part of the mechanism of debugging legislation. Even if - especially if - the "absurdities" are being pushed by the Bad Guys to get their way.

> We entered an age where people can create an entire novel, even an illustrated one, in a matter of days ... How will copyright laws ... fare in this new reality

Hopefully by enforcing what they already cover: acknowledge attribution and licensing terms.

If the *distributors* and those making money from the generative systems can not demonstrate attribution then they should be held accountable on that basis - and any argument that "this system can not do that" becomes an argument of "not fit for purpose".

that one in the corner Silver badge

Interesting recap of discussions from commentards[1]

from earlier articles on Copilot, even if some of the arguments have been pushed a bit far, e.g.

> Even if giving people the right to remove their code from Copilot and the like results in a ton of good stuff going away

Just removing code from Copilot's maw doesn't make it "go away", even Copilot could suggest invoking that code rather than copying it.

Interesting to see if new responses arise here.

[1] which is fine in general, summary articles save a lot of reading, but <cough>attribution</cough>?!

Elon Musk's Twitter moves were 'reaffirming' says Reddit boss amid API changes

that one in the corner Silver badge

Re: Reddit's CEO doesn't realize

> other than having the biggest concentration of discussions in one place it has no advantage over the million other forums on the internet

To be fair, Reddit content does have the ability to totally screw up GPT models, as demonstrated here:

Glitch tokens (Computerphile) https://m.youtube.com/watch?v=WO2X3oZEJOA

that one in the corner Silver badge

Twitter? Profits? Ever?

> He also said that he often wondered why Twitter couldn't turn a profit under previous management

> my takeaway from Twitter and Elon at Twitter is reaffirming that we can build a really good business in this space at our scale

Implying that he believes Twitter is now making a sustainable profit under Elon.

Ok, have I missed something? Last I recall, Twitter was having problems getting the advertisers back, still owed a lot of money in unpaid bills and we were all waiting for the understaffing to really bite.

Not a great model to be talking about if you plan to IPO - *unless* perhaps he is trying to convince Elon to buy all the shares at IPO (after all, he bought one bag of flaming poo...)

Google warns its own employees: Do not use code generated by Bard

that one in the corner Silver badge
Facepalm

Re: The inverse of dogfooding

Typo alert!

Missed out the "or" at the end of the last sentence, should have read

"OR is there an existing well-known phrase I'm ignorant of?"

So asking for new suggestions, as well as any oldies I'm not aware of!

that one in the corner Silver badge

The inverse of dogfooding

> Do not use code generated by Bard

Clearly, we need a new phrase to mean the opposite of "eat your own dogfood", which can be easily explained to Joe Public.

Something that gets across the idea that this is the sort of advice usually given to writers of deliberate malware.

So far I've not come up with any candidates, but place my trust in fellow commentards. Is there an existing well-known phrase I'm ignorant of?

Google searchers from years past can get paid for pilfered privacy

that one in the corner Silver badge

> Nobody in their right mind is going to waste the several hours of jumping through hoops

Coming to a YouTube channel soon: "I show YOU how to get $$$$$ from Google (Step #7 will ASTOUND you!)"

Notes:

Only 5 dollars 'cos he's encouraged more people to take up The Google Challenge (like a dozen other YT or TT Challenges, but more useful to society)

Step #7 was his actually managing to wait on the 'phone for 317 minutes before reaching the automated service, without sobbing that he never wanted to hear the Girl From Ipanema again (sequences shown may be shortened; your call is important to us)

that one in the corner Silver badge

Re: Context matters. Pedanticism kills.

WTF is "Pedanticism"?

Oi, watch where you're throwing those knives, you could had my e

that one in the corner Silver badge

Re: Context matters. Pedanticism kills.

Oh, snap, you just handed that AC his "r"'s!

False negative stretched routine software installation into four days of frustration

that one in the corner Silver badge

Re: Marital Status: British

Are you sure the applicant wasn't a certain Elizabeth Windsor?[1]

The monarch is married to the country by duty bound and all that.

[1] Assuming "I once" was a while back.

that one in the corner Silver badge

Re: Noisy installers suck.

How did I contradict myself?

You yourself said it - the core dump contained *far* more information than a stack trace does - with a core dump you can do a *full* post-mortem debug: pull the stack *and* examine the state of the in-core data structures. And the sentence I quoted was:

> the days of memory dumps and post-mortem debugging are long behind us

So, yes, having *at best* a stack trace is nowhere near as useful as having the full core dump. Examining a core dump and doing full post-mortem debugging is a skill that is being lost, IMO, so things are getting worse!

Not sure how your scenario adds to this argument: of course, if you have the data, the precise version of the program and resources to replicate entirely the client's actions then you can recreate the problem (in this case, a simple enough overwriting). But note that you didn't attempt any postmortem debugging and were in a far more favourable situation than many when faced with a User crash.

But it was an interesting story of the programmer not expecting the User to do something that daft.

that one in the corner Silver badge

Re: Noisy installers suck.

> the days of memory dumps and post-mortem debugging are long behind us

Once again, it feels like the more time passes, the worse everything gets.[1]

Once upon a time, devs were proud of the ability to debug a User's problems, now they have to see the stack trace for themselves or it is marked "can't reproduce".

[1] and me with this pain in all the diodes down my left side.

Astroscale wants to be the world's friendly neighborhood space garbage collector

that one in the corner Silver badge

> deorbit the moon

Wasn't there a documentary about that recently? Although I recall that they forgot to concentrate on the core issue of cleaning up LEO and got sidetracked discussing interpersonal relationships for some odd reason. Should have had Brian Cox doing the narration.

AI is going to eat itself: Experiment shows people training bots are using bots

that one in the corner Silver badge

The workers were paid $1 for each text summary

> summarize the abstracts of 16 medical research papers

This is an amazing discovery they've made here: if you want high quality input data then you have to put in the resources to get it! In this case, including paying the people, setting clear statements (no cheating!) *and* paying up whatever is required to apply strict controls to *prevent* cheating in the first place. In this sort of case, gather everyone in Big Hall and provide staff to invigilate!

But no, use Mechanical Turk because it is cheaper to do so.

This is just part of the ongoing problem with the current approach: time was, the machine cycles were the most expensive part of the process and the memory wasn't available to do huge models - so the effort was put into getting the input data as good as possible to make the best use of every day spent on the training. Now any fool with a Cloud account can buy the cycles and shovel into it cheap fodder: the result will *look* impressive enough to fool the VCs into handing over dosh.

Amazon confirms it locked Microsoft engineer out of his Echo gear over false claim

that one in the corner Silver badge

Re: Rent

Ah, afraid I have to disagree with you this time.

> The OP didn't say they'd cancelled their Amazon account, they said they'd cancelled Prime.

But he said:

>> I decided to cancel my Amazon France account

Not that he'd cancelled just Amazon France Prime (or Le Prime de Amazon Francais, as they like to do things backwards), but the whole account.

> for which I had paid for a perpetual license, to be licensed in perpetuity

The material would still be licensed to you - you can download and keep it, no problem - but I'd still not *expect* to go back and pull a fresh copy from their server if I didn't have an active account with them - just for starters, logging in would be tricky...

that one in the corner Silver badge

Re: If it needs an app to work you don’t own it

> a show we wanted to go to only provided tickets to a smart phone

Ooooh, that is annoying when they do that. You get to the venue - oh look, bugger all connectivity (even if you aren't out in the middle of nowhere, guess what everyone there is trying to do) and the app won't show the QR to front of house until it has verified you (nope, no idea why it needs to do that, but in the meantime here is an ad for another show).

Screenshots, an absolute life saver.

that one in the corner Silver badge

Re: Rent

Very true. If I cancelled my Amazon account I can not see any reason at all to assume that I'd be able to, say, run the Amazon music app and expect it to still be allowed to connect to their servers.

Ditto any other online shop offering digital data: you close your account, they close your access to their storage. Would you really expect anything else?

that one in the corner Silver badge

Re: Keys not yours

> Just wait till the power company locks you out of your “smart” thermostat.

2021: https://news.yahoo.com/power-companies-taking-over-smart-185552524.html

"Have you ever seen your smart thermostat go rogue? Galveston, Texas, resident Shelby Rogers recently did, and discovered that her power company had manually changed her air conditioning unit to be set at 80 F."

https://www.theverge.com/2021/6/18/22540015/psa-energy-saver-program-smart-thermostat-adjust-temperature-heat

"Known as demand-response programs, some Texans were taken by surprise this week, as their thermostats were turned up without any action from them."

From those and other reports[1], it appears that the "affected" had voluntarily signed up to let the power company do this, to get a bit off their bill, and those "surprised" were just the usual suspects who didn't read the T&C (or the instructions for their home units) - but the point is, the ability is built into those things (think you'll still be able to override using the app? The one that was written for them?).

In the US, at least.

that one in the corner Silver badge

Re: What, no backup strategy?

Wow, you leapt from keeping up the convenience of his lifestyle to rape?

that one in the corner Silver badge

Re: no backup strategy, SMH stupidity

Shout. Shout!?

Our throats were so sore from eating t'gravel for breakfast we had to catch foxes at night then make 'em bark at door for us to be let in.

Well, I say "in"...

that one in the corner Silver badge

Re: no backup strategy, SMH stupidity

> that ALSO depends upon internet connectivity with a cloud provider (Apple's Siri).

The way Siri was talked about in the article confused me a bit (I don't have a setup like Jackson's; surprise!), so did a little digging. It appears that if you have a new enough shiny Siri can work locally doing the voice recognition on your iDoodad.

So let him have that point back.

Now start to ponder that Siri really is listening in all the time, even when you are disconnected, gathering all those useful little nuggets...

that one in the corner Silver badge

Re: no backup strategy, SMH stupidity

No, the Luddites provide a sabot on a bit of string, so you can throw it at the door.

that one in the corner Silver badge

Re: What, no backup strategy?

And he continues "left me with a house full of unresponsive devices"

You seem to be assuming that "primary" meant he had a secondary means of controlling *all* of his devices? And yet they remained unresponsive.

Maybe "primary" here was more casually used, to refer to the most often used mechanism when interacting with his surrounds? And for that subset of his surrounds, the only?

that one in the corner Silver badge

Re: What, no backup strategy?

Dunno about you, but what makes somewhere "home" and not just "the place I'm living in" is, among other things, the feeling that "this is *my* space" (or *our*, for the fortunate) and I have control over it (barring force majeure).

If the light stops working then it is frustrating when it is down to physical reality: the bulb is old - ah ha, I planned for this, quick swap, smugness (or didn't and grrr, self); the entire village has lost power, time for the good old Blitz Spirit, crank up the Victrola.

If the light stops working on the whim of someone else, that is an intrusion into the "home" - it may as well be a hotel for all the sense of "belonging there" that is left. And that "someone else" doesn't even know you! It isn't even personal, you are literally nothing to them other than an account on their screen.

Maybe one happy day you too will enjoy a home and not just a roof over your head.

that one in the corner Silver badge

What, no backup strategy?

"This wasn’t just a simple inconvenience, though," he wrote. "I have a smart home, and my primary means of interfacing with all the devices and automations is through Amazon Echo devices via Alexa. This incident left me with a house full of unresponsive devices, a silent Alexa, and a lot of questions."

Hopefully, the first question being: why did I tie my house so thoroughly into a third-party cloud, without considering what it would do to my lifestyle if it was interrupted in any way?

If he had the nous to do some of it locally, what was the gain supposed to be in outsourcing the rest?

Not his computer in control, means it wasn't really his home...

Music bosses go after Twitter's unlicensed soundtrack to the tune of $250M

that one in the corner Silver badge

Interesting list of tracks

"Another brick in the wall (pt 2)" is there - 'cos Elon don't need no education - but Twitter doesn't seem to have "Money".

Decision to hold women-in-cyber events in abortion-banning states sparks outcry

that one in the corner Silver badge

Restricts drag shows in public places

RIP Danny La Rue, Dame Edna Everage, Dr. Evadne Hinge and Dame Hilda Bracket.

Sad that you are no longer here, but at least you cannot see misguided parts of the world turn against you and your kind[1].

Perhaps PBS could show a retrospective of their work.

[1] that is, good Saturday evening entertainment and people who knew how to do their make up well. They were all good enough for the Royal Variety Performance at the London Palladium[2] but not good enough for Tennessee! Pah!

[2] As if the RVP isn't public enough, the Palladium is only 273 metres from Saint Georges, Hanover Square (that is less than 900 feet)! Shocking, how could they have allowed it, etc etc.

that one in the corner Silver badge

Re: Mixed Feelings

> As for autism, why is there such a correlation between trans and autism?

Dunno. And neither do the authors quoted in your citation (as is clearly indicated at the end, quite correctly).

But that doesn't mean that you get to fill that void with your pet theory (unless you've also done all the work to show why you are correct, which you would back up by including the relevant citations, of course).

> Going by the babygaga list a good portion of German men are closet trans as they pee sitting down.

I wasn't entirely convinced by that article, but as they don't actually mention standing to pee (despite the section headline suggesting that is what they'll talk about) AND even they say not to judge on the basis of only a single indicator, we can dismiss your radical take on this as well.

that one in the corner Silver badge

Re: Women in Jobs?

You seem to have lost the first half of that sentence you are quoting, which seems a shame, as it isn't too long to quote in its entirety (we wouldn't want anyone to accuse 2A discussions of cherry picking, would we? /s)

"A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed."

> The requirement is that in ... the people must be armed.

You believe that the people are *required* to be armed? Where do you think this is, Switzerland?[1]

> (to defeat the British)

Nope, they were looking beyond that fracas (as were the British by then, other things to do Old Bean).

[1] Yes, I know it is only the militiamen, 18-34 and fit to *be* militiamen...; curiously, that sounds closer to what the US 2A intended, but IANAL, not USAzian etc etc.

The ZX81 finally gets the keyboard it deserves

that one in the corner Silver badge

Re: How times have changed..

> With someone like Apple, you can be fairly certain that if they say something will be available in 30 days, it will be available in 30 days.

These days, probably.

Back in the day? Not so perfect (<cough>Apple III<cough/> <cough>a working one<cough/>)

that one in the corner Silver badge

Re: *twitches*

> it looks like it's trying to replicate the concave-ness you'd get with some keyboards back then.

And the concave keyboards you can still get, using shaped keycaps and/or a curved backplate (the latter for the really posh).

Personally, I love the concave keyboard: hanging over the home row, flexing the fingers mostly with the middle joint and the ends move in an arc that matches the curve of the keyboard.

But as I've said before, all my workaday keyboards are 30-plus-year-old juggernauts - whilst the more portable (!) newer mechanical jobbie to make the laptop usable is all weird and stepped rather than curved. Shudder.

UK smart meter rollout years late and less than two thirds complete

that one in the corner Silver badge

Use a water mister to dampen yourself, rub thoroughly with shower gel; use a strigil to get most of it off yourself and only use the shower proper to rinse off: as you say, 10 seconds should do the trick.

Gen Z and Millennials don't know what their colleagues are talking about half the time

that one in the corner Silver badge

Re: Most misused list - where is "steep learning curve"?

Time along the y-axis: bold choice.

that one in the corner Silver badge

Re: Most misused list - where is "steep learning curve"?

They chucked me out of the stacks for excessive pedantry unbecoming a library ("Where do you think you are, mate? The Oxford Union?")

that one in the corner Silver badge

Re: Most misused list - where is "steep learning curve"?

Ok, but I'd love to see how you're going to collect the data (say, on a weekly basis) to draw the graph of actual versus the planned/expected/commonplace plot for "amount of information that has to be absorbed in a set amount of time".

And give a cogent explanation of why that graph is the one that springs to mind every time anyone uses the concept of "learning curve".

No, seriously, if that is a sensible and useful and useable way to plot the data then I'm open to learning how; but right now I just don't see how it could be made to work.

that one in the corner Silver badge

Re: I'll Be There

"A Touch of Cloth" - must watch that again.

that one in the corner Silver badge

Re: Uniting the world

If your food is cremoed, mate, it's time to crack a stubby and throw another snag on the barbie; just keep an eye on it this time, right!

that one in the corner Silver badge

Grok sounds like baby talk

The splendid irony is that, of *course* it sounds like baby talk!

I always thought that was a clever move by Heinlein.

Working with the old idea that languages will tend to use the shortest and simplest words not for the simplest concepts but for those most often encountered: up, cat, fork etc; you see this happening with the commonplace accepted abbreviations for the voiture omnibus or the telephone.

*All* of those simple words are the ones we teach our children very early on, for the obvious reason that they deal with the things our offspring encounter as frequently as we do. But because we grownups also need to use them, and consider what they refer to as commonplace, you just accept the usage without taking any special note of it.

So Heinlein takes a concept that is, oh so very rarely encountered in Real Life and gives it a really short and simple name, knowing that it "sounds like baby talk" (even readers pointed this out; others took it and ran).

Heinlein is pointing out that this Really Good Thing, which is totally commonplace for Valentine Michael Smith, known to him since infancy, is so very much lacking in our society.