The Register Home Page

* Posts by that one in the corner

5065 publicly visible posts • joined 9 Nov 2021

You break it, you ... run away and hope somebody else fixes it

that one in the corner Silver badge

The line printer stopped both its buzzing and its merriment.

An untestable hypothesis:

As it had just come to the end of its run, not-so-coincidentally with one of the many, many test runs being attempted that evening.

> He tried everything he could think of to get the computer to resume whatever task it had been doing, but nothing worked.

Because it had finished!

A student's natural paranoia kicks in, leading to a lifetime of looking over his shoulder for his old tutor, before caving in and confessing to El Reg in the hopes of stopping the dreams of Sysop Past, draped in fanfold.

Moral: always print a big "DONE" at the end of the run.

Immoral: don't print even a tiny "end" because one day you'll read a story like this and cackle over the anguish you've caused one of those annoying students, cluttering up your beautiful lab.

Hyperfluorescent OLEDs promise more efficient displays that won't make you so blue

that one in the corner Silver badge

Re: Same color OLED

The eye uses green to determine the brightness of a scene, more so than red or blue. This is reflected in the importance given to green when converting RGB to a luma+colour space, such as YUV.

The peak sensitivity is in the green under daylight illumination and shifts up towards blue/green at nighttime.
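That green weighting shows up directly in the standard luma formulas — a quick Python sketch using the BT.601 coefficients (the coefficients are the real standard values; the demo itself is just a toy):

```python
# BT.601 luma: the green channel carries the biggest weighting,
# reflecting the eye's peak sensitivity in the green.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

# Full-strength green reads roughly twice as "bright" as full-strength
# red, and about five times as bright as full-strength blue.
```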

The use of red lighting for any control panels is to allow our eyes to become night-adapted (something astronomers still rely upon).

As noted above, the illumination in and around cars at night time has become far brighter: from over-strength headlights "for safety" (meaning drivers have no visibility outside the illuminated area, so good job, whilst dazzling everyone in oncoming cars) shining into the cabin, to the glowing screens and multicoloured LEDs.

So nobody has a chance to get dark-adapted in the first place and the red car dashboard lighting no longer serves a purpose.

Why Microsoft's Copilot will only kinda run locally on AI PCs for now

that one in the corner Silver badge

Re: Co-pilot key

Your PC doesn't *have* to have a Co-pilot key.

You just have to be prepared to have a reduced Windows experience if the key is not detected. This may range from Windows 12 refusing to install the best driver for your GPU (just to be spiteful) all the way up to the Microsoft Azure AI server emailing you an hourly email advertising the range of Surface devices you could be using instead (these emails, as expected, have an unsubscribe option, just press the Co-pilot key and type "end").

Malicious SSH backdoor sneaks into xz, Linux world's data compression library

that one in the corner Silver badge

Re: More Details

> Some languages like C# even specifically require that the same source built twice results in a different binary

I knew there were good reasons to dislike C#.

With or without invoking C#, chucking reproducibility out of the window is another step on the path to madness.

> unless extra-special care is taken with exact build system and dependency setup

That should not NEED to be "extra-special" in any way, shape or form. You can certainly choose to move your build to another toolset - e.g. you want to use clang instead of VisualC - but any proper build system should take that in its stride and let you run both toolsets.

Reproducible builds - whether of Open Source or proprietary code - were demanded in long-term (10 to 25 year product life) contracts and were a requirement for having the build accepted into escrow.

Anyone denigrating the idea of reproducibility is a danger to be avoided like The Plague.

Actually, we have treatments for The Plague now, so...

Can a Xilinx FPGA recreate a 1990s Quake-capable 3D card? Yup! Meet the FuryGpu

that one in the corner Silver badge

> Er, writing device drivers for any operating system is hard, low-level stuff that requires a complicated development set up

> Windows is actually fairly good - there's a lot of dev kit and tooling available from Microsoft (for free),

Come off it - Windows is the one that *needs* all that extra dev kit and tooling!

Now, it has been a long while since I had to do it and things may have gone downhill, but the instructions for My First Linux Device Driver did not need a special compiler to be downloaded and installed, nor any other dev kit. Just a short bit of template code, fill in the few calls that were relevant to you (usually just the read-call, as we didn't have much in the way of selectable behaviour). And dead easy to get into the final system as well.

At the same time (about 7 years ago now?) under Windows, it was just easier - a heck of a lot easier - to give up on the device driver entirely and just make our add-on hardware expose a USB file system and plug it into a motherboard header: no compiler we didn't already have on hand and the USB client library was a doddle to work with.

It was always a grave disappointment, moving from Unix, Minix (not Linux, too early) and AmigaOS, to Windows and finding how *few* interesting drivers there were, especially file system drivers. It was not fun, going from being able to find multiple, fully working and interestingly different drivers for non-native file systems (especially Union FS, my favourite) to "you get FAT on floppy and FAT on hard drive; one day we'll write NTFS for you".

Consider even now the difference between the number of FS that you just plain expect to work from within /etc/fstab and the (lack of) variety in being able to connect a drive to Windows and expect it to be readable, let alone writable.

Majority of Americans now use ad blockers

that one in the corner Silver badge

Re: I wouldn't mind reasonable ads

> When I want to buy a lawnmower I'm happy for adverts showing lawnmowers. I'm not happy to still receive them three weeks after I bought one and am trying to buy a computer.

Now, this is a bit of the discussion that I find it hard to get behind.

> I'm happy for adverts showing...

Ok, so it is not the form of the ads that is a problem here, so guessing we're not referring to those ads that bounce up and down, obscure the main article or play loud voice clips.

> I'm not happy to still receive...

The gist here seems to be that you *want* ads and actually are invested in reading the ads, to the point where if they are no longer interesting it sparks a noticeable emotional reaction.

Which is ok, you do you - but you are also getting a significant amount of support, indicating that there is a significant section of the audience here who are also similarly affected.

Aren't ads just part of the background noise? So long as they are not of the vulgarly intrusive style, we have all been seeing these irrelevant ads everywhere, for our entire lives: roadside banners, flyers on walls, in newspapers and magazines, on TV and in all of the other media we consume.

And this background noise can just be quietly ignored, no need for any reaction to it one way or the other.

that one in the corner Silver badge

Re: I wouldn't mind reasonable ads

> When I want to buy a lawnmower I'm happy for adverts showing lawnmowers. I'm not happy to still receive them three weeks after I bought one and am trying to buy a computer.

> It just shows how crappy the software is.

AFAIK it is basically a sliding window average - so if you look at chain rivet tools for a week, it gains in the weightings, but then you have to wait for it to get flushed out. So to flush it faster you need to have a flurry of other searches and browsing just after buying.

Now I think of it, maybe it is time for an experiment with cURL: a batch file that simulates searches & browsing for something you *don't* mind seeing (the tricky bit). Don't forget to use the option to change the browser id string.
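If anyone fancies trying that without hand-writing the cURL batch file, here is a rough Python equivalent of the same idea — the search URL and the decoy topics below are entirely made up, substitute your own:

```python
import urllib.parse
import urllib.request

# Hypothetical decoy topics - things you *don't* mind seeing ads for.
DECOYS = ["garden gnome catalogue", "novelty teapots", "birdwatching binoculars"]

# A browser-ish id string, standing in for curl's -A/--user-agent option.
FAKE_UA = "Mozilla/5.0 (X11; Linux x86_64) DecoyBrowser/1.0"

def decoy_requests(queries, base="https://www.example-search.test/search?q="):
    """Build one search request per decoy query, with the id string swapped."""
    return [
        urllib.request.Request(base + urllib.parse.quote(q),
                               headers={"User-Agent": FAKE_UA})
        for q in queries
    ]

# To actually fire the flurry:
#   for req in decoy_requests(DECOYS):
#       urllib.request.urlopen(req)
```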

> Even Amazon will often recommend products of the type I've just bought completely unrealistically. I'll probably never buy another chain rivet tool in my life, let alone a week after buying one.

Again, wild guessing: I've come to believe that some products are often bought in bulk: you may not want more than one rivet doodad but they've spotted that a significant number of buyers get enough to fill the school work shop, so it is worth trying to see if you do want another.

I base this mostly on the ONE time I bought a concordance (for a gift) and then got suggestions about assorted Bibles for well over a year: buyers of concordances are presumably keen enough on the book that they buy in bulk and hand them out to anyone they meet - and/or try to get the whole set and compare translations.

Do not touch that computer. Not even while wearing gloves. It is a biohazard

that one in the corner Silver badge

Re: Following BSE in cattle...

> It was remarkably easy to use and work with

That is good.

> and this was something like 20 years ago when web design wasn't as advanced as now.

Ah, that explains it, it was before web design really learnt how to make things dreadful.

AI hallucinates software packages and devs download them – even if potentially poisoned with malware

that one in the corner Silver badge

So nobody ever tried the commands before publishing?

> Alibaba, which at the time of writing still includes a pip command to download the Python package huggingface-cli in its GraphTranslator installation instructions

And this package didn't exist, in any form, before the "naughty" version was created.

It is one thing to publish instructions that install the wrong package without any errors, but even the most incompetent of "testers" can surely spot when pip tells you there is no such package at all?

Pragmatic Semiconductor opens UK's first 300mm wafer fab in Durham

that one in the corner Silver badge

Hope they have a proper regional opening

And invite the Northumbrian Piper's group from Sedgefield to play.

(Less raucous than those Highland Pipes)

Windows Format dialog waited decades for UI revamp that never came

that one in the corner Silver badge

> Only time I'd do a BIOS update from a USB stick was if the machine didn't have internet access

Clearly I am way behind the times. I always keep the BIOS images - current and new - available on a.n.other PC and update via memory stick, just so I feel confident I can roll back in case the update - isn't - and the box no longer has 'Net access (or anything else access).

Signed, Mr Dull But Practical.

that one in the corner Silver badge

Uh, are you trying to imply that the BIOS was near the 32GB limit for FAT and was in danger of not working? 'Cos that is one huge BIOS!

If not, not sure what the point you are trying to make is. There are lots of old bits of kit that want a FAT - or, more likely, FAT16 or FAT32 - USB stick or memory card. Which is fine.

Sometimes wish the camera made better use of huge cards (wrt naming mostly, as they still stick to 8.3) - then I remember the bleeping Sony camera, which seems to think the SD card is a BluRay for some weird reason, and go back to being thankful for simple layouts.

that one in the corner Silver badge

> a simple vertical stack of all the choices you had to make, in the approximate order you had to make. It wasn't elegant,

Yes, yes it was elegant: neat, tidy, no guff, sensible, easy to use.

What it wasn't - and, thankfully, isn't - is garish, over-designed, spread out over too many subdialogues or needing to use a search box just to find the option you *hope* is still available.

> constraining the format size of a FAT volume to 32 GB.

Ok, unintended side effects are not what you want - OTOH by the time you get to 32GB aren't you glad to be using something a bit more capable[1] than FAT?

[1] hmm, except that FAT is easy enough to patch by hand, with a copy of Norton by your side, so maybe that counts in its favour re getting your data back when things go awry. Choices, choices.

BBC exterminates AI experiments used to promote Doctor Who

that one in the corner Silver badge

Re: You will be upgraded, resistance is futile

You are both right.

It was not the most canon-caring story line.

I suggest trying "Ensign Two: The Wrath of Sue"[1] for your Doctor Trek / Star Who reading pleasure.

[1] hmm, the server is having trouble today, so not a good idea to give the easy-to-click URL right now :-(

Some 300,000 IPs vulnerable to this Loop DoS attack

that one in the corner Silver badge

Re: Old stuff.

And how many stories of application-level infinite loops have we heard of through the years, bringing systems to their knees?

For Out-of-office auto-reply storms, just read almost any of the "Who, Me?" discussion pages!

Hmm, given the ease[1] with which you can get lists of valid corporate email addresses, I'm almost surprised that triggering these externally isn't a holiday sport.[2]

[1] oops, "apparent ease, according to press reports"

[2] yes, I'm easily surprised. The cough? Just seasonal allergies, thank you for asking.

Vernor Vinge, first author to describe cyberspace and 'The Singularity,' dies at 79

that one in the corner Silver badge

Re: "Within thirty years, we will have the technological means to create superhuman intelligence"

>> when it released the description of Bitcoin and the world started to collect GPU farms

> Most bitcoin farming isn't done on GPUs tho. Thank Ethereum for GPU farms.

It was the creation of Bitcoin by the mysterious Satoshi Nakamoto that started the whole blockchain/cryptocurrency fever in the first place, leading to Ethereum (and all the others) and hence the creation of the mining GPU farms, whatever those farms are/were currently engaged in.

But even just sticking with Bitcoin, although ASICs are the most recent[1] way to go for the most cost-effective results (as they tend to be for any large-scale fixed process workload) it took time for those chips to be developed and manufactured: before we reached that point, BT was mined on CPUs, GPUs and FPGAs. At every scale, from single PCs to farms, including unwitting bot-farms.

Given the elusive nature of Satoshi Nakamoto and the obvious fact that his actions have benefited not human-kind but machine-kind, if this was not the work of some hidden AI then it can only have been aliens[3].

Aliens who are intent upon xenoforming the Earth the easy way, by getting us to do it for them.

[1] would say "current", but is BT still current for anything?[2]

[2] an entirely rhetorical question

[3] the Vril, it is always the Vril

that one in the corner Silver badge

Re: "Within thirty years, we will have the technological means to create superhuman intelligence"

Phase Four will be the ants.

Pterry was warning us: Anthill Inside!

that one in the corner Silver badge

GNU Vernor Vinge

Live forever in the Overhead

Microsoft's first AI PCs Surface with Intel cores and a Copilot key

that one in the corner Silver badge

Re: Emphasis on "if"

Later, after checking the Keyboard Manager settings:

> was it SysReq?

Nope, it is the CapsLock key that is mapped to the Windows key!

Makes some sense, I guess: it is on the bottom row[1], close to Alt, but I'd forgotten that it even existed, as it's hidden under my left hand and is the most entirely useless key[2] on the board; it is even dustier than the SF12 key!

[1] yes, bottom row; the place you're thinking of is where the left Ctrl key lives, as the FSM intended.

[2] your mileage may differ; remember, keys may go down as well as up

that one in the corner Silver badge

Re: Emphasis on "if"

Windows key? Wozzat? Nothing[1] like that on this old Northgate Omnikey. Meanwhile both Ctrl keys show signs of heavy use.

This backtick does get used every day (e.g. bash commands - and as a modifier in a Wiki markup) but, hey, we all use different languages to get the job done, so that is hardly a boast and no shame otherwise.

[1] not 100% true - as there are keys aplenty, I did use a Microsoft utility to remap one[2] to Windows key, but, seriously, it is so useless that I'll have to check the setup to remind myself which one; was it SysReq?

[2] if anyone has Copilot on hand[3], can you give it a try and see if it will give instructions for using the MS utility to map its key back to a Ctrl?

[3] you can report back as AC to avoid outing yourself

Redis tightens its license terms, pleasing basically no one

that one in the corner Silver badge

Re: "Software is only open source if the OSI says it is"

I'll admit to a bit of a knee-jerk reaction to the way that sentence was phrased; and, maybe, just maybe, possibly, sometimes, every now and again, I have found the OSI really getting on my wick :-)

However...

> #4 We need licenses that say "use this if you will keep it free but if you make money you must give us a percentage."

Yes. It'd be hard to get right but, yes. One of the really hard bits is making sure that the Tragedy isn't solved by having the County Council selling off the Commons (i.e. forgetting that they are the stewards and convincing themselves - and others - that they are the owners).

> #5 "Free Software" was a bad name. My proposal would have been "Software Liberty" but it's too late now.

English is a wonderful language for poets (and sarcasm), but getting a pithy yet totally unambiguous description of what all this software gubbins is about is another matter entirely.

> #7

The BSD licence was a beautiful thing, a child born in the long afterglow of the Summer of Love - and a time when most of the people copying and using the code probably personally knew the author from conferences, mailing lists - or just the tea room in the department next door. And when the licensed code was probably short enough to actually read, 'cos otherwise it was too large to fit your box!

Not quite the same situation as Redis exists in, so, yup, with you there.

that one in the corner Silver badge

"Software is only open source if the OSI says it is"

Bollocks.

We all know that OSI claims to have created the term (yes, they "created" it, not just "adopted" it!) but even Wikipedia (known for forgetting the past because they can't find a URL for it) points out that the phrase was already in use.

The OSI have their uses[1] but gatekeeping all of open source is not it[2].

[1] e.g. I'll applaud their early collection of OSS licences in one place, which seemed to help slow the increase in "My One Licence Terms"; OTOH their lack of even references to analyses of the licences and when they are appropriate is a *major* hole.

[2] I'd have more respect if they used wording like "The OSI compliant Open Source Definition" or, even better, had managed to trademark the phrase they "created", but as it stands they are guilty of the same grandstanding and self-adulation that we roast other companies for.

DARPA tasks Northrop Grumman with drafting lunar train blueprints

that one in the corner Silver badge

Re: Railways? Nasty Commie idea.

> what's the difference between a Mag-lev railway and a rail gun *really*?

Not wanting to be pedantic[1], but apparently[2] you are thinking of a "coilgun", as a "railgun" has the projectile as part of the circuit, meaning it is in contact with the rails during the entire acceleration phase. And has potential (ho ho) for lovely sparks and plasma, so pretty.

[1] oh, who am I kidding?

[2] personally, I reckon they changed the names on us, 'cos I've been using "railgun" instead of "coilgun" since the 1970s! Anyway, that first link is to a PDF on a dot-mil site, just to reassure you that I'm not just relying on Wikipedia (the next two links); plus the PDF also discusses cool stuff like giant lasers - sadly, without sharks.

Garlic chicken without garlic? Critics think Amazon recipe book was cooked up by AI

that one in the corner Silver badge

Publish only up to three books every day

That would have made Isaac Asimov slow down.

that one in the corner Silver badge

Recipes that stretch out over 2,000 days

Rabbit pie (feeds 15)

First, breed your rabbits.

Britain enters period of mourning as Greggs unable to process payments

that one in the corner Silver badge

Greggs may only be quick'n'cheerful

But they definitely get one thing right: they know what is in their products (and will cheerfully tell you)[1] and they do not randomly change the recipe from one week to the other.

This is an absolute godsend for anyone with food intolerances[2]: SWMBO knows that she can always get a mushroom bake or a sossie roll and know that we won't have to worry about unpleasant-but-not-life-threatening repercussions.

[1] and do NOT mention "oh, the law says everyone has to tell you about ingredients now": what that has done is replace having a list of the actual ingredients, *all* of them, with a laminated card that only lists the items specifically mentioned in the legislation! That does cover the bulk of the life-threatening allergies, which is good, but most places can no longer tell you anything about the presence of, say, alliums.

[2] hopefully also for those with proper food allergies, but I'll not risk making claims on their behalf

ChatGPT side-channel attack has easy fix: Token obfuscation

that one in the corner Silver badge

Specially trained LLMs designed to examine the packets

Training a Neural Net to examine packets is an approach that can be applied to any data stream with a bursty nature tied to its operation - for example, the route through a site that a user has to follow in order to do a specific action: the two-way "conversation" as the site sends new page contents and the user responds...

Capture a load of interactions with, say, a banking website: you won't be able to read the encrypted data[1] within the stream, but you may be able to figure out that there is a 79% probability this user is trying to get a new credit card. What you then do with that information - send them more adverts about cheap credit from Kickbacks-Be-Us?

Of course, that 'Net won't necessarily be an LLM, so won't be all sexy and newsworthy.

[1] if you can read the encrypted data, just use that - stop wasting time with training a 'Net!
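To make the "bursty" signal concrete: in the token side channel this article covers, the size of each streamed chunk tracks the token inside it. A toy Python sketch — the fixed per-chunk overhead is an assumed value, purely for illustration:

```python
# Assumed constant bytes of framing per streamed chunk (illustrative only).
OVERHEAD = 5

def token_lengths(chunk_sizes):
    """Recover plausible plaintext token lengths from observed chunk sizes."""
    return [size - OVERHEAD for size in chunk_sizes]

# A burst of chunks sized [7, 10, 8] suggests tokens of length [2, 5, 3] -
# enough signal for a trained 'Net to start guessing words, no decryption
# needed.
```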

How to run an LLM on your PC, not in the cloud, in less than 10 minutes

that one in the corner Silver badge

Looking at the ollama site, my interest[1] was immediately piqued by their blog post on comparing censored versus uncensored versions of a model.

This may not be something that you want to run day after day, but it is certainly good to have the opportunity to make such a comparison yourself - and it may be even more interesting to show to your local "I know all about LLMs, I have a ChatGPT account" guy (especially if you can get more examples like the blog's demo of asking about what went on in the movie Titanic). In that case, the use of a locally hosted LLM isn't about any argument of local-over-cloud but more about "I actually get to try these non-mainstream models".

> Given that actual uses for the cloudy ones still seem elusive...

That blog - and hence the experiments you[2] could run yourself - actually seem to show just how much *more* useless the censored cloudy ones actually are than they used to be! Again, demoing that to the "right" people may be useful.

[1] Of course, your own level of interest in this may be sated just by reading that blog post and you don't feel much need to install and run the thing yourself - but wouldn't you like to be able to *fail* to replicate their results and show up these LLM-loving loons?

[2] or I could, I guess. Is it interesting enough to get to grips with Docker in order to use the image they provide, instead of running one of those scary install methods?

Third time is almost the charm for SpaceX's Starship

that one in the corner Silver badge

> chief being the enormous amounts of land that has to be cleared for them

Wind farms do not take vast swathes of land - first, being giant lollipops, the big part is suspended well above the ground (this is sort of important to how they work), leaving space beneath for grazing, cropping, chasing your beloved through the spring meadow flowers.

Second, the vast majority of UK wind farms are (being) built out to sea.

Yes, they are taking seriously any concerns about the effects on sea life (e.g. marine vegetation growth on the bases leading to increased biodiversity and then to sea mammals gaining new feeding grounds).

Rancher faces prison for trying to breed absolute unit of a sheep

that one in the corner Silver badge

Big Bold Sheep Hunters

The bold hunter cradles the rifle, wipes clean the telescopic sight, kneeling on a platform at the top of a 20 foot metal tower, his blood chilled at the hideous sound of bleating beneath him. To his left, a line of similarly equipped men, canvas hats darkening with nervous sweat. To his right, nothing; his flank unprotected, he knows his is the honour to guard the line.

Then comes the signal they have all been waiting for, to start the running and their bloody sport:

Phweeet. Phwet. Phwet. "Away lad, right, right". Phwee-eee-eet. " Arbuckle, git bloody gate open, useless boy".

Attacks on UK fiber networks mount: Operators beg govt to step in

that one in the corner Silver badge

What about that iron pillar in Delhi, the one that doesn't rust? Clearly, that is the prototype 5G mast, what else could explain it?

By the time 5G got to Northern Europe, a couple of hundred years later (a trend OpenReach proudly continues), we had a slimmer, more cellulose-based, design which was sadly subject to damage because "it was clearly the work of the devil": a misunderstanding of the purpose of the dual helical cone antennae arrangement.

Your PC can probably run inferencing just fine – so it's already an AI PC

that one in the corner Silver badge

> Does this actually make a difference to how the model behaves?

Apparently.

There was a (brief?) item about this here on The Register not that long ago (sorry, failed to find it - can't recall exact enough wording) which reported claims that you get better results by telling the model that it really likes to do something otherwise tedious, or that it is a starship captain and knows all about calculating orbits.

(I really should have made a note of that article; hopefully someone can provide the URL)

that one in the corner Silver badge

Re: No wonder hardware vendors are on board

Now there is an idea for something that might be interesting[1] to actually try out[2]: could you feed a pre-trained LLM with all the "Arduino for Beginners" articles, plug in a few I2C sensors, then ask it to read 'em and display the values on an attached OLED? Doing all of that on one's home PC, to keep with the theme of the article.

Hmm, best to keep a fire extinguisher on hand; it shouldn't be possible to make a servo running off 5V explode, but...

[1] the assumption being made that there are one or two tech nerds around here

[2] probably already been done, but so has "blink an LED" and we keep repeating that one as well

that one in the corner Silver badge

Neural Nets are good at visual object recognition - it is pretty much the original use case for them.

Whizkids jimmy OpenAI, Google's closed models

that one in the corner Silver badge

Re: If you have the weights, you have the model.

> the weighting of the data could be the model if it's just a set of probabilities, that say the word "course" is more likely to follow the word "of"

That is very much what it is. The extra complication is that the layering and simply vast number of interconnects (which "the weightings" describe) allows a bit more subtlety (the word "course" is more likely to follow the word "of" after I've already seen at least the words "opinion", "my" and "humble" in any order; more so if I've seen "you fool").
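A toy version of that fixed table of weightings, with invented numbers, just to show there is nothing mysterious happening at lookup time:

```python
# Invented probabilities - a toy stand-in for the "fixed set of
# weightings" view: nothing here changes at lookup time, you just
# read the table and pick the heaviest entry.
WEIGHTS = {
    "of": {"course": 0.7, "the": 0.2, "a": 0.1},
    "humble": {"opinion": 0.9, "pie": 0.1},
}

def most_likely_next(word):
    """Return the highest-weighted follower of `word`, or None if unseen."""
    options = WEIGHTS.get(word)
    return max(options, key=options.get) if options else None
```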

The trained model, the thing that we interact with when talking to ChatGPT, is a fixed set of weightings. Lots of them, but fixed. That includes the results of the "carefully curated" training plus the far huger bunch that followed (the "post carefully curated input" thingy gets called fancy things, like "pre-trained model", but all it means is we've taken a core dump after treating it carefully, so we can go back and start again from that half-way point).

> then surely in that use-case the models include the data?

The weightings don't need to include the input prompt (e.g. the two legal opinions) - the prompts cause the LLM interpreter to traverse the network of weightings rather than change the weightings within the stored model[1].

Crudely (very crudely) consider a simple lexical analyzer: as you read the input characters you move from state to state through a simple graph, every now and again a transition triggers an action. In a real lexer, a common action is just "start from the beginning again". The larger that graph, the more input characters you can read before having to "start from the beginning again". If you want to, you can take a real lexer and just keep gluing copies of itself into the graph, until you get something that can take in a whole 10,000 line Pascal program without outputting a single lexeme, until it sees the final full-stop when, POW, it can spit out the whole string of lexemes in one go. Sort of sound familiar?
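For anyone who has not hand-rolled one, the "move from state to state, occasionally a transition emits something, then start from the beginning again" loop looks roughly like this — a crude Python sketch of a two-state lexer:

```python
# A crude state-machine lexer, as described above: read characters one
# at a time, move between states, and only a few transitions emit.
def lex(text):
    lexemes, buf, state = [], "", "start"
    for ch in text:
        if ch.isalnum():
            buf += ch                  # stay in (or enter) the "word" state
            state = "word"
        elif state == "word":
            lexemes.append(buf)        # leaving "word": this transition emits
            buf, state = "", "start"   # ...and start from the beginning again
    if buf:                            # flush a word left at end of input
        lexemes.append(buf)
    return lexemes
```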

> I'm sure OpenAI are using derivatives of that model to process more data

Exactly; they can release v3 for us to play with, and in the meantime keep on running the training process. When they've exhausted that pile of input, ta-da, V4 is open for business. While we gawk at that, in the background v5 is being slowly built up.

> and maybe will build a model in future that can keep taking in training data - but I didn't think that had happened already

AFAIK they aren't doing that (at least, not the Big Name LLMs). The experiments with letting them send out web requests and adding the results into the overall prompt are sort of a way of sidestepping the need.

[1] bit (bit!) of hand waving here, as - depending upon what sort of 'Net walker they are using - you can have some values modified as the prompt is processed, *but* you treat those like local variables: next session you clear them, otherwise the 'Net will be totally fixated on those two legal opinions. It hasn't learnt but become more limited in scope.

that one in the corner Silver badge

Re: My, what drama

> Artificial neural networks are based upon simulations of those neurons within biological intelligences, such as your brain

Not really "based upon".

Try sort-of-inspired-by-how-we-thought-they-worked-back-in-the-1950s-and-could-fit-into-our-computers-and-hand-run-models-back-then.

And then simplified even more, because then we can really ramp up the speeds of our simulations, using smaller numerical values, varying the number of layers and interconnects (in a gloriously ad-hoc fashion) until it gets faster again. 'Cos the faster it goes, the bigger we can get and the more we can shovel in, without caring if this is how real organisms actually work. And no need whatsoever to care about that anymore, because around approx. 2005 (bluntly) the hardware was available to run experiments based upon computing models, not needing to be fed from the life sciences: for example, we can randomise the bulk of the weightings, starting with noise instead of a blank white slate, does that help the systems we're building to recognise visual input ('cos that was the interesting case)? Yes, yes, it did. Jolly good. No need for input from the squishy-stuff people.[1]

I don't doubt that there are people out there making carefully researched, beautifully measured, genuine models of signal transport within and between nerves (they probably have a terrific model of the workings of a Giant Squid neuron by now; they have been examining those for decades). But I do doubt that that is guiding the creation of the current big-name 'Nets, such as the LLMs. Maybe (hopefully) some proper researcher is looking at it, one whose research goal is "more human knowledge" more than "more money for the shareholders"[2].

> I have been studying biological/synthetic intelligence for more than seventeen years now

But you can, of course, dispel all such doubt simply by providing us with references (don't worry, we can read proper science publications).

[1] Hey, maybe that it is how biological organisms work! To show that is the case, all we need to do is to measure the initial conditions of all the neurons in a brain before it has been fed any stimuli and log how they change as the stimuli are applied - anyone got a brain in a jar, a hundred thousand really thin wires and a steady hand? Good grief, you biology people are so *slow*, I'll just try out another random idea on my computer - done!

[2] No beef with those guys, they need a job just like we do, just - be honest about the reasons for it all.

Can AI shorten PC replacement cycles? Dell seems to think so

that one in the corner Silver badge

We have no idea what "AI" apps will do for the user

but we are going to convince you to buy hardware to run programs that may never even see the light of day.

(Wanted to put in a good last line, like "Yay, progress" or "Won't anybody think of the salesman's bonus?" but this attitude from Dell, compared to the old days and the heady rush of new kit that sliced the runtime of your real workloads, is just too disheartening)

GPT-4 won't run Doom but will play the game poorly

that one in the corner Silver badge

LLMs can never explain themselves

> When asked to explain its actions that were generally correct in context, its explanations were poor and often included hallucinations (aka incorrect information).

(Bangs head on desk) No, of course they can't. We know this. We've known this since Neural Nets were becoming widely discussed in the 1960s!

No form of Neural Net, even if renamed to LLM, is capable of explaining its reasoning or any other part of its behaviour. We *might* get somewhere by bolting something onto/into the 'Net, but we are not there.

If you want a machine that'll explain its line of reasoning, they are available (e.g. planning systems! The clue is in the name!). Shame they are harder to scale than just shovelling random junk into an LLM trainer.

Microsoft calls AI privacy complaint 'doomsday hyperbole'

that one in the corner Silver badge

Re: Nowhere do they say what of their private information Microsoft ever improperly collected

> you need a subscription

Again, that is an issue around copyright.

For example, the majority of information (as opposed to entertainment) I acquire comes from paying a fee to get a copy of the writing that contains the information[1]: once I have learned that "'for' loops in C/C++ are just syntactic sugar over the humble 'while' loop" I may be restricted from repeating that precise sentence[2] but I am in no way restricted in using that information nor in passing it on to all and sundry.

> I suspect there's a Terms breach

Putting in "you are limited as to how you use/replicate the information" is going to be really tricky and unenforceable - probably impossible: including all sorts of proofs that they "own" the information (which for a "news report" would mean that they made it up out of whole cloth). That sort of thing is closer to patent than copyright...

[1] I am old and still very much attached to reading physical books; easier and so much more enjoyable

[2] just go with me here; I know that specific example is too short and too generic to make a copyright claim stand up in any reasonable court.

that one in the corner Silver badge

Re: Nowhere do they say what of their private information Microsoft ever improperly collected

There is a big difference between the information contained in an NYT story, which is (now) public and can be freely repeated[1], and the expression of that information - the specific words and word order - which is what is copyrightable.

Spitting out verbatim (too large a chunk of) an article can be a copyright violation.

Repeating information from that article, using a different form of words - and especially if combined (even badly) with information gleaned from another source, is not.

In terms of a human, one is plagiarism, the other is research.

Not that any of this is supporting - or denigrating - LLMs, btw. Just - be accurate in thine attacks, lest ye be taken for an LLM thyself.

[1] until we get to questions of "right to removal"/"right to be forgotten", as appears to be the actual point of the complaint. Which, unless someone can provide a reference, seems to be aimed clearly at the LLM but not at the source of the information, the NYT articles, even though the latter are easier to get the info. Strange, that[2]

[2] cos MS is the Big Bad with deep pockets?

Grab a helmet because retired ISS batteries are hurtling back to Earth

that one in the corner Silver badge

The Master of understatement

> "The re-entry will occur between -51.6 degrees South and 51.6 degrees North." This is quite a large area,

Yes, yes it is.

As you don't give any range of longitude, that is pretty much - everywhere[1] that we are on the planet.

[1] not good enough with the old mental arithmetic to calculate the percentage of the Earth's surface that lies between those two, but "most" is not inaccurate.

Logitech MX Brio 705 – where Ultra HD meets Ultra AI

that one in the corner Silver badge

Re: 4K?

One good use of a higher res camera, especially attached to a (relatively) fixed location: you can clip to a small region of interest and still get a reasonable result for your viewers. You may need to run some extra software (e.g. OBS) if the Logitech stuff isn't up to the job.

Using an ROI is good for cutting out the wide angle shot that includes the window, the annoying coworker, the stoopid "motivational" poster. Or, when using it for "old fashioned"[1] sharing of sketches, just that bit of your desk.

And, if even the ROI is a bit large, it can always be shrunk down before going on the wire.

[1] no idea why sharing an image of a sketch is so derided - no matter how much colab software you are running, doodling on paper is still going to be a lot faster, easier and with generally better results for the majority of people than trying to whip up something in Inkscape or the equivalent.

Microsoft drags Windows Subsystem for Android into the trash

that one in the corner Silver badge

Any particular Android apps?

Not trying to be snarky, just genuinely curious.

For those who do like the idea of running Android apps under Windows, what apps are considered worth it? Assuming that you're not limited to just the Amazon Store selection.

World-plus-dog booted out of Facebook, Instagram, Threads

that one in the corner Silver badge

Re: Those trying to log into Meta's Facebook, Instagram, and Threads............Can't log in........

> Garbage, nothing but garbage I say!

I'll agree that Sturgeon's law works overtime for Facebook, but given that volunteer run organisations[1] find it a convenient and accessible way to contact members and disseminate news, without needing to be even vaguely technical, it is allowed to get past the PiHole[2][3]

[1] techies and commercial organisations have no excuse for using it

[2] although there seem to be some sort of holes in the FB pages, almost as if something is being cut out of them

[3] PiHole, so good I had to name it twice

Odysseus probe moonwalking on the edge of battery life after landing on its side

that one in the corner Silver badge

Re: Failure is an option

Just seal the balloon's feed tube (automatic cable/jubilee clip) and cut it on the balloon's side (sharpen the clip on that side).

The balloon will then (literally) rocket away, spinning merrily.

"In Space, no-one can hear you making whoopee cushion noises"

Starting over: Rebooting the OS stack for fun and profit

that one in the corner Silver badge

Re: In the absence of files...

From the first message, I was having a hard time understanding just what problem this is trying to solve.

Now you have given two use-cases, neither of which seem to warrant such a complicated scheme:

1) " if you uninstall an application you can lose access to data without realising it."

Is that something that happens a lot and is a real killer, so much so it needs this complexity? The file is still there, you can see it is still there; presumably you recall what application you uninstalled and can just re-install it? Or just look at the name and type of the file and install anything that claims to be able to read it (if you have no idea what was uninstalled, this may require a bit of searching, or just asking your friendly IT guy) - but is it something that occurs often? Indeed, if you are as totally application-oriented as this whole idea suggests, your thoughts would more likely be "Ah, I need this data that I usually access via SuperApp, oh dear, SuperApp is not installed, let's just install it then".

2) an idea that apparently "improves security because it prevents applications from opening data objects willy-nilly"

> An attempt to open secret_db.sql with - say - a hex editor will be impossible because you can't navigate to it.

Security for whom? We already have (unless you are suggesting throwing them away) mechanisms for providing security for the data based on the access rights granted to the User (many such mechanisms, of varying strengths); e.g. if your User does not have read-access to a directory containing secret_db.sql then you can't even see it exists, let alone navigate to it.

If an application is able to prevent data being opened then the only thing you are going to protect is the implementation details of that application's data files! That is adding a lot of complexity to the User's system to add a feature that is doing absolutely nothing for the User.

I'd also add that those two ideas can clash, badly. If you have uninstalled an application that can read a particular file - or never installed one in the first place - then methods to decide what type the file is include running the 'file' command and having a look inside with a hex dump. If the file is prevented from being opened because the correct application is missing and you can't identify the file type (no, file extensions are not always reliable) then - ooops. You really *have* lost access to that data!

> or as now when you try to open the file in your mail client the operating system suggests a compatible application

Once you have made your choice from the list of all the compatible applications that the OS has suggested (your OS only suggests one?), that isn't making the selected application "responsible" for the data; the email client just wants to let you view the data it can't render. In fact, if it is anything like my email client, the attachment as received is still in the email store, all you have is a temp file: if you want to keep that as a separate file, save it somewhere sensible now, as otherwise it'll get cleaned up along with the rest of the temp files at some arbitrary time after you've stopped viewing it.

> you open a compatible application and import it

Ugh, I absolutely loathe applications that "import" your data: either they just mean they'll open it for you (in which, just say "open") or are making a note of its location for future reference (in which case, just say "noted") *or* they are moving the file, renaming it, trying to take it over completely and prevent you from ever daring to make use of it in any other program. I keep as far away from that kind of program as I possibly can. For example, having tried it, twice, I'll manage all my ebooks (of which I have many - thanks to Humble Bundle) in a simple dir tree rather than *ever* install Calibre again (unless they stop "importing" and just start "taking note")!

If that "import" behaviour is the sort of thing that you like then I can see why I'm having a hard time with this concept.

> The browser would talk to Inkscape to render the SVG

Doing things like providing a rendering service for SVG to any program that wants it is a reasonable thing to do - and, as you mentioned, there are and have been ways of doing that (OLE being one such attempt). But they are totally divorced from any idea of "being responsible" for the data.

Clearly I am not understanding what this concept is supposed to provide to the User and how it warrants such complexity - and, as far as I can see, such ambiguity over "what application is responsible for the data" and the potential for duplicating data with clones for no good reason. Ah well. If it is a good idea that I'm just not understanding then I'm sure it'll appear somewhere and, seeing it in action, the penny will drop.

that one in the corner Silver badge

Re: In the absence of files...

> I observe that a number of music applications seem to have given up on the idea of, for example, selecting a music album and playing it from start to finish

And applications that take your carefully arranged files, one directory per album, some of the (newer) files even containing all the MP3 metadata (so even the files know which album they belong to) and carefully ignore all of it.

Our car radio does precisely this - it'll find all the files, put them in alphabetical order and then play them. So the Christmas albums USB memory stick becomes unbearable - we get every. single. version. of. "Come All Ye Faithful" one. after. another. And end the trip with "We Three Times 11 Kings".

Even worse, trying to play one story from a USB stick with a collection of Blake's 7 audio plays? All the "01 - Title Music" together, then all the "02 - Credits" and so on. OTOH if you find the plots a little simplistic then you'll probably enjoy trying to follow every single one of them at the same time!

> surely, the producer of the album has already curated it for you

Yes, and, as with the audiobooks, made sure that the Concept of the album is made clear by the order of the tracks.

Clearly, this is all our fault for not finding out what format of playlist this player can use (including whether it allows dir names within the playlists) and taking the time to write a program to create them all, because that is something everybody can do.

/rant

PS

This is relevant to the bigger picture, as it highlights the point: ok, you want us to label our data - but does everything that insists on reading those labels use the same format as everything else? Or do we have to keep on adding fields when trying to move from one application to the next?

that one in the corner Silver badge

Re: In the absence of files...

> they prefer categorisation by labels.

> Witness some of the comments here

What I notice, particularly in the Reddit comments example, is that they appear to be largely relying on labels that have been attached by a third party to widely-disseminated data (in that particular case, TV and Film, using things like the age ratings).

Other sources of data can also add labels (aka metadata) on behalf of the user: a photo on a mobile 'phone can[1] be GPS tagged, then looked up on an online map to add the venue name, and the shot sent to Facebook to have face recognition applied, tagging all of your friends; whilst my boring old camera will write a UTC timestamp and then anything else is left to me. So my photos are all stored in subdirs by date, with manual dir names added to give a brief description (venue, event, whatever is meaningful to me), if for no other reason than that is consistent over *all* my photos, no matter how old (including transfers from film), and I'm used to it by now.

Similarly, I have a load of MP3s that don't contain any metadata 'cos when they were ripped from the CD we were just glad to get an audio file. So now all my music is sorted using a directory structure that suits my needs, I'm happy with it - and don't really care whether or not the files that make up the latest album that I bought, directly in digital form, have any extra metadata in them.

Perhaps a major difference between the beardies and the youngsters is that the youngsters were never faced with doing all of the organising and labelling by hand, so never *needed* to learn any other organisational skills, such as creating their own hierarchic taxonomy of subdirectories, just out of necessity?[2]

[1] and, if the scenario given above is realistic, the youngsters would be less wary of just throwing *all* the photos at places like Facebook?

[2] although I do admit to also using a wiki full of notes (which can be searched by text, have category tags attached to pages, ...) where I also keep references to specific locations in the massive archive dir tree - but the amount of material that has been "tagged" in that way is miniscule, mainly 'cos the effort to do that is far too high - and it would lose the metametadata that those items are *so* super-special that they are worth the extra time to add them to the notes.

that one in the corner Silver badge

Re: Hit-and-Miss

> I mean, seriously, if it couldn’t, how could so many important ideas have been originated in it?

Well, version control is all about letting you make a complete mess of things and then have some hope that you can recover

You don't *need* a VCS to create an amazing piece of software and within that originate many marvellous concepts.

It is just a *lot* safer and less nerve wracking to do that with the VCS, especially if there is potentially more than one person working on a section of code[1]. DVCS extends that, making it easier (understatement) to spread your team around; it still isn't *necessary*[2], just helps maintain the sanity of those involved.

[1] The first commercially shipped, shrink-wrap software I worked on was written by a team of half a dozen and we had nothing that could be called "version control" - we each worked on dual-floppy PCs and it was only *just* possible to compile and link the entire program on the single hard drive equipped PC that we all shared. The closest we got to version control was someone running grandfather-father-son backups. We were careful. And delivered. /macho

[2] Barring things like management refusing to allow the longer timescales due to posting floppy discs around the globe - those sorts of things control whether you'll get the resources to write the software, not whether it is possible to ever do it under those conditions.