Please, stop making this about "the kids"
> The best way to protect our kids online is to protect everyone online
Absolutely. Direct and simple.
5065 publicly visible posts • joined 9 Nov 2021
> as I can tell libcurl is a library that exposes the curl program's functionality to C code.
More like, libcurl *is* all the cURL functionality; the curl exe is little more than a honking great CLI argument parser exposing libcurl's functionality to the shell - as is done with plenty of other libraries (e.g. just using a simple REPL to drive a language implemented inside a library: Lua, SQLite etc)
> you certainly know that reading from a socket and writing to a file descriptor is no big deal
True, but that is only the simplest part of the problem
> So I'm not sure who this library is for
Anyone who doesn't want to re-invent the wheel by not only writing to and then reading from a socket but also knowing exactly *what* to read and write in order to operate the required protocol, namely HTTP to get a web page (as in the example). Or FTP if you give libcurl an ftp:// URL, or a Gopher URL, or POP3 or SMTP or any of the other protocols that cURL can handle. Not forgetting how to cope with the various error responses (e.g. "page permanently moved") or...
You could always invoke the curl exe from your app but that is a bit of a faff, IMO, compared to calling the library function.
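To make the point concrete, here's a minimal Python sketch (a local toy server, nothing to do with libcurl's actual API): speaking HTTP by hand over a raw socket versus letting a protocol library do it - which is exactly the role libcurl fills for C programs.

```python
import socket
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# stand-in server so the demo needs no real network
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "5")
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):  # keep the demo quiet
        pass

srv = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

# by hand: reading/writing the socket is easy, *speaking HTTP* is the work
s = socket.create_connection(("127.0.0.1", port))
s.sendall(f"GET / HTTP/1.1\r\nHost: localhost:{port}\r\nConnection: close\r\n\r\n".encode())
raw = b""
while chunk := s.recv(4096):
    raw += chunk
s.close()

# with a library: one call, and status lines, headers, redirects etc. are its problem
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
srv.shutdown()
```

And that's before you get to chunked transfer encoding, TLS, authentication and all the other joys the library quietly absorbs for you.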
After Something Horrid happened to my Amiga A1000 (sob) I got a SCSI card for the Win2k box and moved the chain of hard drives over. That gave Windows a massive boost, no longer seeming to come to a complete standstill when doing something disk intensive, compared to the IDE drives.
Plus, at the time there were so few different types of SCSI drives on the market that when one of them decided to let the magic smoke out (literally blew a hole in its main IC) I was able to go to a Computer Faire down in Temple Meads that same weekend and easily buy a cheap second-hand one (cheap 'cos no-one else wanted SCSI, go figure) and just swap over the drives' controller boards.
> I couldn't look at that default 11 wallpaper. So, there it sits, installed and abandoned.
Dare I suggest opening an application and using that to cover up the wallpaper? How much time does anyone spend with the wallpaper visible anyway?
Okay, your 2GB won't be enough for anything that really requires some oomph, like The Modern Web Experience, but you can fire up a sensible code editor!
After hearing about "at will" employment practices and the layoffs at US companies (although the likes of Twitter hardly count as "tech", so I guess we can ignore them) this does sound rather jingoistic.
Although one could hope that instead of finding the full settlement in one go there was a conversion to rental. But would you expect any company, US or Chinese, to be able or willing to take on switching into the landlord business at a time when they are having to lay off staff? Nice idea, but realistically...
> Apartments were reportedly offered to core managers and R&D employees
So, not the lowest paid peons being evicted - probably not even a majority of those laid off.
> shopping trolley handles are known to have nasties like e-coli and salmonella on them.
Always have, always will do. There are those and loads of other microscopic bacteria on everything. Including me and you. Crawling up our legs, dropping from the sky.
https://youtube.com/watch?v=-kgIOWvrssA
Ah, just because the cost of running the blockchain is farmed out to everyone (want to use Britcoin? Run a miner on your electric bill! Proof of Stake!) doesn't mean you get to read it - govt spending kept under a different encryption key.
Until that gets leaked, of course.
> the way we pay for things becomes more digitalised
From the tap-to-pay token in my pocket through to the PDF monthly statements, how much more digital can it get?
> the potential for large and rapid outflows from banking deposits into digital pounds
> including the need for a central ledger to store user balances.
Ah ha - this is really just an attempt to create one single place to hold (ultimately) everyone's basic "chequing account", taking them all away from the high-street banks. That can't possibly have any down-sides.
Transaction fees, including conversions to and from BritCoin (you know it'll end up being called that, or worse - Coiny McCoinface!)? No risk of creating an effective split in who uses which (if bonk-to-pay works equally well for the customer, why would customers want to pay to convert?) so traders have to provide both services and absorb those fees as well, making for an expensive and confusing transition period.
Sorry, what was the point of this again?
> UK can't dismiss CBDCs, because its trading partners and allies are already working on their own equivalents
Well, we already pay conversion fees for cross-currency trading, so what does it matter to anyone if we're spending Sterling to buy Rupees or ERupees? We just get whichever our trading partner wants and take the conversion fees into account when deciding if the trade is effective.
How about if the two Ecurrencies were mutually convertible without any fees? That would help. They could peg them together as well, and get rid of all those arbitrage costs and uncertainties.
One World Currency?
Apparently, one of the "extra costs" faced by publishers of open-access journals is "preservation", i.e. making sure they don't accidentally delete your paper. This is a new burden for the publishers, because they have traditionally dumped that onto all the departmental libraries that subscribe to the journal.
So, a serious answer to your question: we do want to put some effort into spreading around lots of copies of these PDFs (hopefully with a copy of the original LaTeX source) and arranging that all the copies get indexed (even if that is just left to WWW crawlers).[1]
In comparison to Twitter's Single Source of Deletion (aka Elon's finger on the back of the neck of the peon sitting at the keyboard).
[1] and make sure we keep around enough hard-copies of the instructions on how to build PCs capable of displaying PDFs after the apocalypse.
Skimming Google responses:
Diamond access is just like Platinum. It is also like Gold, but not quite the same.
Gee, thanks.
Dig some more: with Gold, the author pays a few extra dollars - only a couple of thousand, nothing exorbitant (/s) - and readers can download for free (OR just email the author, as usual). With Platinum - sorry, Diamond - the huge, huge costs of providing the free download URL are borne by donations/public funding, to help out the poor downtrodden Elseviers of this world.
> From the UK, plenty of cases...
All those cases you bring up are reprehensible, but what percentage of the whole do they actually represent? Not forgetting that if they were really the norm, they wouldn't be reported, outside of the local rag.
> So, not a fan of trusting them.
Got anything better? Anything realistic that you could trust, or are you just giving up on society and digging your bunker?
> it plainly stores all of its training data
And it magically increased my ADSL speed when I downloaded Stable Diffusion last week, so that it could include a copy of all the training data! Oh, and it has also glued a few extra SSDs into the PC to hold it all!
What a bargain!
Shame it didn't upgrade the GPU as well, that is probably in the next release.
/s
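For scale, some back-of-envelope arithmetic (the figures below are my rough assumptions, not official numbers): a Stable Diffusion v1 checkpoint weighs in around 4 GB, while the training set runs to billions of images.

```python
# back-of-envelope: a ~4 GB checkpoint plainly cannot "store" a
# multi-billion-image training set (all figures are ballpark assumptions)
checkpoint_bytes = 4e9      # Stable Diffusion v1 weights, roughly
images = 2.3e9              # LAION-2B-en scale, roughly
avg_image_bytes = 100e3     # a *stingy* 100 kB per training image

dataset_bytes = images * avg_image_bytes
bytes_per_image = checkpoint_bytes / images
print(f"dataset ~{dataset_bytes / 1e12:.0f} TB, "
      f"~{bytes_per_image:.1f} bytes of weights per image")
```

Under two bytes of weights per training image. Some "copy".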
> nuclear engines that have NO air pollution
Air-breathing to save lifting reaction mass? Taking in atmosphere, heating it up, spitting it out the backend as NOx...
IIRC it was Arthur C. Clarke (in one of his non-fiction works) who described a nuclear-powered aircraft being used as a first stage (shades of Virgin Galactic), which you would spot in the sky by its lovely red/brown trail.
> It's an impressive number, but also a finite one.
It is also one that changes *all* the time, unlike current LLMs.
In your early life, you were pruning connections/weightings to get rid of the junk; in your prime you were absorbing new material faster than ever; and even in your final days you'll be learning new behaviours, even if that is just how to shuffle slowly to ease the aches in your bones.
And all of those updates occur without shutting down and restarting the learning process from scratch again.
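As a toy illustration of that difference (a running mean standing in for "learning" here, purely for the shape of the idea, not as a model of either brains or LLMs):

```python
# incremental update vs full retrain: fold each new observation into the
# existing estimate as it arrives, no restart-from-scratch required
def online_mean(mean, n, x):
    """Update a running mean (over n samples) with one new sample x."""
    return mean + (x - mean) / (n + 1)

mean, n = 0.0, 0
for x in [3.0, 5.0, 7.0]:
    mean = online_mean(mean, n, x)
    n += 1

# identical to recomputing over the whole "training set" from scratch
assert mean == sum([3.0, 5.0, 7.0]) / 3
```

Current LLM training is the "recompute from scratch" column: new material means another multi-million-dollar run, not a cheap fold-in.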
> leak the source code to a rival car company
Was that (a) "leak the Tesla code and send it to a rival of Tesla" or (b) "leak the code of a rival and send it to Tesla"?
> and become 'somehow' extremely rich shortly afterward
In case (a) - well, if the rival really likes a good laugh, they might pay well.
https://youtube.com/watch?v=nsb2XBAIWyA
OTOH being able to look back over my order history is quite useful: ah, I was right, I *did* order one of those doodads 6 years ago, but it was sent to Bert directly for his birthday. And I did buy that book and it was sent to my current address, so I'm not imagining it; worth carrying on looking to see where it got buried.
So how much of the order data is there absolutely "no reason at all to keep"?
Forced arbitration clauses - not something you ever want to find[1] in your contracts, any contract (insurance, employment, ...)
[1] which is why the paperwork is carefully worded to obscure the clauses and hide the consequences from you: it is a kindness.
Time was, you could only make teeny tiny models that fit within the available RAM, not these humungous beasts. Which meant that, when let into the big wide world and dealing with anything outside the training set, the cracks would show up much faster and be more obvious to everyone. Although they were a lot quicker to train (at the easy end of exponential data crunching versus linear Moore's Law) so you could chuck 'em out faster.
Go back to the old automated-trading scams (not that anyone ever actually did this, no, nope, on my life guv): train up (teeny) models on a stock market feed. Make a fair few of these, with random variations (number of nodes, different data subsets) so that they don't all end up identical but they all re-create the historical data pretty well. Take out a great big ad stating how your amazing system managed to track the market so far (well, duh) and promise Great Riches to anyone buying it to "predict" how the markets will act tomorrow. You can even keep adding to the ad copy "certified satisfied customer reports" from purchasers who "struck gold this very week"!
Of course, everyone got a "customised model", most of the predictions were effectively random (and it has been shown more than once that random stock picking is at least as good as most pundits and traders...) and only the "big wins" are ever glorified in the press (who can be trusted to take on the burden of bolstering your ad copy). After a short time, the models are so far out of their depth that anyone can see that they are generating gibberish and the customers quietly feed their copies into the shredder (one of the great advantages of using floppy discs and CDs when buying software: you at least got physical revenge). You've quietly folded your company (or, better yet, flogged the whole thing to an even wider boy - sorry, I mean "a respected trading company") and trotted back to Peckham.
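The whole con can be sketched in a few lines of Python (random guessers standing in for the "models"; every name and number here is made up):

```python
import random

# "market" history: 30 days of up/down moves
market = random.Random(987654321)
DAYS = 30
history = [market.choice([+1, -1]) for _ in range(DAYS)]

def predictions(seed, days):
    """One 'customised model': a seeded random up/down guesser."""
    rng = random.Random(seed)
    return [rng.choice([+1, -1]) for _ in range(days)]

def accuracy(pred, data):
    return sum(p == d for p, d in zip(pred, data)) / len(data)

# make 5000 models, keep whichever best re-creates the historical data
best_seed = max(range(5000), key=lambda s: accuracy(predictions(s, DAYS), history))
in_sample = accuracy(predictions(best_seed, DAYS), history)  # goes in the ad copy

# "tomorrow": the same champion model against 2000 fresh market moves
future = [market.choice([+1, -1]) for _ in range(2000)]
out_of_sample = accuracy(predictions(best_seed, DAYS + 2000)[DAYS:], future)
```

With 5000 tries, the best in-sample fit looks uncannily good (well, duh); out of sample it's a coin toss, which is the entire business model.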
But now it has all gone horribly wrong.
- the costs of the models are obscene, with the obvious repercussions (like face-saving: when you call out the quality of the goods, Rodders will look nervous and Del will give you the gab, but the boys playing with this much dosh have got nice suits with strange bulges; metaphorically, of course).
- the cracks start off tiny, compared to the size of the model and the amount of stuff it can regurgitate, so they are easily brushed off (that 10-line routine forgot to declare "loopCount", you fixed it without even thinking, almost out of habit)
- there are only a few models which everyone is playing with, so as the cracks propagate no-one will be spared from falling into the Underworld of Nightmares.
- the more people get themselves invested into something big and shiny, the less able they are to give it up. And they can't even get catharsis from the shredder (although putting one end of the Ethernet cable in might be fun, it'll still wreak havoc on anyone in the vicinity).
> So it should be with training AIs.
Well, yes - and then we'd be getting back to "actual" AIs, with internal models that are properly explained/explanatory. Expert Systems, purely as an example, can have their rule set modified piecemeal, without having to start the whole process again.
But these massive nets being sold as "AI" - there is no way to point to any of the numbers in the model and say "we can tweak that to get this result", or "removing this and adding that to the training set will change this set of numbers, and nothing else" so no piecemeal updates for you.
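For contrast, a toy forward-chaining rule engine (the rules are made-up examples) showing the piecemeal update that the big nets can't offer:

```python
# toy expert system: the "knowledge" is explicit rules, so one rule can be
# swapped without rebuilding (or "retraining") anything else
def infer(facts, rules):
    """Forward-chain: keep applying rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "swims"}, "is_penguin"),
]
before = infer({"has_feathers", "swims"}, rules)

# piecemeal update: tighten exactly one rule, leave the rest untouched
rules[1] = ({"is_bird", "swims", "cannot_fly"}, "is_penguin")
after = infer({"has_feathers", "swims"}, rules)
```

Every conclusion is traceable back to named rules; try asking a 175-billion-parameter blob which weight made it say "penguin".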
If a desire to fool the journals leads to these models being able to generate accurate citations, then that could hopefully be added to GitHub's Copilot, which would fill the gaping hole of lack of attribution for the code it regurgitates.
Sadly, this won't happen, because the goal of the LLMs would be to generate citations that just *look* right but aren't necessarily accurate[1], the same as the rest of the text.
[1] Bloggs A., Chandra P. et al, 1981, p12 - 17.
Except (as already pointed out) with plain old files in /etc you can do things like version control - etckeeper for example.
Then you can check the history trivially (ok, what changed *this* time?), add comments when you actually know why and what made a change, and so on and so forth.
All with the same VC tools you know and love.
> In fact, it's arguably easier to clean up a transactional database because it "knows" who made what change when.
Ok, leaving aside the "when FAT was all that was available" discussion and looking at the here and now: can you point me at the place where I can find the transaction list for the Registry that'll let me see it knows "who made what change when"?
> It was needed because the only other option was FAT. I've been developing for Windows since before the registry existed
None of your examples relate to the situation when the Registry was created, back in Windows 3 days when, as you pointed out, FAT was all that was available.
By the time we had a multiuser OS, with preemptive multitasking, we also had NTFS which provides the access control, file locking etc that FAT lacks.
> It was created because COM and OLE require a central repository with those features, and FAT cannot do that.
OLE needed a central repository - but, again, at the time when FAT was the only filesystem available, said repository didn't require the extra features.
64 bit DLLs? When FAT was the only option?
> The registry is the Windows equivalent of a filesystem designed for the /etc tree.
Funny, I don't see anything inside /etc that needs a special kind of file system. Care to give us an example of what you are referring to? There are loads of soft and/or hard links under /etc but the Registry doesn't provide any mechanism to duplicate those (most certainly not in the days when the Registry was invented)
> It was needed because the only other option was FAT.
How does the Registry do anything there that FAT can't? Well, aside from having keys that are random hex strings way longer than 8.3?
Were you thinking of the way that volatile values within the Registry change to reflect what is going on in the system, like how loaded it is? But that would be the equivalent of the /proc filesystem - 'cos Linux keeps things like that separate from config data - and those values weren't in the first Registry either.
Or were you thinking of the way you're supposed to keep volatile program settings, like the last window positions or the last ten files opened, in your app's portion of the Registry? Isn't that the sort of thing you'd have in the User's home directory?
So far, we've got the Registry being /etc, /proc, ~/.* and I'm sure others can point out what I've missed (some stuff that belongs under /var in all likelihood). Rather a mess.
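A quick Python sketch of that separation as it looks on a typical Linux box (assuming the usual /etc and /proc paths exist, which they do on practically any distro):

```python
# the separation the Registry collapses into one blob, Unix-style:
from pathlib import Path

config = Path("/etc/os-release").read_text()  # static system configuration
state = Path("/proc/loadavg").read_text()     # live kernel state, not config
user_cfg = Path.home() / ".config"            # per-user application settings

print(state.strip())  # e.g. five fields: three load averages, procs, last pid
```

Three different kinds of data, three different places with three different lifetimes, and none of them needs a binary hive or a special API to read.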
> Created to appease rights holders and integrate DRM into Windows
Not really.
The Registry came about because object linking and embedding (OLE) *needs* a central repository: if you've embedded a Visio image inside a Word document then Word has to know where to find Visio and how to invoke it in order to let you edit the diagram in situ. Forcing the use of an API to add keys for new OLE components was sensible - you just know that without it every third-party component would simply copy its own file over, instead of appending its settings to a global file.
The rot set in immediately, when someone decided that INI files needed to be got rid of (presumably because you can write comments in a plain-text INI file and doing that just made it all too easy for the User to understand).
Then it became possible to really complexify[1] things: along comes NT, and once you really have a multi-user OS where "logging in" actually does something, you can't just treat it sensibly and store it in a directory under the User's home directory, because there are system-wide as well as personal INI settings. So now we get Registry Hives (so-called because of the unbearable itching feeling you get in the brain from trying to follow this mess). Ha ha, you thought you could back up all your config by saving just one file! Don't dream of restoring!
And look, we can even subvert the entire idea of having it be just a repository of (fairly stable) config items by putting in "active" data! Want to profile your system? Just read these Registry values! What, you thought you could dump the Registry, run that installer, dump again and use the diff to find out what keys the installer changed? Oh, you silly naive User, we'll never tell you what keys do what.
[1] What, complexify doesn't get underlined in red? It is in the dictionary? Good grief.
> it's as if people don't understand that building something with no design for expansion then patching each new feature on in the way that takes the least amount of time has any downsides. They don't seem to think that there's anything bad about the fact that there are three systems for doing the same thing, all of which work in different ways, have different interfaces, and are missing a few features that another one has.
Odd, I thought the discussion about Windows Registry was going on in a different thread.
It would be massive overkill to use something like a Dragon to land this material, especially given that the capsule would have to catch the payload and stow it before the landing (although that might be the sort of thing done for the first one or two returns, to give them a good chance of returning intact for analysis, excess costs being rolled into R&D).
Depending upon the materials being returned, they won't need the mollycoddling a Dragon capsule can provide - plenty of iron lands as meteorites and is still extremely cold from space on the inside (ok, it had a looong time to cool, which this ore won't have). It will need some ablative material (don't want to waste the good stuff) and some braking/steering but still nothing terribly precise: long term, let piles of small payloads whack into the ground across a broad area of desert. Go out once a month with your tracking data to collect them. I'm thinking that actual landing velocity might be chosen based more on whether the landing-assist package could be reused if treated carefully or whether it wasn't worth the effort!
By the time an actual return is ready to be tried, we will have had more practice at things like skipping across the atmosphere to reduce speed and fly with minimal control surfaces towards the landing zone. Maybe then something weirdly simple might be dreamt up and made to work for the rest of the braking & landing.
Projector screens are retroreflective. Nip to the local fleapit to borrow one of those for protection and you can camouflage yourself as a drive-in cinema.
For bonus points, show the enemy "Primer" and take advantage of the confusion as they try to figure out what is actually going on in that movie: https://xkcd.com/657/