* Posts by doublelayer

8287 publicly visible posts • joined 22 Feb 2018

250 million-plus unused IPv4 addresses should be left alone, argues network boffin

doublelayer Silver badge

Re: Really?

My point was that the change isn't a switch on the web UI. It's not a line in a config file. In many cases, it's a kernel code change, which isn't that many lines of code but still requires recompiling the kernel and pushing out that update. Most devices that ISPs provide don't get new kernels from them. New kernels come from the manufacturer, who forgot all about these boxes years ago. In many cases, the update files don't even change the kernel, just the configs and libraries, making kernel updates a more involved process to apply.
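
For a sense of scale, the historical restriction amounts to a range check on the top four bits of the destination address. Here's a minimal sketch of that kind of test in plain C; the function name and layout are mine, not any actual kernel's source, which wraps this sort of thing in its own macros and byte-order handling:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical illustration only: 240/4 means "top four bits all
       set", so the reservation check is one mask-and-compare.
       Addresses here are in host byte order for simplicity. */
    static int in_240_4(uint32_t addr)
    {
        return (addr & 0xF0000000u) == 0xF0000000u;
    }

    int main(void)
    {
        uint32_t reserved = (240u << 24) | 1u;              /* 240.0.0.1 */
        uint32_t ordinary = (8u << 24) | (8u << 16)
                          | (8u << 8) | 8u;                 /* 8.8.8.8 */
        printf("240.0.0.1 in 240/4? %d\n", in_240_4(reserved));
        printf("8.8.8.8 in 240/4? %d\n", in_240_4(ordinary));
        return 0;
    }

Deleting a check like that is the easy part; rebuilding the kernel and getting it onto every deployed box is the hard part, which is the point.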

There are some other devices which have separate hardware for routing that may have that rule coded in, and in that case, the change is bigger. Again, not that many lines of code to remove the logic, but a lot of stages required to get someone to make that change and get it installed on all the equipment in which that hardware exists. If you think ISPs have the ability to simply make any change of this magnitude, I'm surprised to see you constantly annoyed at IPv6; all they'd have to do is push a kernel version with the existing IPv6 support turned on, after all. That is a lot of work too, but at least a lot of it has already been done. When my ISP didn't offer IPv6, the (optional) hardware they supplied already had the support. They just didn't give out addresses. Now they do support it, and most modern routers, including the hardware from before the switch, will just connect and use it.

doublelayer Silver badge

Re: Elephant

They quoted a /8 a month, and 240/4 contains sixteen /8s, so that would make it 16 months, about 1.33 years. That is, unless the countries that have had to do CGNAT at ridiculous scale would like to stop, in which case you could use those up in about a week. Most likely, even if we could turn on the whole block, countries that have 1 IP address per 300 people won't get very many of them. If we do all the work of enabling those, I'm sure the cloud providers will happily buy up the blocks and rent them back to everyone with cloud resources.

doublelayer Silver badge

Re: Really?

A lot of that equipment has not gotten security updates in a very long time. The only reason it's not as big an issue as it could be is that there's a lot of different equipment, so vulnerabilities can't give you access to all of it in one go, and a lot of it hasn't been penetration tested enough to find them. Lots of botnets have used some subset of those routers to propagate. While ISPs can theoretically update the ones they provided, they quite often don't; they might not really be able to, because all they have is remote access to the configuration, not the code; and they can't manage some relatively simple changes without breaking their users, so they are conservative with any changes they do make.

Adding either the 240/4 block or IPv6 support requires significant changes to a low-level part of the box. No, an ISP is not going to do that to the equipment they've already sent out unless they have a true monoculture and only have to do it once. They're probably not going to do it to the hardware they own which can't handle those ranges either, even though they don't have to send someone into their customers' houses if they mess that one up. The major difference between 240/4 and IPv6 is that a lot of their equipment will support IPv6 as it is replaced for other reasons, but most of that doesn't support 240/4 either, because the range was reserved for future use and packets to it were discarded early for speed. Adding either is hard, but IPv6 is easier, more likely to work, and more useful into the future.

China wants red flags on all AI-generated content posted online

doublelayer Silver badge

Independent of their bad reasons for wanting it, the other problem is that their request is not feasible. It is impossible for you to determine whether I wrote this comment or had an LLM write it. Sure, you can guess that, because it makes sense (I hope it makes sense), an LLM probably didn't write it. However, I could write many prompts and choose the most coherent response to post, and many LLMs are good at writing superficially plausible paragraphs. You could do an analysis of previous posts I've written and try to identify linguistic patterns and match them to this one, but I could have trained an LLM on my previous writing, or a lot of those posts could also have been LLM-generated. Whether or not we think this is a good idea, I don't think we're likely to get it.

The only way of getting close is to require that every post is trackable to its origin and can be compared against everyone else's posts to identify the spread of something AI-generated. Even that won't do a great job, and it would be a privacy nightmare. We can agree with China's stated goals, but in addition to disagreeing with their actual goals, we probably have to use different methods if we actually want to come close to achieving something. I'm not sure any methods will work, but I'm quite confident that theirs will not. Most likely, they won't bother to enforce this stuff, because it isn't really feasible to check whether every video on the internet has some AI-generated content in it, but they can keep it in the toolbox if they want something to punish one of their tech companies with. This is a pattern that has applied to many of their regulations before; most of the things that are illegal elsewhere are illegal in China, but they only enforce that if they have another reason to do so.

I don't know what pressing Delete will do, but it seems safe enough!

doublelayer Silver badge

Re: This is a common problem

I've said this before, but it's worth repeating. That will just annoy people who do an "rm -r" and have it turned into an "rm -i -r" that warns on every file in the directory. To get around that, they'll do an "rm -rf", because -f overrides -i. This has two bad consequences: they're not going to see the things normal rm would warn about because they put in a -f, and they're now starting to build -f into their instinctual use of rm.

doublelayer Silver badge

Re: Ouch!!!

There probably was one, but those boxes often lack important details. If you see a box that says "Are you sure?" with the title "Confirm deletion", then you'll probably click yes unless you already know you pressed the delete button by accident. It doesn't say what it is deleting, just that it's deleting something. Even boxes that specifically name the thing will get some click-through, but it doesn't help when lots of confirmation boxes are very low on detail.

Win 11 refreshes delayed, say PC makers – and here's why

doublelayer Silver badge

Training LLMs to the extent needed so they look even slightly useful is very expensive in power, time, and chips. That is not something that can easily be done on your own computers, though if you're motivated enough, you can manage something. You'll just use every computer you have and can borrow or steal for a year and get something not that great at the end.

Running them is easier. The biggest requirement is RAM. To run the headline models, you'll need a ton of it. GPUs with 100 GB of RAM are expensive, so it's not easy to run those. You could, if you wanted, buy one and run a large model. But there's yet another step: intentionally smaller models and quantization. With each of those downgrades, the quality produced by the model decreases slightly, but they make the models easier to run. I've done it to play around. With 16 GB of RAM and no GPU at all, I can run an LLM that generates plenty of text and code. Without the GPU, it generates more slowly, but the response generally starts to appear within ten seconds and is complete in a minute. If you don't have 16 GB of RAM, you can quantize that model and make it run in as little as 4. In my experience, the quality is not that different from the larger models in that yes, they do tend to give you stupid and useless output, but I get that from the large ones too.
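
For anyone who wants to check the arithmetic behind those figures: a model's weight footprint is roughly parameter count times bytes per weight. A toy calculation, using a made-up round "7B" parameter count rather than any specific model, and ignoring the working memory a runtime needs on top:

    #include <stdio.h>

    /* Rough weight-memory estimate: parameters x bytes per weight.
       Illustrative round numbers only; real models and runtimes vary. */
    int main(void)
    {
        const double params = 7e9;                 /* a "7B" model */
        const double bits[] = { 16.0, 8.0, 4.0 };  /* fp16, int8, 4-bit */
        for (int i = 0; i < 3; i++) {
            double gb = params * (bits[i] / 8.0) / 1e9;
            printf("%2.0f-bit weights: about %.1f GB\n", bits[i], gb);
        }
        return 0;
    }

That's how a model that wants about 14 GB at 16-bit precision squeezes into roughly 3.5 GB at 4 bits, with some loss of quality along the way.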

If you have something that would benefit from a locally-run LLM, you can accomplish it. The Register has had a few articles explaining some ways to run them. You don't need an NPU or Microsoft Copilot (any of them) to do it.

doublelayer Silver badge

Re: Cartel Is in Full Operation

All the 2020-era machines already support Windows 11. So the last refresh already makes another refresh unnecessary. I have seen some companies who refresh their computers every 4-6 years, and this is probably what Dell is thinking about. When I have seen that, it has always seemed wasteful and pointless to me, especially as many places I've worked use the slightly longer period of "until it catches fire or there are bits hanging off". I prefer that one most of the time, though it can be taken a little too far.

doublelayer Silver badge

Re: Five year old PCs need replacing :o

If you're actually using the AI, lots of retraining. Training that sounds like "after it wrote the email for you, read it over. If by some miracle it actually looks like something useful, read it again. If it still does, show everyone. Either one of them will see the big problem in it or we can all witness that an LLM managed to do something accurate for once".

But if it's just Windows 11, not so much. There are always users who need retraining when an icon moves, but that's not everyone. I've run Windows 11 for years now on corporate and personal machines and, although there are some differences between it and Windows 10, it's not anything so massive that users en masse will be confused about how to use a computer.

Begun, the open source AI wars have

doublelayer Silver badge

You may not know why everything that is there is there, but you at least know what is there. That is the point of having the source.

Let's consider a parallel to software. Microsoft Windows is very similar to LLMs in a lot of ways:

Difficult to actually use the source: If you had the source, it would be hard to use. Building it takes days on a big set of parallel processors.

Difficult or impossible to open source: Even if I were named total controller of Microsoft tomorrow, it would be very difficult and expensive for me to get the Windows source code open, because there are lots of components they don't own and lots of license snarls to untangle, to say nothing of trying to distribute it to everyone who might want a copy or to provide documentation.

Modifiable: As CrowdStrike has recently demonstrated, I can write some code and embed it deep into the kernel. That's very similar to how I can make some modifications to an already trained model to better tailor it for my purposes.

Modifications distributable: Kernel-connected programs can be distributed in source or binary forms and installed by other people without needing to get Microsoft's permission first, though if I don't give them permission, the users have to work harder to install it.

So does this mean that Windows should be considered open source? The only thing I'm missing here is the ability to see its source code and make modifications that specifically involve changing that source code, and I couldn't feasibly get that code or easily use it if I had it.

I think the answer should be obvious. All these factors can be very relevant to whether someone considers it worthwhile to open source something. They are not relevant to whether they have done so.

doublelayer Silver badge

Whether a model is too large to feasibly open source doesn't change how this works. There is no requirement that they be open source. I have some large software which is free, but it's not open source. It's still useful. Large models which are distributed under generous terms can be very similar. The problem is that one of the freedoms provided by open source is missing: the freedom to know what is in the program you're running. The FSF refers to this as "freedom to study how the program works", so I disagree that all four freedoms are there. So far, only one seems present. Without the training data, the prompts the model is based on, the ways it was trained, etc., you have a black box, very similar to what I have in the binaries that I'm allowed to use without knowing what's in them.

In practice, open sourcing a model could be so hard that it's not worth doing. Retraining it to confirm that the binary you have matches the source data, or at least to make yourself a binary that does, may be infeasible. Neither of those realities changes what it means to be open source. This should be evident when we compare it to a small model. If the writer of a small model with a gigabyte or two of training data refuses to let me see, let alone modify and retrain on, that training data, it is not open source and, as far as I can tell, they might have all sorts of extra unadvertised stuff in there.

Without this, the freedom to modify is restricted. Yes, I can use other methods to change how the model works. You can argue that some of those count as modifying. Several of them are more akin to building a system around the program, the way that if I call a binary from my program, my program isn't a modification of that software, just a user of it. However, I can't modify everything. My ability has been constrained, and not by my access to resources; it has been constrained by the unavailability of the source to this program.

doublelayer Silver badge

That is a major problem for large models, and it makes it harder to open source them. For that matter, it makes it harder to legally make them at all. This does not change the definitions of open source in the slightest, and there are very close parallels in other open source code. For example, there are lots of databases that have been painstakingly written by companies and that are useful in a variety of programs. One such type is linguistic data. Open sourcing that is really hard because it took a long time to make, so you usually have to license it for very narrow uses under very restrictive license terms. I wouldn't get to claim that my software is entirely open source, just with an exception for those databases you need to run it. At best, I can say that my software is open source but won't work, as well or at all, without these closed source libraries. That is what I would do, and people would deal with that if they wanted to run using that more expensive data. I wouldn't lie and claim to be open source anyway.

For smaller models, it is easily achievable. I recently dealt with a model which I thought was badly trained and generating bad results. So I obtained the training data, which came to about 20 GB and was permissively licensed, modified some parameters, and retrained in a different way. I can't do that without the training data. That model was actually open source. For the headline models such as LLMs or those that generate images or video, 20 GB is tiny as training data goes. For my model, which extracted data from images, it was actually quite large. This makes no difference to the definition. If a model is open and has 100 TB of training data, then, provided I bother to get that many disks and that much network bandwidth, I have to be able to download that training data and run on it. That makes distributing it a lot harder, which is another reason they'll probably choose not to do it, but it doesn't change what is required to have an open model.

doublelayer Silver badge

It depends how large a model is. Lots of small models can be retrained from the training data by one person. However, I think that is unimportant. Whether people choose to retrain or not, they still need the training data, or they don't have all of the stuff that goes into the model. Trying to call something open when something that crucial is closed off is similar to this argument about open source software:

Faux-open guy: It's open source.

Me: I couldn't compile and run it.

FO guy: But you didn't, did you? You downloaded the binary release and went with that.

Me: But if I had downloaded the source, it wouldn't have built the entire application, just a couple libraries that connect to the rest of it.

FO guy: But if you had the rest of the source, it's large and it would take hours to compile, so you don't need that. It's open source.

No, it's not. The model without training data is not open either. It's just a free as in beer model. I can bolt stuff on to a closed-source free model just as much as I can to one they've called open.

doublelayer Silver badge

It's not pointless. Lots of models have training data available. I have several right here. True, none of them are LLMs. The ones I have are narrower ones that can recognize interesting information from images or model specific actions, but if I want, I can download the training data and the configuration used to train the model, fire up my GPU, and a few short days or weeks later I'll have built my own copy of their model. Releasing the data is easily done as long as you have the rights to it, which may be one of the reasons why some people who want to call themselves open don't want to.

OpenAI's latest o1 model family tries to emulate 'reasoning' – tho might overthink things a bit

doublelayer Silver badge

Re: Chat GPT Apples

I'll try this once more, but it is clear that you're trying to excuse obvious errors and unintentionally insulting people in the process.

Me: If the program meant that you took two away, leaving three, and then something happened to the three, leaving two, then it would say so.

You: Why would you assume that? Moving goalposts?

Because it clearly did say "If you have 5 apples and take 2 away, you would have 2 apples remaining." That was contrasted with the "take away" = "I have the ones I took away" branch. There is no room here for additional actions affecting one of the remaining apples. It could have said it: "If you have 5 apples and take 2 away, maybe someone grabbed one of the remaining ones while you were taking yours, and then you would have 2 apples remaining." It did not. It subtracted wrong. LLMs calculate wrong all the time.

All sorts of people interpret this question in many ways, some deliberately and some because they can't help it. I have done it. Others in these threads have done it. The main difference is that, even though we've all come up with different interpretations of various apple-related subtraction problems, we either arrive at a "not enough information" answer, or we state an interpretation and then correctly solve for the number under that interpretation. Nobody simply declared that the question was unclear, so the answer was definitely 42, and then refused to tell you how that happened. Autism does not cause people to do that either if they're actually trying to solve the problem. While I have no reason to think that I am on the spectrum, I know many people who are, and while they may make fewer assumptions than others would, they would do the calculations right.

From your posts here and elsewhere, you clearly like LLMs as a tool. I have found them frustratingly unreliable. Sometimes, their answer is useful and relevant, but very frequently, very simple errors make their output useless and the time I spent getting the answer wasted. Who knows, maybe you're just excellent at making them not do that in the first place. The problem is that your eagerness to justify obvious errors as not being errors suggests that you may not be the best user of them either, just one who refuses to admit the quality issues in the output you are putting to use. You're also unintentionally insulting people with autism by suggesting that they would do the same kind of shoddy work, when many of them would not and are, in my experience, more conscientious about the quality of their work than others.

doublelayer Silver badge

Re: Chat GPT Apples

People with autism do not work the way you're trying to argue they do. If the program meant that you took two away, leaving three, and then something happened to the three, leaving two, then it would say so. If it was swamped by the possible ambiguities because the problem didn't include the sentence "No other actions occurred on the apples concurrent to the taking away process", then it would have rejected the question for ambiguity and not given an answer. It got the calculation wrong. Stop trying to come up with arguments, incorrectly suggesting that a class of people is incapable of simple arithmetic, to justify that.

doublelayer Silver badge

Re: Chat GPT Apples

They said the ambiguity was intentional. The calculation error, however, was not related. Autism may or may not mean that ambiguous questions get questioned or rejected more, but it doesn't make arithmetic errors the fault of the question asker. If it only considered the meaning "take away" = "I have that subset", then it could just have answered 2. When it also considered the meaning "take away" = "I don't have that subset", it does have to answer 3 to that branch or be wrong. It was wrong.

Incidentally, there is another interpretation of the question. I have five apples. I take two away. At the end, I have five apples. Two of them are with me, and three of them are back there where all five used to be. I didn't give away or throw away, so I never said that I would cease to have either subset.

Feds pull plug on domains linked to import of Chinese gun conversion devices

doublelayer Silver badge

Re: Repeal the de minimis provision

That's where it would stop working. If I want to import a small item, I have to find a company willing to resell it to me, get the manufacturer in China to sell it to an exporter, get the exporter to put it in a shipping container, get the container passed by customs even though it probably contains a thousand individual packages with different things in them, wait for my distributor to sort out my package and charge me their markup for the action of receiving a box off a ship, then have them send it the rest of the way. None of that is at all related to the price of sending boxes through the mail. That's the point that the original post inaccurately conflated with the rest of the regulations, because it is entirely independent of the rest of this. Others have described what happens when you have this company in the middle. Either you can't get an item at all, or you can only get it from someone who has applied a 200% markup because they're the only person who has imported a box of them and you can't manage it yourself, and you certainly can't use whatever supplier you want because, if you're lucky enough to find someone importing them, they've already decided on one supplier.

There are two problems that are worth solving. One is that we're paying more than China does for sending packages. That's worth fixing, and although it's a tricky diplomatic exercise, it's unrelated to the rest of this. The second is the existence of dangerous products. Given all the dangerous products stored in bulk locally in Amazon warehouses, removing the customs exemption for small packages isn't a simple answer to that problem either. Most likely, there will still be dangerous items, and we won't make a dent in them unless we spend a lot more money on individual enforcement actions against their sellers.

doublelayer Silver badge

Re: Shutting down the web sites is not the answer

Depending on the location, it can be more severe than that. Usually, it's written as "if they would not have committed the crime without the actions by law enforcement". The US can be one of the stronger countries for it, where entrapment has effectively been redefined as "if their lawyer can argue that they would not have done this without the police involvement". That's no guarantee that it would work, but almost certainly, most of the people arrested for using the fake site would try the argument.

doublelayer Silver badge

Re: Repeal the de minimis provision

I am not an expert, and you suggest that you know a lot more than I do, but I'm not sure all the problems you've named are related to the provision they're talking about. For one thing, how much the UPU says shipping a box should cost seems wholly unrelated to whether customs will scrutinize the box. You could remove China from the list of countries whose shipping gets subsidized without affecting this provision at all. I think doing that makes sense, but by bringing it up, you have slightly harmed your other points because that seems unrelated.

The other problem with your description is that many of the things that we want to buy from China don't come in bulk in a container, because not that many of them are made. If they are custom-designed or low-volume items, then the makers won't bother to have a local distributor or warehouse, make enough that they can have local supplies wherever their customers might be, etc. They might, but probably won't, do that for the United States in the hope that hundreds of millions of customers might make it realistic. That's definitely not going to happen in small countries, which will find it almost impossible to buy the things in the first place. We have to balance that against the problems of dangerous or counterfeit goods sent one at a time, but I'm not sure the balance should always come down on the side of making small shipments infeasible for everyone.

Post-CrowdStrike catastrophe, Microsoft figures moving antivirus out of Windows kernel mode is a good idea

doublelayer Silver badge

Re: Bad response from Crowdstrike

But low-level code basically doesn't. If you're writing in assembler or in C, a lot of important things do not raise exceptions for every invalid situation. To be more precise, C doesn't have exceptions at all. Some libraries have their own things that look kind of like exceptions, but they all work a little differently from each other, and a lot of code you work with won't have those either. Good functions, including kernel ones, do check for and report errors, but you have to check the result manually. If you don't, execution will proceed. You can't do a global check either. For instance, if you try to allocate memory and there isn't memory available, you have to check that right after trying, and if you don't, your code will continue to execute until it tries to use that memory that never got allocated.
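
To make that concrete, here's the pattern in plain C. Nothing forces the check: delete the if block and the program compiles just the same, then marches on into memory that was never allocated.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* malloc reports failure through its return value, not an
           exception. Skip the test and execution carries on regardless. */
        char *buf = malloc(64);
        if (buf == NULL) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }
        strcpy(buf, "checked before use");
        puts(buf);
        free(buf);
        return 0;
    }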

That's just talking about organized system calls which return errors in a way other than exceptions. There are lots of other things where you don't even have that. If you're dealing with locations in memory and you refer to a location that doesn't exist, isn't yours*, or doesn't contain what you intended it to contain, that's not an exception situation. You will get, respectively, a crash that can't be recovered from, a crash that can't be recovered from, and the wrong data, which will probably lead to more bad pointer arithmetic somewhere which will, hopefully soon enough, give you a crash that can't be recovered from instead of a bunch of shredded data and then a crash that can't be recovered from. The only way to deal with this is to put more effort into not messing up, and into assuming that you messed up anyway, so you try to find out where before the users run the code. CrowdStrike did not test their file in the way they needed to. It was a QA problem, and one I have a feeling they've fixed now, but they needed to learn that configuration files that get executed are code and can break things just as badly as the code that looks more familiar.

* When you run as the kernel, basically all of RAM is yours. While a program running in a user account, even root, will only crash itself** when trying to access memory used by another program, the kernel can stomp on memory used by any process. This is one of the reasons why you can't recover from an invalid memory use in the kernel. By the time you know you've had one, the chances are high that several other things are hideously damaged and you don't know what and where they are. Continuing to run is likely to cause more problems than not, so it requires a restart.

** Most of the time. There are some ways to access RAM that is not yours, and predictably, bad things happen when that goes wrong.

Cambodian senator sanctioned by US over alleged forced labor cyber-scam camps

doublelayer Silver badge

Re: Euphemisms, Repeatedly

In some cases, you could argue that there's a slight difference, though not one that any victim would really care about. For instance, if they are theoretically paid and permitted to leave, but they have been told that they have a debt and must pay it off. This is a common tactic. In turn, this takes two forms. Sometimes, it's actually real in that they can scam enough to pay off a "debt" and then be released. More often, the operators find ways to increase the "debt" whenever that might happen, meaning the payment is basically fake, but they still go through the motions.

I'm not sure "forced labor" is much of a euphemism either. It's pretty clear about what's happening to people and how unwilling they are. When attempts have been made to make a euphemism out of slavery, the slavers tend to have to obscure their purpose more than that. For instance, the "vocational training" required of some Uyghurs in Xinjiang which is as forced labor as this is is a euphemism worthy of the name.

Of course the Internet Archive’s digital lending broke the law, appeals court says

doublelayer Silver badge

I don't think that's the part they disagreed with. I think they, and I, disagree with the part "in theory a very good idea."

The problem is that there are lots of cases where a company owning some intellectual property is the most logical and only workable solution. If a company hires me to write some code, it isn't very workable for me to own the copyright to the code I wrote for them. It doesn't work for them: how can they sell a product based on the code they paid for when it's split up among anyone who ever touched it? How can they prevent someone else from doing the same? Could their competitor find each of the coders, pay them a small amount, and get legal copies of the same thing the company already paid for? It also doesn't work for me. In almost all cases, I'm going to write my code along with some other people, not all of whom might be there anymore. Can I use and modify the work of someone who no longer works there if they still own the copyright to that part? Do we need to track them, or their next of kin, down and ask for permission for me to view it? I think that, in that example, the company owning the work they paid me to make makes a lot of sense and is a perfect parallel to similar jobs; if I make a physical object at my employer's request, my employer owns the object.

So if a company can own something when they've paid me to make it, it also makes sense that they can buy it from me. To use another coding example, let's say I wrote some code on my own. I've been selling licenses, but there is a company that would like to use it. I no longer want to support it, keep updating it, or deal with licensing discussions. I could try to give them a generous, perpetual license and then just stop, but then what if someone else wants it? They could try to track me down and get a license, but I might not be easy to find. I could sell the code outright to the company and allow them to issue licenses, and they could pay me more for the extra power than they would have for the more limited license. I've been part of this from both sides. I've had code I didn't want to sell anymore. Often, I did the easy thing and either open sourced it or just stopped acknowledging that it ever existed, essentially giving up on ever getting more money from it, but if I had a buyer, I'd happily have sold it to them. I've also tried and failed to buy a license for some code whose original author had got bored of it; I just couldn't, because the copyright holder was uncontactable and for the most part unidentifiable.

Both my examples are about code, but similar conditions apply for most types of copyrightable work.

doublelayer Silver badge

I'm tempted to say "read the thread". The original post described that the IA had lent out multiple copies:

"If you have a physical book you can only lend it out to one person at a time. From 2020, the IA were lending out multiple digital copies for each physical copy they held."

The second post suggested that this was not the case:

"They were lending one digital copy per physical copy owned and sued for that."

I replied to clarify that, yes, the court did decide that 1:1 was also not allowed, but that the suit came during, and at least partially because of, the 1:many. I.e., the first poster was correct and the second poster was not.

The IA has been doing CDL since 2011 and has been calling it CDL since 2018. I'm not sure it's ever been clear whether that's allowed or not under copyright law, but it is so close to the line that it could have been ruled legal. If I got to write copyright law, CDL, with some extra provisos around the "controlled", would be legal. Their attempt to do a 1:many arrangement was very relevant to every aspect of this. It certainly led to this lawsuit, which has now ruled against CDL as well. It probably helped make the case against CDL. It may have done significant damage to the Internet Archive. It probably has harmed the IA's image and its ability to influence copyright standards, which is quite important as the IA has been one of the organizations that has helped improve the status of the DMCA, something that is probably less likely to succeed now.

doublelayer Silver badge

"Your feelings, frankly, are irrelevant to the question of whether the publishers should agree to license IA to resume the paired-copy lending, or some other mutually-agreeable scheme, which is what IA is currently asking with the petition."

I'm not sure they are as irrelevant as you say. As an author, their opinion might not matter very much given how many authors there are, but if enough authors disapprove of whatever arrangement comes along, they can push back on it. If, for example, authors decided that publishers were wrong and CDL is great, they could refuse to work with the publishers in this case whenever possible. That seems unlikely, both that they would take that opinion and that enough of them would put that opinion above their commercial interests, but it is possible.

Almost all of this is related to feelings of some sort. A petition similarly has no legal weight, but it's designed to indicate to the publishers the feelings of those who benefited from the IA's programs. Copyright law is unlikely to change, but if it does, it will be the feelings of creators and consumers of copyrighted work that accomplish the change. A lot of those feelings will be attached to money, but people don't argue for a law by saying "I want more money" or "I want this book for free". They tell a story about what those things would do for them in order to sway feelings in their direction.

doublelayer Silver badge

If I am an artist and paint something the day before I die, my family, or whoever I intended my property to go to, gets the painting and can sell it.

If I am a carpenter and make something the day before I die, my family gets that thing and can sell it.

If I am a businessperson and complete a task and get paid a big bonus right before I die, my family gets the money and can spend it.

If I am a celebrity and am paid to endorse things, and I endorse something the day before I die, my family gets that money and can spend it.

If I'm a programmer and have been employed to write something before I die, my family gets the money that I have saved from that labor and can spend it.

But if I'm an author, programmer, composer, etc and write something the day before I die, my family should get nothing?

doublelayer Silver badge

In the US, where most of the copyright restrictions happen, the company just gets a time period. Currently, that time period is 95 years, which was repeatedly extended at Disney's request. What that time period should be is certainly something we should debate, and I think 95 years is too long. However, there's no ambiguity involved.

doublelayer Silver badge

That was the original idea. It is also the idea they were using most recently. It was not the idea they tried to use in the middle. In 2020, right before this court case was filed, they established what they called the "Temporary National Emergency Library". That was a program of letting anyone get a copy of any copyrighted work they held, no matter how many physical copies they had. They kept this up for three months, only stopping it when they were sued (this particular lawsuit).

Their reasoning was logical: in a pandemic, it is harder to go to a library with paper books and some had closed or put restrictions on who could get books. Libraries relaxed that pretty quickly around where I live when they determined that COVID was not going to survive long on a book, but for the first few weeks, that was indeed a problem. However, just because it helps someone doesn't make it automatically legal, and the Archive should have known that. I am convinced that, without this, they might not have been sued at all and they might have had a much easier time of it if they were.

doublelayer Silver badge

Re: Contradiction in terminis

There are cases about that, but they will take a while to get to court. Even this one took a couple of years to get its first ruling, and the appeals ruling came in about four years on. AI companies also have a lot more to spend on lawyers to delay the process. I think that, if they're enforcing the law correctly, the courts will have to rule against the AI companies on the same basis that they ruled against the Internet Archive, and I hope that they do.

I think the IA's case was quite a bit stronger than the AI companies', but that doesn't mean I think the Archive was destined to win this. The problem is that CDL was never clearly legal, that fair use was probably the wrong way to justify it, and that the Internet Archive wasn't making their case easier. I think a rigorous application of CDL should be legalized, but the IA did not help make that case when they used weak DRM, making it relatively easy to make more copies, and when they decided to lend unlimited copies in 2020. That latter decision was disappointing to me because, when I read that they were doing it, I was almost certain that it would harm them in a trial, and it was likely the reason why this one started in the first place. Had they stuck to CDL, publishers might have avoided a trial they might lose, because losing it would justify more unrestricted lending terms for ebooks than the systems publishers currently use with libraries; but when the Archive tried to go unlimited, they gave publishers a big incentive to try to smack them down and some valuable ammunition to do it with. I'm sad, but not surprised, to see how it has gone.

OneFileLinux: A tiny recovery distro that fits snugly in your EFI system partition

doublelayer Silver badge

Re: K.I.S.S.

A USB disk is much cheaper and larger, and you can put whatever you want on it. I don't think there are any security policies allowing booting to a floppy that don't allow booting to a USB drive, so unless you're working with something that doesn't have USB ports, just boot that way. Optical drives are another option that works just as well.

As for this approach, it's interesting, but I wonder how useful it is. I'm guessing that, in order to make it that small, they've excluded lots of things that I have on my recovery USB drives. For a while, my recovery drive was 1 GB in size (the benefit of that one was that it was too small to write other things to, so I never erased it). That got lost, so now it's a 4 GB drive. In both cases, I could easily add in whatever tool I wanted and keep it for later.

Top EU court crushes Google appeal against $2.65B Shopping antitrust ruling

doublelayer Silver badge

You want a product, and it tells you some places where you can buy it. How it picks the places to include, how it ranks them, and whether it even found what you were looking for are not specified. Other services attempted a similar thing, and Google often pushed them down the page, both by putting a shopping block on top whenever you did a search that looked like you were trying to find a way to buy something and by moving them down in the search results that appeared below that.

In my experience, many of the "where to buy this item" services weren't very useful. They often reported prices from a small number of stores, missing cheaper places to buy it, and if you actually liked the prices they found, you could easily find that it wasn't available anymore or had some extra terms tacked on that made it unusable. Then again, I only tried using them a couple times, so maybe I just got unlucky.

Feds urge 3D printing industry to end DIY machine guns

doublelayer Silver badge

The other problem, and probably the largest one, is the description of the part you need. I have not looked and don't have a gun, so I can't prove the article correct, but if it is, here's their description:

"These devices, also known as "switches" or "auto sears," can be smaller than a USB flash drive and made of just a single piece of plastic."

There's nothing that can be controlled when the pieces we're talking about are that simple. One person who knows what they're doing could print thousands, so all you need is one such person attached to people who will distribute them, and there could be only one printer involved. That printer doesn't have to be in the country, as small plastic things get mailed in all the time. Meanwhile, people who want one of these things probably only want one or two, so it's not like they'll set off many alarms if they get their own printer or borrow another one.

I don't have a solution I can sell to US politicians. I am quite confident that this one won't work.

FTC urged to stop tech makers downgrading devices after you've bought them

doublelayer Silver badge

Re: Car thing

Yes. The younger generation is brilliant: they invented the concepts of debt and renting. They didn't exist before. Older people have never had either, even when they were young. Now all the young have to do is get the patent royalties on those and their problems will be solved.

I'm not sure why so many people like making a generational thing out of this. Lots of older people buy things with subscriptions, either willingly or not. For instance, some people I know, aged 59 and 63, purchased those security cameras for which you have to pay every month to use them at all. Their children didn't make them do that; they chose to do it all on their own. Just in case they were unaware, I explained the subscription terms. They were aware and chose to accept it.

To the extent that there is any generational thing here at all, it's because it is easier to make a subscription out of devices with modern technology and the manufacturers are taking advantage of that. It's easier to have a camera that only works with the manufacturers' servers and phone app now that the users have phones it can work with and a connection to keep the cameras and the servers in contact. That would be a more convincing argument if there hadn't been many similar things before which used the same terms. Before there were cameras connected over WiFi to your phone, there were still home security systems, often paid for monthly and managed by a company. It's not just those. Televisions were rented. Cars were leased. People chose that option, or were sometimes not given another option, even before the year 2005.

Subscriptions work because people sometimes choose them. I generally try to avoid them whenever possible, but I'm also comfortable doing some of the work that a subscription replaces. If I ever decide I need a camera on my house, I'll do the network setup to get in contact with it when I'm away. I may even build the device, probably with a Raspberry Pi in it. A lot of people who want that are not willing to do that, and when given the option of a single-purchase camera which requires them to learn some network admin or one with a convenient app that just works, they choose the subscription. Some people are just not as frugal as we are or value things differently, and age is not really a factor in it. The important things are that we make clear when something is a subscription and when it's not, so that people can make an informed decision, and that we prevent companies from switching from "you bought it" to "just kidding, you didn't, and now you have to pay again to keep using it".

doublelayer Silver badge

Re: This might backfire on consumers

I'm sort of fine with this. If they need ongoing revenue to make something work, I can understand why that is. Sometimes, there really is no way to provide the advertised service without expensive stuff such as a bunch of servers to run software which the embedded chip in the product can't run locally. The important thing is that they explain that, in clear words, right on the product page not in a terms of service document. People should be able to know up front what the hardware costs and what they need to pay to keep it working, which will allow most of them to decide that the monthly payments are not something they want so they will not buy this product. The problems come when they think they've bought something and end up getting a subscription, either because the manufacturer hid that one was involved, or because the manufacturer didn't have one when they bought it but added one later.

What do Uber drivers make of Waymo? 'We are cooked'

doublelayer Silver badge

1 - Expensive and very time consuming. It's worth it, but people sometimes want to get there today, not in five years. When those years have passed and the new route exists, that will be great for the people in that area, but chances are that there will be a new area that could use a route.

2 - Expensive depending on the demand.

3 - It depends how many people want to go to the places you're talking about. If it works as it often does, where the trip on the public transit is pretty fast but then you have a long walk ahead of you, the benefits during the part where the dedicated rights of way are speeding it up can be wiped out in the walk later on. I generally don't mind that, but that's when I can spend extra time commuting and am not worried about how quickly I'll arrive.

4 - I suppose it depends how many groceries you're carrying. I could easily carry a medium amount on transit where I live, but the trains I've been on before won't be convenient if I'm trying to carry groceries for a longer period or a larger family. We're packed in too tightly, and most of the things I'd buy are easily damaged.

Raspberry Pi 4 bugs throw wrench in the works for Fedora 41

doublelayer Silver badge

Re: no Real-Time Clock

Likely the same thing that the Pi would do a lot of the time: use the wrong, cached time, which is still later, and boot successfully.

But yes, one of the tradeoffs of the strict verification is that, if there is no RTC, the power is lost so the cached time is out of date, and packages were installed during the running session, then they will not verify during boot. Those prerequisites are pretty rare with a server, which generally has an RTC that works most of the time and isn't losing power unexpectedly, given that servers often have UPSes and automatic shutdown scripts if power is going to go down. They are much less rare with a Raspberry Pi, which doesn't have an RTC at all and is most often connected to a less reliable power source with no warning when it is going to fail. With every program, you end up building in assumptions and tradeoffs.

For instance, the same RTC lack can mess with programs that start before a Pi gets a network connection. It's well-known as something you have to consider. Some things that use wall clock time may report odd behavior when it switches from saved, wrong time to real time. Results get reported as applying to huge time periods. Scheduled tasks end up running because the time they were supposed to run was in the middle of that period that never really existed.

For better or worse, a lot of programs that run as normal userspace programs have decided that they will have a clock available, and if the clock acts oddly, such as skipping entire hours, that's the user's problem. Not every program needs special cases for a misbehaving clock, especially since most of them would be unable to tell the difference between a clock acting oddly because the hardware makes it necessary and a clock acting oddly because an actual bug or hardware failure is causing it.

The user on a server will deal with this by a) disabling the verification so boot completes, b) adjusting the clock before the operating system is booted, c) replacing the faulty RTC battery, or d) preemptively doing A because they've decided they don't care. That doesn't mean it should be preemptively disabled for everyone, because some people may choose to do it for themselves. One of the typical attitudes of the Linux community is that you should be trusted to know how a change will affect you and left to get on with it. Defaults are defaults for a reason, but you're free to change them as you wish.

doublelayer Silver badge

Re: no Real-Time Clock

"Imagine that on a server that loses Internet at the wrong time - one remote reboot, it doesn't pick up the time immediately / quick enough, and the whole thing just stops? Ridiculous."

Except that server will have an RTC, which, even if it's not entirely accurate, means the time will be later than when it last booted, not earlier. The Pi usually saves the time when it shuts down, meaning that the clock should always be wrong at boot but at least shouldn't report that time went backwards; it doesn't always manage to save it, though. If I install something and then lose power, the clock will be earlier than the installation time. Verification of packages is a deliberate decision, and verifying times when doing so is a very normal part of that process. Disable it if you like, but don't pretend that it's always a bad idea.
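
The verification logic itself is nothing exotic: compare the clock against a timestamp you know is in the past. A rough sketch of the idea in C; the hard-coded timestamp stands in for whatever a real system reads from a saved-clock file or a package's signing time, so this is an illustration, not what any particular distribution does:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* Pretend this was read from a file written at shutdown, or is
           the signing time of an installed package. Hypothetical value. */
        time_t saved = 1726000000;  /* some moment in September 2024 */
        time_t now = time(NULL);

        if (now < saved) {
            /* Without an RTC, a cold boot can land here even though
               nothing is actually wrong. */
            fprintf(stderr, "clock went backwards: refusing to trust it\n");
            return 1;
        }
        printf("clock is at least as late as the saved time\n");
        return 0;
    }

On a machine with a working RTC, the backwards branch almost never triggers. On an RTC-less Pi that lost power unexpectedly, it triggers through no fault of the software.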

White House seizes 32 domains, issues criminal charges in massive election-meddling crackdown

doublelayer Silver badge

"I don't think the Russians could (or would) have been much help in the invasion - as they'd have had to commit their troops to both under allied command, and use the allies supply chain."

I don't see why that would be necessary. They could easily attack Japan from the north and west while the other allies, primarily the US, attacked from the south. Other than coordinating aerial action, they could operate somewhat independently if the goal is simply destroying everything until there is no more resistance, then repeating that further ahead. I admit that my picture of how this might have gone is probably biased by an alternative history that did consider this*, but I don't think it's that unrealistic. I agree with most of your other predictions. A more militarized cold war that likely would have turned warmer, if not a World War III less quick and fiery than the term generally suggests.

* The book in question is Joe Steele by Harry Turtledove. Joseph Stalin's parents moved to the US and he became its president, and because he's a dictator, physicists don't want to give him nuclear weapons, so there is a delay in making them. They still get made, but not until after the war has ended. Japan ends up split between US and Soviet spheres of occupation, similar to real-world Korea. It's not the most realistic of premises, but still a fun read.

doublelayer Silver badge

Yes, different, but how? It's an interesting hypothetical. I'm not sure if you're suggesting that there would be less conflict between the US and Russia if there had never been nuclear weapons, but I think it would be as bad if not worse. This happens if you assume that nuclear weapons never were invented at all, which means you start your alternate history with how the war against Japan ended quite differently, probably with a lot more Soviet involvement. It also happens if you assume that we all simply stopped being able to make them at some point, though the details differ quite a bit depending on when that is.

Admins wonder if the cloud was such a good idea after all

doublelayer Silver badge

Re: Get over yourselves please

The other problem, other than the snake oil, is that a lot of people want a single choice solution. They want the cloud to either be good or bad, so even without a marketer trying to pretend that something does what it doesn't, you have many people attempting to simplify a complicated problem. The finance person likes the opex option, so they say that cloud must be better than anything else, so let's switch all our computers to that. Admins, as many here have demonstrated, have some, possibly accurate, reasons to be skeptical of the cloud, but they decide that it means all cloud options are always bad. Sometimes, their reasons are perfectly accurate, but sometimes they're also faulty. For example, one admin I knew disapproved of cloud for, in my view, the same reason that he disapproved of Linux: he had eventually learned how to administer a Windows server without setting it on fire and didn't want to learn anything else ever again.

In many cases, you actually have to calculate it out. What do we want to do? How much does it cost to do it with cloud supplier number 1? How much with supplier 2? How much right here in this building? How much in a colo nearby? What extra things do we gain with each option? The cloud probably provides us faster networking without having to do extra work. The local server room likely decreases our costs to add more systems there. What risks are there with each option? And now that we've done all that boring stuff, we can actually eliminate some options and make a decision. A lot of people want to skip this and use worse reasons, or no reasons at all, to pick one.

doublelayer Silver badge

Re: Cloud is a financial model not a technology

That is a concern, which is why a good admin will have backups so the servers can be restored and design to scale down to fewer servers if that's needed. In many cases, if you don't have the money to run your servers and you need them to have your company work, you have lots of other problems, so owning the servers won't save you. For example, if you can't pay the monthly cloud bill, are you still able to pay any bills, including payroll, the ISP, the phones? Having your own servers behind a disconnected network link isn't much better.

doublelayer Silver badge

Re: "The real issue as ever is vendor lock in."

Sovereign data doesn't have to stop you. If your country is large enough, you can probably store your data in different parts. For the big cloud providers in the UK, that's usually London and Cardiff. It also tends to apply to only certain industries or types of data, and the rest can store in Dublin or Amsterdam as they please.

"Pre-cloud, storage was managed to save cost. You stored the bare minimum and archived the rest. Have people become lazy, just storing everything because they can, and because they are not footing the bill?"

Yes, they are lazy, and they are footing the bill. Often more with cloud storage than without it when you grow large enough. That wouldn't change if running on prem. Managing data to figure out what you need now, what you'll need soon, what you'll need in a basement just in case, and what you can throw out is hard and expensive. A while ago, it was expensive in employees, but the employees were somewhat cheap and the space to store the paper was the limiting factor, so lots of people were hired to move it. Now, employees are a lot more expensive, the amount of data they have to consider has grown, and disks are much cheaper. So they just keep storing it.

By the way, the "1Tb drives cost less than £50" factor is related to why a lot of people use cloud storage after all. The drives are quite cheap, so if you want to archive some data offline, they can make a part of the solution. Running those drives costs more, and I'm not talking about electricity. I'm talking about drive bays to hold all of those as hot storage and the servers those bays are connected to just to store and retrieve data on them. If you have lots of data that you want to be available whenever someone who may not be in the same building wants, you'll be spending a lot more per drive to do that. You'll also need lots more storage for redundancy because those cheap hard drives break. There is an area somewhere between "That server holds big drives in RAID where we store all our data" and "We store so many petabytes that we have a team who design and build custom servers for massive storage needs" where finding someone else who does the latter and paying them to store the data starts to make more sense.

doublelayer Silver badge

Re: Cloud is a financial model not a technology

The comparison to property clarifies some reasons why people might prefer a more expensive cloud. You rented at first because you couldn't buy. There are people who want to start a business that will use some equipment but can't buy it all up front: either the equipment itself or, more likely, all the facilities needed to operate it. Fortunately, servers are very easily rented. You also pointed out the major difference between the two, which is that property is likely to gain in value most of the time, whereas a server almost certainly will not. Buying equipment early will probably not help you if you're counting every penny, and if you're not well resourced, you may have to do exactly that.

That doesn't mean cloud is always better, just that it can be. There are things you can do relatively cheaply with cloud that are expensive to do yourself. One easily understood example is a multi-continent system when you don't have offices all over. Employing a local hardware team on each continent is pretty expensive. Not hiring them and just putting your normal hardware team on the next flight when they're needed is less expensive, until it isn't. Setting the region in the cloud console to a different country is easy. So if you need that, cloud can be a cheaper way to get it. As usual, someone actually has to figure out what you need and how much it costs to get it from the various places, which is something a lot of people want to skip for some reason.
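
As a concrete illustration of how little effort the region choice costs (AWS and boto3 used purely as one example, since no particular provider is implied above), the same code serves two continents by changing one string:

    # Same code, different continent: only the region name changes.
    import boto3

    for region in ("eu-west-2", "ap-southeast-2"):  # London, Sydney
        ec2 = boto3.client("ec2", region_name=region)
        reservations = ec2.describe_instances()["Reservations"]
        count = sum(len(r["Instances"]) for r in reservations)
        print(f"{region}: {count} instances")

Compare that with what the non-cloud version of "our servers now also run in Australia" involves, and the value becomes much easier to price.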

doublelayer Silver badge

Re: Rainy clouds?

Probably some of both, and to the list of rising prices we can add the price of workers. Somebody needs to replace broken things and write the software to administer the hardware, whether in cloud or not. When running on prem, the price of the latter is often outsourced to the company the management software is bought from, but they're probably increasing prices too. It's likely that the price of running a server anywhere has increased, and that when it increased for cloud providers, those providers tacked a bit more on when they raised their prices to account for it. I don't like it, but it is what I can expect from any company; I'm sure those whose server rooms now cost more to run have passed that on to me and added a bit more as well. I can't stop it, but at least I should not be surprised when it happens.

doublelayer Silver badge

Re: Cloud is a financial model not a technology

Because they're money now and money continuously, and those are different. Not necessarily as different as accountants like to treat them, but different nonetheless. Have you ever rented somewhere to live? Why didn't you just buy a house? It's just paying for somewhere to live, after all, so surely it's the same. There are sometimes benefits to renting, and sometimes substantial downsides, because you've paid a lot more than you needed to and have nothing at the end. But you can't treat the two as the same, because there are times when the option that seems more expensive is the only one that gets you what you need.

What is this computing industry anyway? The dawning era of 32-bit micros

doublelayer Silver badge

Re: ARMed and Ubiquitous

I still have to give them credit for doing a pretty good job of decreasing their power consumption figures significantly. Not to phone levels, but there's a reason why the low-end X86 boxes nearly all use Intel parts rather than AMD. That market segment may not be the most alluring, but they did find demand for their low-power chips, and those chips can be surprisingly good.

doublelayer Silver badge

Re: ARMed and Ubiquitous

I wonder what would have happened in practice if it had been Apple's ISA of choice at that time, because back then they didn't have billions to throw at in-house chip designers with a decade of experience on that ISA and their home-built versions of it. They would have had much less money, and they'd have had to give it to someone else to do the chip design work or try to buy ARM. How long would it have taken before they said something along the lines of "we could extract a lot more performance per watt from this chip if we had the time and money, but we need raw performance now, so let's forget about the power problems and just put more fans in this Performa"?

If they had, it's possible that ARM might have focused on the desktop market and become a competitor to Intel and AMD in the late 1990s, but that would mean it wouldn't have been such a big player in embedded. I wonder if that might have been worse in the long term for ARM, wedged between X86/X64 on the high end and something like a better SuperH in embedded. Almost certainly less software would have been written for it, fewer compilers would have targeted it, and there would have been less industry experience running it if it were mostly an Apple chip rather than the thing you reach for whenever you want to build something handheld.

doublelayer Silver badge

Re: ARMed and Ubiquitous

I wonder how much of this is speculation based on what we would want rather than what is realistic. Apple could have used ARM chips in Macs at the time, and yes, that would have been decades before they actually did. At the beginning, those would have been fine. However, would that necessarily have kept ARM strong in comparison, or would ARM have become the new PowerPC: good enough, but eventually abandoned for something faster?

It is relevant that the current ARM chips came from the enhancement of lower-performance, lower-power mobile chips. In fact, might a 1990s-era push for faster performance on desktops have harmed ARM's power usage, the thing that got it into every mobile device in the 2000s and 2010s? A desktop focus certainly did that for X86, as Intel discovered when nobody wanted to put an Atom in a tablet.

Telegram apologizes to South Korea and takes down smutty deepfakes

doublelayer Silver badge

"Why are the makers of the Deep Fake software not being held responsable ? Surely they should have been the 1st bunch people to have action taken against."

Because such software generally exists in three forms:

1. Software specifically targeted at one type of deepfake, often sexual in nature. The people who make that generally don't put their names on it. They make it, they release it, and copies end up passed around the internet. Maybe they've got malware in them too, who knows. Tracking down the original author(s), who might be in any combination of countries, is not easy.

2. Software which can generate lots of kinds of images and whose protections against illegal ones are either absent or weak. These are owned by large companies that already ignore many laws, such as the one that says you're supposed to have licenses for training material, so even if you identify them, they're not about to let you enforce judgements against them. They'll call up a lawyer who will say that they're not responsible for what someone did with their software, that their software was not intended for any illegal purpose, and that you should go find the user and take it up with them. If charges continue to exist, the lawyer files documents and it takes months or years to complete, which is why almost none of the copyright cases have even gotten to trial yet.

3. Open source software which isn't intended to do this but which gets modified. Either it gets modified thoroughly by some person, in which case go back to option 1, except the author(s) did less work to get their program, or it proves relatively easy to break the protections and get illegal images. If it's the latter, a government could go after the people who made the open source model, but it would end up angering people who see someone being punished for what someone else did, when the author didn't want it to happen and has no reasonable ability to stop it.

If you have a good way of escaping this triad, let us know.

'Error' causes Alexa to endorse Kamala Harris, refuse to discuss Trump

doublelayer Silver badge

Re: Conspiracy or ineptitude.

If I had to guess, I'd assume that the "why to vote for Trump" answer touched topics listed in a prompt as potentially divisive things you're not supposed to talk about, which triggered the refusal, whereas the training data didn't have as much on Harris because she wasn't the candidate when the model was trained, so her question got past the filter. I can't know, though, because lots of explanations are conceivable, and it is almost certainly LLM-based, where any protections you try to put in fall apart half the time and any attempt to make them consistently hold a certain opinion is similarly flaky.
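
As a toy sketch of the failure mode I'm guessing at (the blocklist here is entirely made up, and real assistants use more elaborate, but often similarly brittle, prompt rules):

    # Hypothetical: a divisive-topics blocklist written before the
    # candidate changed, so one name was simply never added.
    BLOCKED_TOPICS = {"trump", "border wall", "january 6"}

    def answer(question: str) -> str:
        if any(topic in question.lower() for topic in BLOCKED_TOPICS):
            return "I can't discuss political topics."
        return "(the model generates an answer here)"

    print(answer("Why should I vote for Trump?"))   # refused
    print(answer("Why should I vote for Harris?"))  # slips past the filter

Nothing in that has to be malicious; it only has to be untested.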

An oversight or a deliberate setting would both make a lot of sense to me, but I still think the most plausible explanation is a combination of the two. I doubt they deliberately set the prompt to refuse to speak about Trump but support someone else. The deliberate settings are probably about more specific issues likely to come up when answering that question, because those are what come to mind when prompt writers are asked to list political topics that shouldn't generate bad press. The oversight is not testing what actually happens once that filter is added, and just assuming that, if you've put something in that rejects political questions, it will reliably reject all political questions. We will probably never know for certain. There's a reasonable chance they won't ever know either, even if they try to investigate.