* Posts by doublelayer

10360 publicly visible posts • joined 22 Feb 2018

No more waiting for lines: New Windows keyboard shortcuts output em and en dashes with ease

doublelayer Silver badge

Re: ?????

The em-dash (—) is functional, used to separate clauses from the rest of a sentence. The en dash (–) is mostly useless: wherever it is most typically used, such as ranges, a hyphen would do exactly the same thing. Nobody is going to be confused by 1914-1918, or find that 1914–1918 makes it so much better. Thus, those are flavor, stylistic choices that you can make, but it doesn't matter whether you do.

doublelayer Silver badge

Re: Great news

More conveniently typing em-dashes is "fully using" your OS now? We've managed quite a long time without that. Even if that is what you're saying, you realize that it's not going to be a bolt-on tool, because it will be integrated and turned on by default in non-beta releases soon. Not to mention that you could have used various methods to make that easier to type if you had cared before, such as creating a custom keyboard layout with it or making these some of the defaults in the Windows symbols/emoji selection thing that has been built in for some time. Your complaint makes no sense.

Reg hack attends job interview hosted by AI avatar, struggles to exit uncanny valley

doublelayer Silver badge

Re: If an employer asks you do to this ...

Let's assume I would actually get the benefit of the saved time. Let's say that it even gets tacked onto my salary. It won't, but let's assume it. My answer is still no. There are two reasons:

1. It doesn't work. Summarizing my answers to prewritten questions does not allow the employer to understand my experience or skills. They can't ask clarifying questions, notice a promising sign and go into detail on it, spot a worrying one and confirm that it means they shouldn't hire me, and so on. A qualified human interviewer can do all that. Whether I get the job is now more random.

2. And that means that my colleagues are also going to be more random. If they save a little money on the hiring process and end up getting worse candidates, it's not going to be a saving for long.

OpenAI eats jobs, then offers to help you find a new one at Walmart

doublelayer Silver badge

Re: At OpenAI, we can't eliminate that disruption.

I didn't mean that they should stop. I'd also take that, but there are two reasons I suggested that they keep doing it.

The first is that we have already had enough promises on this front that, if they just stopped talking about it, we'd be spared the annoying statements but not the continued push to replace people with AI. They've already got this thing up to a sufficient speed that, if they stopped pushing, momentum would carry it forward anyway. If, on the other hand, they keep pushing with all their strength, it will hopefully reach the barrier at the end of the road sooner and crash more spectacularly, which should mean fewer people willing to try the same thing again.

The other reason is that actually stopping this would require them not only to stop saying these stupid things but to withdraw their services and shut down their companies. They can't and definitely won't do that, so arguing for it won't help us. Of the actions they could take, I think continued hype is more likely to end this than honesty about having lied for so long about the capabilities and potential of their technology, because the people who like and promote these things have already had plenty of opportunity to recognize how unreliable they are.

doublelayer Silver badge

Re: At OpenAI, we can't eliminate that disruption.

Admittedly, the best thing they could do to reduce the disruption is to keep pushing LLMs as the replacement for all workers. A few companies will fire their workers and try to use LLMs, bad things will happen to those companies when important things get hallucinated, and eventually others will take notice and stop. I don't think they have the same idea, but they do appear to be following the plan anyway.

I would hope that people would find this out before trying it at full scale, and a few places appear to, but unfortunately not enough for a general understanding that LLMs will always be unreliable so should only be used when you have a plan to handle unreliable outputs.

FCC plans to kill Wi-Fi on school buses, hotspots for library patrons

doublelayer Silver badge

Re: Control flow of ideas

To clarify this a little, what they've cut funding for is not internet connections inside the library, but library hotspot loans. Some libraries had programs where they would lend portable cellular hotspot devices to people so they could access internet connections from places other than the library building. I'm not sure what you had to do to qualify for that loan, and it's probably specific to each library system. Schools could also do that, probably for students who did not have home connections and schools that wanted to have the remote option or internet-based homework. That's what they can no longer fund through that program, though they are not forbidden from keeping the service around if they can fund it from other sources.

doublelayer Silver badge

Re: Control flow of ideas

Not really, because all the things that would affect the mirror would still affect the switch-off. There would still be a multiday "warning period" while TTLs expired where effects would be blunted while people implemented backup plans.

But it also wouldn't be very easy. As root servers go, the US government runs several of them: three of the thirteen are operated by government sources, though not the same government source. To switch the .uk zone off, you would need to coordinate the action among NASA, DISA, and the US Army Research Lab so they switch theirs off simultaneously. That's already hard, but they all come under federal jurisdiction, so you could manage it. The rest are going to be harder. They're run by private entities who aren't going to jump to executing commands just because they were told to, with no legal justification.

Verisign runs two of them, which makes me wonder why they're considered logically separate, since all the other servers consist of redundant installations and infrastructure, but for whatever reason, they're there. If Verisign messed with them, that would permanently destroy their trustworthiness in internet infrastructure, and trusted internet infrastructure is the only thing they do that generates money. They would do a lot to avoid taking this action because of the mortal harm it would do them. Their resistance would either prevent this from happening or give people a lot of warning so they could switch to non-US servers. I also think it would be very hard to convince ISC to go along, since it's a small group of very motivated people who know exactly what would happen.

Running through this hypothetical was fun, but also, I don't think it's a risk we're going to encounter. Most people wouldn't understand that this is an option. Most who did, especially anyone with the knowledge to come anywhere close to accomplishing it, would understand it's a weak option. I'm not sure how we would get to a situation where anyone decided they wanted to try it badly enough to have any effect.

doublelayer Silver badge

Re: Control flow of ideas

RIPE already operates one from the Netherlands, and there are Swedish and Japanese operators of root servers. Of course, the US-administered ones are global, including many non-US facilities.

Also, operating root servers does not allow you to censor things. Let's say that I have taken over all the root servers and now run the entire system single-handed, and I want to censor something hosted in the UK. My options are to drop all addresses in the .uk namespace or to leave them alone; I do not get to pick and choose. The UK name servers are not operated by me, nor can I remove some of their responses from your view, since I merely tell you where to find them and you talk to them independently.

Okay, so to deal with that, I will set up a mirror of those servers and direct people to it instead of the real ones. For one thing, the TTL on root referrals is long, from days to a week, so when I make my change, you won't immediately switch over: you, or the DNS servers in your path, will have no reason to request information from the root until the old information expires. You know who will notice, though? Nominet, who operates the DNS zone for all .uk addresses, who will see the flood from my mirror, realize it isn't normal, then call in people who will recognize this for what it is and raise the alarm. UK-based ISPs would then change their DNS settings to avoid my corrupt roots for that zone, and in the meantime, Nominet might well block me to reduce the traffic and prevent me from trying to poison things. DNS is not that weak.
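To make the delegation point concrete, here is a toy sketch of how the root only refers you onward; every name, address, and server label below is an invented placeholder, not real DNS data, and real resolution is far more elaborate:

```python
# Toy model of DNS delegation. All names and addresses here are
# illustrative assumptions, not real DNS records.

ROOT_ZONE = {
    # The root only knows which servers are authoritative for each TLD.
    "uk.": ["nominet-ns1.example.", "nominet-ns2.example."],
}

UK_ZONE = {
    # Records for .uk names live on the TLD operator's servers,
    # which the root operator does not control.
    "example-site.co.uk.": "192.0.2.10",
}

def resolve(name: str) -> str:
    """Follow the delegation chain: the root refers us to the TLD's
    servers, and the answer itself comes from the TLD zone."""
    tld = name.rstrip(".").split(".")[-1] + "."
    referral = ROOT_ZONE[tld]   # the root's only contribution
    assert referral             # we got pointed somewhere
    return UK_ZONE[name]        # record served by the TLD operator

print(resolve("example-site.co.uk."))  # 192.0.2.10

# A hostile root's only lever is the referral itself:
ROOT_ZONE.pop("uk.")
# Now every .uk lookup fails -- all or nothing, no per-site censorship.
```

And even that all-or-nothing switch-off is blunted in practice, because resolvers keep cached referrals until their TTLs expire.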

Apple iOS 26 set to dump 75M iPhones on the e-waste pile

doublelayer Silver badge

Re: Hmm

Where did that come from? I couldn't find that statement, but the closest information I could find was a report that 8% of purchases last year were replacing a phone less than a year old. Other information also suggests that 36% of them were replacing a device 2 years or less after purchase. So if Apple actually said that, people don't seem to think they're right.

I do have reasons to question both sources of numbers. The only way to have accurate numbers is to track the iCloud registrations of every device, and I don't think Apple is likely to be doing that for a PR exercise. The other ways to estimate it that I can think of are looking at trade-ins, which would likely underestimate the number because people will probably get better value by selling on their year-old devices rather than trading them in, or just surveying people, which brings all the typical sampling risks.

doublelayer Silver badge

Re: Refurbished?

Oh, that sounds great. What phone would that be? Does it lack the major hardware problems the only two open phones generally obtainable* both have? Because I've been hoping for the mobile non-Android Linux phone for a while. Surely, if you say it's out there, it actually is.

* By which I mean the PinePhone and ... actually pretty much just that. The Pro model of that has been canceled and the Librem 5 is still on nearly infinite back order.

doublelayer Silver badge

Hence the "most". From the developers' perspective, whether people choose to buy another phone is the user's problem. The developer has to ask whether it's worth it to jump through the hoops to have an older version supported when Apple makes it tricky with SDK age requirements when a lot of their users could update without replacing their hardware. A lot of them decide that it is not. If they decide that, there's little the owner of an old device can do to change that.

doublelayer Silver badge

It depends what that software does. If it's a local program that does a task, it will probably keep working. If it interacts with other services, they eventually stop allowing old versions to do that because they've changed the client and most users can update to the new one for free. This is especially true if they changed the client to fix a security problem, because then allowing the insecure one to continue operating would mean they could be blamed for people who didn't update getting attacked. Not all apps will do that, but it mostly depends on what kind of apps people spend most of their time using.

doublelayer Silver badge

Re: "75 million iPhones"

They do have a recycling program, and they also have a trade-in program. An iPhone XR is currently worth £90 (undamaged), though if you wait for the next model, they'll almost certainly drop that a little.

Trump tells Big Tech: Your power woes? Totally fixable

doublelayer Silver badge

You can't bank on a eureka moment on fusion next year. People have been hoping for one since 1950. If we get it, that would be great, and if we want to spend some money in the hope of getting it sooner, that is justifiable. But you can't plan your energy policy around it, any more than I can plan our server needs by assuming that someone will invent the pocket supercomputer: as much power as all of AWS in one box, retailing for £250 and consuming 8 watts, arriving in a year or two, so all we need to do is wait and then buy two of them, one for everything and one for redundancy. If someone does, we'll buy one and build around it then, and if there's reason to think someone could build it given some research funding, that's a gamble worth taking, but we have to plan a realistic option in case it doesn't come to pass.

And the Greenpeace article you mentioned is about fission, not fusion, so it continues not to make your case that anyone is trying to ban fusion. As it happens, I agree with you that their complaints about fission are overblown or mistaken, but I already said that in my last post, so you knew it, and it doesn't answer my question about fusion.

doublelayer Silver badge

You used a lot of words there to not answer their question, specifically whether your previous statements about fusion were about the kind of fusion we don't have. Because if it does take seventy years to get it, then your statements about its benefits aren't relevant. It will be great if we get it, but there's a possibility that no matter how hard we try, it turns out to be impractical forever, and that even if it isn't, while we don't know whether it works we need to do something else to cover the intervening gap. You appeared to suggest fusion as a viable option, and now you're giving us a "maybe in seventy years". Those aren't compatible.

Also, I'm not seeing people trying to ban fusion. Are you? I've seen people arguing to ban combustion-based electric generation and fission. I do not agree with either of them, though there are some regulations I agree with. But if you insist on claiming that people are trying to ban fusion, why do you think that doesn't make you sound clueless?

DDoS is the neglected cybercrime that's getting bigger. Let's kill it off

doublelayer Silver badge

Re: Guilt by tenuous unwarranted association

I think their email starts from their ISP-provided address, but then travels through a mailserver hosted at Linode, although operated by someone else. I admit I might be wrong, but that's my understanding. They are indicating, to anything parsing the headers, that the message was sent from another source, but since the MX records still resolve to a Linode address, overeager filters are dropping it.

doublelayer Silver badge

Re: I don't much like this idea at all...

From the average consumer's perspective, how are they going to fix this problem? How are they even going to identify the problem?

A lot of botnet nodes are computers or phones running malware. So, first things first, run malware scans on those. To do that, open your antimalware program and, since the average consumer doesn't run this frequently, update the definitions. Except you can't do that, because the internet has been shut off. And it might not work anyway. DDoS attacks are among the most basic ones, which does mean that the kind of people who want to run them often go for a preexisting program. It also means that, if I want to do that and go undetected by standard antivirus software, I can write a new one from scratch in about ten minutes, and so can everybody else.

But we've been talking about routers and IoT things, which the average homeowner can't do much about. How are they going to identify which of the things they own is doing this, especially if the software has received a relatively basic patch which can be summarized as "if the internet went offline, don't DDoS or even contact the servers for several hours, so it's a pain for them to find us"?

These questions are important. If it's us, with an unworkable solution that will cause big problems for people at a whim, versus a problem most people never see*, who do you think is going to win? There are two approaches to making this happen globally. One is to get ISPs to do it, which, to spell it out, means asking ISPs to anger their paying users to help people who don't pay them. Good luck. The other is to mandate it by legislation, and to get that passed, we have to be a lot more convincing.

* One problem with making this convincing is that DDoS attacks are not things the general public ever feels. The last attack I remember even getting noticed by the public was the one on Dyn, which was nine years ago, not deemed remarkable then, and quickly forgotten. Ransomware, on the other hand, has caused hundreds of attacks whose consequences people actually experienced, and little has been done about that. It's hard to convince people a problem is a big one against that kind of experience.

doublelayer Silver badge

Re: I don't much like this idea at all...

Not hard at all. Here's the workflow:

1. I create a website that is intended to take this action. If I want to be sneaky about it, I'll use someone else's.

2. I create a simple page which includes references to hundreds of files on that site. If someone loads that page in their browser, their browser will request all those resources.

3. If it was my site, I report the flood of several hundred requests all at once from this IP address. If it was someone else's, I may have to get some more requests to make them take notice unless they've done as suggested and automated their reporting.

That kind of link wouldn't actually make a DDoS unless I could get a few thousand people to click it, but it does a very good job of making the victim look like a participant. If there's another step where a real DDoS must exist for this to count, then I could use actual nodes to create an attack, then send my target the link. The reporting system will conclude that they're just one of the nodes in that botnet.
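The page in step 2 really is this trivial to build. A minimal sketch, in which the target hostname, resource paths, and count are all invented for illustration:

```python
# Sketch of the framing page described above: one HTML document that
# references hundreds of resources on the target site, so that merely
# loading it makes the visitor's browser fire a burst of requests from
# their IP address. Hostname and paths are hypothetical placeholders.

TARGET = "https://target.example"
N_RESOURCES = 300

def build_page(n: int = N_RESOURCES) -> str:
    """Return an HTML page whose loading triggers n requests at TARGET."""
    tags = "\n".join(
        f'<img src="{TARGET}/assets/{i}.png" width="1" height="1" alt="">'
        for i in range(n)
    )
    return f"<!DOCTYPE html>\n<html><body>\n{tags}\n</body></html>"

page = build_page()
```

One click on a link to that page and the target sees several hundred near-simultaneous requests from the visitor's address, which is exactly the signature an automated reporting scheme would flag.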

AI code assistants make developers more efficient at creating security problems

doublelayer Silver badge

Re: Hold on there

Except even then, you can run a syntax check very quickly through the files. It's true that Javascript, and to a lesser extent Python, won't catch anywhere near as much as C, C++, C#, or Java compilation; neither of the former two will catch a typo in a variable name before runtime. Still, in Python's case, every line is checked for syntax errors before the first one gets executed, and Javascript can easily have the same done to it. We also have unit testing, which is intended to find exactly that kind of simple bug, among other things it is also intended to do but often doesn't do as well.
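A minimal illustration of that "checked before the first line executes" behaviour in Python (the snippet's contents are invented for the demonstration):

```python
# Python parses an entire source before running any of it, so a syntax
# error on a later line stops the first line from ever executing.

source = """ran = True
def broken(:  # malformed on purpose
    pass
"""

namespace = {}
error_line = None
try:
    exec(source, namespace)   # compiles the whole source first
except SyntaxError as err:
    error_line = err.lineno   # reported as line 2

# Line 1 never ran: the parse failed before any execution began,
# so "ran" was never defined in the namespace.
```

The same parse-only check is what tools like `python -m py_compile` and the various Javascript linters give you without running anything.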

I don't know what they're referring to with a decrease in syntax errors. The only idea I have is that I've seen studies that take badly written code, paste it into an LLM, and tell it to fix the code; syntax errors generally get fixed there. This isn't very relevant to me, because in a file I've just written from scratch without testing anything, there might be at most one or two typos, which I might notice myself and which, if I don't, the compiler will point out to me, whereas the last example file I was shown that an LLM had produced had about sixty of them.

Atlassian acquisition drives dream of AI-powered ChromeOS challenger

doublelayer Silver badge

Maybe Liam will be happy

Given the many calls from our own Liam Proven for another browser-only OS, maybe he would like this one. He didn't mention AI, but this would at least remove the Google login requirement, so maybe that's enough. Unfortunately, for pretty much everybody else I know, another Chromium-based, browser-only OS that gives them no local options isn't something I've heard anyone asking for.

If they're going to do it, I think they must focus on the business case with lots of management tools. Nobody else is going to adopt this. They need to aim this at the IT admins who want something easily locked down. Of course, the IT support people who spend a lot of time in Jira* aren't going to like being stuck with them, so that's something they'll need to overcome, but I still think that is their best chance.

* There are many types of people who can probably do many of their tasks entirely from a browser and use something Atlassian writes to organize it. There will definitely be a subset of people who can't work with that and another one who can if they change everything about their workflow but it drives them all insane. I specifically mention IT support because, although some of them probably could use something like this, they have the closest proximity to the people who can find a reason why it won't be adopted and most of those I know aren't fond of Chrome OS or something with the same restrictions.

GNOME Foundation boss exits after just four months

doublelayer Silver badge

Re: Steven is not the right fit ..

It's a commonly used phrase for any reason why someone isn't doing something. I've used it, for example when turning down a job someone suggested. I think they would interpret "I'm not the right fit for this" a little more kindly than "That sounds terrible. Good luck finding someone willing to endure that". I don't think it's related to diversity or related characteristics.

doublelayer Silver badge

Re: "Mutual"

That's one interpretation, but the other common one is that one side wanted to get rid of the other side and the other side didn't want to explain it. That includes both "we wanted rid of them and they didn't want us to talk badly of them" and "they wanted out and we didn't want them telling others how crap they think we were". It doesn't actually have to be mutual as long as the two parties decided that was an easier story than going into detail.

Programmers: you have to watch your weight, too

doublelayer Silver badge

"It helps marginally in a few very specific edge cases."

Specifically, general desktop use. It doesn't help if you've got a lot of CPU-intensive stuff, because it doesn't give you any more cores. But when you have a lot of programs making intermittent use of things, it does help with that. Or in other words, it's a specific edge case which is what a majority of computers experience most of the time, and before it was economical to do what both AMD and Intel are doing and just shove in lots of cores, it was better than not having it.

Of course, the original statement wasn't about hyperthreading. It was about cores, which absolutely do improve the performance available for most use cases, with the primary exception being things that could be but aren't parallelized (there are some nonparallelizable tasks out there, but they're not the most common). Interpreted literally, the original statement suggested that cores don't help at all, which is not what you were trying to say. That more cores have diminishing returns is a very different statement, and since pretty much anything has diminishing returns at some point, I think we knew that.
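That diminishing return has a standard back-of-envelope form in Amdahl's law, where the serial fraction of a task caps the speedup regardless of core count (the parallel fractions below are illustrative assumptions, not measurements):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: the serial part of a task caps the speedup,
    no matter how many cores are added."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A task that is 90% parallelizable:
print(round(amdahl_speedup(0.9, 4), 2))    # ~3.08x on 4 cores
print(round(amdahl_speedup(0.9, 64), 2))   # ~8.77x on 64 -- nowhere near 64x

# A fully serial task gets nothing from extra cores:
print(round(amdahl_speedup(0.0, 64), 2))   # 1.0
```

Which matches the point above: cores help most workloads, and the exceptions are the tasks whose serial fraction dominates.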

doublelayer Silver badge

Re: Misleading summary of security implications

If I misunderstood, I'm afraid I'm going to have to blame you for writing badly. If I write "If people had listened to me, we could have been saved a lot of trouble. The thing exploded today.", I'm implying that I predicted the explosion and had a way to prevent it. People did not predict the XZ backdoor because it had nothing to do with the problems they're bringing up, even now that you've added extra, incorrect justification.

Only one problem led to accepting the help of an attacker: the lack of other contributors. That lack was not related to the quality issues brought up. Any program, whether well or badly designed, can have that happen; many do. Nor were the design deficiencies responsible for the lack of contributors. If people were refusing to help with the coding because of those, they would also refuse to use it, and yet the reason the attack was dangerous was that people were using it. Compression software is often maintained by few people, whether it's Igor Pavlov for LZMA and 7-Zip or Diaz Diaz himself for lzip. People who use compression often don't contribute to the libraries that implement it, whether or not there are format problems within. Any of those projects could have accepted the help of a seemingly motivated assistant who turned out to want to insert malicious code. You've linked these things, initially without justification and now with incorrect justification. They are not linked.

Format problems have no correlation with maintenance capability; someone can have a bad design they think is adequate and successfully maintain it, or someone can have an excellent design and, for other reasons, not have the time or interest necessary to maintain an open source package. We've seen plenty of that before. I know bad open source software that was maintained for decades because the people who created the design were tenacious about not letting Linux updates break their thing, but less tenacious about improving their design, and I've seen software I liked and thought was well-structured die because its single maintainer got bored, got a more intense job, developed medical problems, and so on. In a couple of cases, I tried to assist or take over to preserve those things, which is only good if you trust that I won't turn out to be a malicious attacker.

And if you're going to claim that I once again misunderstood by interpreting two sentences next to one another in the same paragraph as having a relationship, I have my own XKCD for this: XKCD 1984. From your response telling me I misunderstood, you're saying exactly what I think you were saying, that the design problems with XZ were predictive of the attack (they're not) and that people should have seen the former and prevented the latter by eschewing XZ (which wouldn't do anything about the problem). I still think that's wrong.

doublelayer Silver badge

The removal of IR35 was posited right up front; those goalposts were there the first time I asked for your opinion. There are places where IR35 does not apply: they are called the world outside the UK. I have a feeling, given your posts, that if you moved somewhere else, you would find some other reason why people are unfairly restricted there, so I'm not sure how reliable I'd consider that view. If positing the removal of IR35 is too hard, although I know you'd be calling loudly for it if it were ever proposed, then imagine you're living in a different country where you find the market more fair.

The dichotomy you posit between workers and employers has led you to make a lot of moral statements, not just legal objections to one piece of UK tax law. I was interested to hear your moral philosophy. We all know that you're very free with it on pretty much all occasions, but evidently not when asked for it, only when you can make your pronouncements vague and generic. Although you've written a lot of text here and elsewhere, I still don't have that good an understanding of that part. I know you don't like IR35. I know you see massive differences between workers and employers. But I don't know what you think is fair, and since that's what you build all your specific points on, it's a question I thought I'd give you an opportunity to explain.

doublelayer Silver badge

"First, IR35 hasn’t been abolished."

And the UK has not named me Emperor Without Review. It was a hypothetical. I think you know that.

"If I hire someone, I obviously have to pay enough for them to agree. That’s not the same as paying what they’re worth."

Hence my question. Imagine that you are going to hire someone, and the legal obstacles you name have been removed. You have the option of paying the general level that others are paying for the same skills and getting someone; that's the minimum, because without that, nobody agrees to work for you. You could, though, choose to pay more on a purely voluntary basis. One complication is that in a job market, paying more can often get you a candidate with different skills, but for the question, imagine that those are not skills you need, so you're getting the same level of useful skill whether you pay market rate or higher. Would you pay more, knowing that you did not have to? How much of the value you get from having their work available to your business would you give to your employees?

This is a relatively common philosophical question with plenty of useful standard subparts. For example, everything is worth a different amount depending on who is using it. I would value a laptop more than the average person because the kinds of things I want or need to do require one. However, I pay the same amount as they do to buy one, assuming that we end up with the same level of hardware. Should I be paying more because I need it more, or should I be paying a standard amount that reflects the costs and profit margin of its manufacturer? When it shifts to workers, should I pay more for the same activities if I have a more lucrative operation? If a business is losing money, should it pay less? You've raised this issue, so I'd be interested to hear what you think the moral requirements are and which things are not required but you would do anyway.

doublelayer Silver badge

Re: Antoine de Saint-Exupéry

But that's begging the question in the original sense. That they don't want you to use JS is clear. The question is whether they have a good enough reason, and the comparison between using JSON data or not is relevant to whether any and all Javascript usage is that bad. If we decide that it isn't, then their advice becomes useless, because the statement doesn't tell us why that's wrong or what the alternative should be, which means it does a poor job of convincing us that it's a problem unless we already thought so.

I speak as someone who doesn't really like Javascript and has built several web applications without it. And some of the time, doing without it is annoying for both the programmer and the user. I have a site where you can move objects around, and I built it without Javascript. This means that any time someone moves something, it requires a full page refresh, database queries, and a run of the server-side templating system to produce the pure HTML page that reflects the change. That's not efficient for anyone involved. Why would Javascript that does that automatically, and would allow a smaller server to serve more users, be a bad thing? Because that's what a lot of people will be asking, and they're not going to accept "Don't use the thing I don't like and that's all I'm saying" as sufficient logic.

doublelayer Silver badge

You didn't answer the question. IR35 has been abolished (I run the UK now). You have a business. You hired a person to write code for that business. How much are you going to pay them? The amount necessary to get someone to voluntarily work for you and deliver the quality of work you desire, or a different number? If you selected different, how different?

doublelayer Silver badge

Misleading summary of security implications

This article refers to another one that makes major claims against the XZ format, and their claims are either correct or plausible (verifying some of them requires a lot of low-level knowledge about the XZ format and how it's changed in the past). But Liam has screwed up badly when describing what any of those things mean. Here's what he said:

It makes many specific cases against the code of the XZ compression tool that Linux uses in many places. If the world had paid more attention, then a great deal of inconvenience could have been averted. Last year, a backdoor in xz caused issues in many distros, including a delay to the Ubuntu 24.04 beta program and wider concern in FOSS circles.

That is entirely wrong. The problem that affected XZ was not due to any of the problems that are raised in the article. Those problems are about whether little changes to the format could break your file or make it compress badly. The backdoor, on the other hand, was deliberately inserted by attackers. Critically, they could have included exactly the same backdoor in gzip, a program and format the article compares positively to xz, with the same effect. The people attempting the backdoor did nothing to mess with the format. They included some code that messed with SSH when it called it, and SSH can use either gzip or xz for compression purposes, so the same backdoor in either would have been effective. They tried to put theirs in xz because it's newer and would be easier to hide.

By getting this wrong, Liam is blaming people in the Linux community for not paying enough attention to complaints about formats which, had they heeded them, would have done nothing at all. The actual problem is not a new one: it is easy to commit bad code, either by accident or by design, when there aren't a lot of people reviewing it, and there are major benefits when people do, which is why this was discovered before it reached most of the public. Not only did Antonio Diaz Diaz not predict the attack on xz, he was not trying to.

White House nixes NASA unions amid budget uncertainty

doublelayer Silver badge

Re: I'm not a Yank but...

The whole concept of an executive order isn't structured because it's not a specific process. It's just a command from the President to do something. If that's within the President's powers: "Send some of the diplomats from the embassy to negotiate with that country", it works. If it's not: "Find that guy I don't like and have someone shoot him", it's illegal. While the term "executive order" suggests a neatly packaged process, and the existence of numbered documents helps to imply that, there isn't really that obvious a process.

The only question is whether the power to declare tariffs is the President's. By the Constitution, the answer is no, but neither is it explicitly denied to the executive. Congress, however, did grant the executive emergency control over tariffs. Trump has decided to argue that the US is in an emergency, so he can use that emergency control. So far, two courts have decided that he is wrong and can't, but there are some more courts, so he is appealing this in the hopes that it will be overturned. If enough members of Congress wanted, they could remove this power because they granted it in the first place. They are not trying.

In the example of NASA, it seems that membership in the group where national security restrictions apply is something the President can decide on his own. Again, that would have to be a power granted to him by Congress, and if he just made it up, he should lose in court. The original structure of most of this was that Congress was supposed to have most of the power, but for some time, Congress has decided that various parts of the executive, whether the President directly or someone who follows the President's commands, should be able to take action without them, so they have been voluntarily relinquishing some of their powers. When presidents asserted other powers, that has often gone unchallenged as well. This empowers the executive without having been the initial design or necessarily being permanent.

doublelayer Silver badge

Re: I'm not a Yank but...

And when was the last time the monarch or a monarch-appointed representative overruled the democratic government by decree? Because if that actually happened, I'm not sure the results would be as obvious as you claim. The last one I can think of was fifty years ago, when the powers were used to remove the Australian PM, which was rather unpopular and fed into an unsuccessful but close attempt to make Australia a republic. Written documents can be the subject of court rulings, which frequently tell leaders that what they tried to do is not allowed. That happens in the US frequently, although if the Supreme Court there were showing a desire to do that, this would be a more convincing argument. Still, a clear document which can be interpreted by independent people is something I'd feel a bit more confident about than hoping a person who did nothing to earn power would take reasonable action with mostly untested authority.

Laravel inventor tells devs to quit writing 'cathedrals of complexity'

doublelayer Silver badge

Re: There was a very old term for this

But what kind of efficiency, because that's generally the problem when discussing this kind of thing. For example, when you say "Use the l[e]ast amount of code to get the most amount of results", the literal interpretation means code golf. I'm sure that's not what you mean, but a less literal interpretation is not to write for the future. Write the code that achieves the goal you have right now and let expansion be later's problem. Also not a good idea.

Most considerations of efficiency focus too much on one resource. For example, it's popular to complain about RAM usage having increased, and most of those complaints are wrong, something that old-school programmers know better than anyone. They were faced with many conflicting resources which were significantly limited and had to decide how to manage them. If that meant that you waste a bunch of CPU cycles so that you can fit all this data into RAM in a compressed way, that might be the only way to get the program running on the computer they had, so they did it. The same is true if it was RAM they had to use more of to get speed out of the slow CPU at their disposal. They had to pick between those tradeoffs lots of times and they had to choose correctly each time or their software wouldn't run in the environment it needed to.

When someone uses a ton of RAM today, they might be doing it for speed, because being able to use a hundred megabytes when you probably could fit in five can make some types of actions much snappier, but even when they're not, they may be optimizing for another limited resource, with one of the important ones being developer time. It's expensive and in short supply, whereas until you get to a certain level, RAM often isn't. Libraries versus writing it yourself is relevant to that tradeoff. Even if we ignore the many ways that doing everything yourself can mean building the same untested code when someone else's tested stuff exists, there's still time spent building code that is already out there. Every manufacturer of an embedded device could write their own code to access the hardware. Every application could be written in assembly. We built operating systems and compilers to save us time, and to protect us from making mistakes. Libraries can be the same. In both cases, that doesn't mean that you should use anything and everything; it'd be a bad idea to build your embedded devices off a bad kernel (people who build them sometimes need to learn that), and a library might introduce more risks than it solves. Neither of those are guaranteed, and efficiency is not a good argument against them unless you have specific numbers about what inefficiency exists in the specific library you're considering and why the resource it's using more of is among those you need to focus on most.

doublelayer Silver badge

Re: There was a very old term for this

And on the other side, we have people who think that the right approach is to write everything yourself, which means that we end up with twenty different implementations of similar but not identical subsets of the same thing, most of them containing the same bugs, which they will eventually find and need to debug and fix manually. Meanwhile, someone who did that in a portable way found and fixed those in 2004 and has built a lot of useful functionality that will eventually be added. The only question is whether they added enough other stuff to counteract the benefits of having something mature and under active use elsewhere.

Google kneecaps indie Android devs, forces them to register

doublelayer Silver badge

Re: FDroid?

We don't know Google's terms, but because they didn't write everything in their repository, I wonder if Google might object to that. The idea is that Google wants to identify all developers, and if they didn't identify me but my code can be installed, logic suggests they wouldn't be happy about that. The problem is that logic left this process as soon as they started claiming they cared about security, so it's hard to know what they want and how they would react to any of the suggestions we make.

doublelayer Silver badge

Re: So you can't code for your own device then ?

Why do people do this? Microsoft has done a lot of things that deserve complaint, but they have not done things like this. Microsoft has had code signing for a very long time. They could have made that work with only Microsoft-issued certificates. They didn't. They could have made signing mandatory. They didn't. They could have made either of those the default and required a switch in the settings or even the registry to change them. They didn't. In the same way, they helped develop Secure Boot. They could have locked out non-Windows OSes that way. They could have pressured their OEMs to make it mandatory. What did they actually do? They ensured that non-Microsoft bootloaders and operating systems could boot with it enabled and it's optional on all machines that ship with Windows, including the hardware Microsoft makes themselves.

If I compare commercial operating systems, Windows is the one where I'm most confident I can run software of my choosing and the one that makes it easiest to remove and replace it with something of my choosing. Mac OS comes a distant second. Nothing else is in the same area. Why does Microsoft get the blame for something almost every commercial OS does except theirs?

AI web crawlers are destroying websites in their never-ending hunger for any and all content

doublelayer Silver badge

Re: “ All I know is I see regular waves of multiple companies' AI bots raiding my site. ‘

That would help with a few of them, but the problem is that many bots try to bypass them. Some bots read robots.txt and find that they're blocked. Others don't, but you can implement a filter on similar lines which detects when a bot has announced itself and blocks it then. The problem is that quite a few, possibly all, of the bots then come back but identify themselves as normal browsers instead. Now that they're no longer announcing their presence, simple filters aren't as effective. Now you need to profile their activity to try to guess whether this is a bot flood or not and ban only those bots that are engaging in it, and that's not so easy to do on your own systems either. There are measures you can resort to, but it's no longer as simple as inserting a basic config into your webserver.
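The two stages above can be sketched in a few lines; the bot tokens and thresholds here are illustrative assumptions, not a real blocklist:

```python
import time
from collections import defaultdict, deque

# Hypothetical crawler tokens; real lists (GPTBot, CCBot, etc.) are
# longer and change constantly.
DECLARED_BOTS = ("gptbot", "ccbot", "bytespider")

def is_declared_bot(user_agent):
    """Easy case: the crawler announces itself in the User-Agent header."""
    ua = user_agent.lower()
    return any(token in ua for token in DECLARED_BOTS)

class RateProfiler:
    """Hard case: the crawler claims to be a normal browser, so all we can
    do is watch per-client request rates over a sliding time window."""
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        return len(q) <= self.max_requests

profiler = RateProfiler(max_requests=30, window_seconds=10.0)
print(is_declared_bot("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # True
```

The profiler is the part that gets hard in practice: scrapers rotate through thousands of residential IPs precisely to stay under any per-client threshold you pick.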

doublelayer Silver badge

Re: A monetized internet might be the only way to solve this

The initial problem we were trying to solve was bot access, and this would not work. Whether it would work for helping more sites exist without ads is a different problem, and I don't think it would, but we can get to that. In the area of bots, though, a per-connection charge isn't going to work. At £1 per connection per month, they could easily spread their crawling across a couple thousand connections and run their scans for a couple of months. They're going to spend at least a hundred times, and probably thousands of times, as much on GPUs. They can easily afford that, and the sites that get a small fraction of that revenue will not care.

For funding sites, it would provide some funding, but consider how much it would really do. How many individual sites do you think you've visited all August, counting sites where you viewed one page and may not even have read the whole thing? I would have trouble calculating even a mildly close estimate of that. How much of that revenue would go to search engines? I perform a lot of searches, but that's a small number of page views compared to sites where I browse to a lot of pages, so if we paid website owners by number of pages viewed, the search engines would get little. How would that compare to the per-search ad revenue they get now?

Let's consider DuckDuckGo. It recently gave us some revenue and market statistics which make these calculations work well. It does not give us a number for UK market share, but it claims a 2.11% market share in the US, higher than its global one. That would mean 7.176 million active users, using a basic calculation of everyone in the United States, although since infants or prisoners don't do many web searches, that's probably an overestimate. £1 = $1.35, so these users will be contributing $116M per year. Let's assume that web searches account for one in fifty page views, again probably a significant overestimate. This gives DuckDuckGo annual revenue of $2.3M. Their global revenue now, using advertising, is $100M. I don't think they'll be cutting the ads any time soon.
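The arithmetic above can be reproduced directly. The 2.11% share and $100M revenue come from the statistics cited in the comment; the US population figure and the 1-in-50 search share are my stated assumptions:

```python
# Back-of-the-envelope: what would DuckDuckGo earn under a £1/month
# per-connection scheme paid out by page views?
us_population = 340_000_000          # assumed, roughly everyone in the US
market_share = 0.0211                # DuckDuckGo's claimed US share
users = us_population * market_share # ~7.17 million active users

gbp_to_usd = 1.35
per_user_annual = 1.0 * gbp_to_usd * 12   # £1/month, in dollars per year
total_pool = users * per_user_annual      # ~$116M/year from these users

search_share_of_views = 1 / 50            # generous: 1 in 50 page views
ddg_payout = total_pool * search_share_of_views  # ~$2.3M/year

current_ad_revenue = 100_000_000          # their reported global revenue
print(round(total_pool / 1e6, 1), round(ddg_payout / 1e6, 2))
```

Even with every assumption tilted in the scheme's favour, the payout comes to a few percent of what they already make from ads.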

doublelayer Silver badge

Re: A monetized internet might be the only way to solve this

Oops, I made a calculation error. One thousand unread messages per week will earn you fifty currency per week, not ten. Even nicer.

doublelayer Silver badge

Re: A monetized internet might be the only way to solve this

It's infeasible. The email idea was infeasible because the obvious thing to do if you're paid five cents per email you receive is to create email accounts and subscribe to every mailing list in existence. Spambots would be trying to eliminate you from their lists, but normal marketing earns you five cents per message. Subscribe to a thousand marketing lists that send a message every week and that's 10 currency units (depending on whose cents these are) per week. You don't even have to read them.

But web micropayments work even worse. How are you going to get that back, as you've stated you intend to? You could host your own website and try to get people to read it. Other than that, it looks like you're out of luck. I don't think we should be paid to comment here. Also, the same gaming that would happen to email would happen online instead. From the page maker's point of view, that means the more clicks the better. The Register used to have long articles split into multiple parts. Now, they just put the article, however long it is, on one page. I prefer that, but that's half the page click revenue right there. Less scrupulous sites could easily turn a single page view into ten or twenty clicks, and you're going to stop being happy to pay those bills when they stop being small and you still can't make them back. That's to say nothing of the privacy and security aspects, though there are a lot of both.

Whether or not we could make AI do it, that idea can't happen and any attempt at it would not be accepted. Voluntary micropayments have been tried a few times. Even those have proven unpopular failures.

How Windows 11 is breaking from its bedrock and moving away

doublelayer Silver badge

Granted. Apple can take deserved credit for testing theirs thoroughly and not releasing anything partial. Microsoft can be blamed for releasing the version that only worked on some things and improving it by stages. They still have it, though, which a few posts here are either unaware of or deliberately ignoring.

doublelayer Silver badge

Re: Intel couldn’t license Arm cores and build chips themselves.

"As a consequence I can run the same distro on an Intel laptop of a Pi and the same apt install command will install the Intel or ARM version of the application and its dependencies as appropriate, I'm not sure Windows could manage that."

Which part don't you think Windows could manage? Picking the processor architecture you chose from the packages it already built? It definitely can do that. Anyone can do that; it's one switch statement. Microsoft's got their command-line package retriever, WinGet. That does that. But you probably meant that the "running the same distro" part is the impressive bit, in which case you're right about that and wrong about the basis.
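The "one switch statement" point, sketched with hypothetical package names; real package managers (apt, WinGet) do the same lookup against published architecture tags:

```python
import platform

# Hypothetical package naming for illustration only.
PACKAGES = {
    "x86_64": "myapp-amd64.pkg",
    "AMD64": "myapp-amd64.pkg",    # Windows reports x86-64 this way
    "aarch64": "myapp-arm64.pkg",
    "ARM64": "myapp-arm64.pkg",
}

def package_for(arch):
    """Map the machine's reported architecture to the right build."""
    try:
        return PACKAGES[arch]
    except KeyError:
        raise ValueError(f"no build for architecture {arch!r}")

# The installer just asks the OS which build to fetch:
machine = platform.machine()
print(package_for(machine) if machine in PACKAGES else "unsupported here")
```

That lookup is the trivial part; having working builds and drivers for each architecture behind each entry is the hard part, which is the actual point of the comment.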

You can run the same distro on a PC and a Pi for one reason only: the Raspberry Pi is extremely popular, so distro creators go to great lengths to make sure it has compatibility. If that means distributions get held back to fix something Pi-specific, that is justified because there are so many Pis around. Try running a distro of your choice on some other ARM board and you'll have a different experience, because compatibility is actually hard. That's why there are three categories of ARM SBCs out there: mainline boards where you should at least be able to patch things in if you have the knowledge, Raspberry Pi which can do whatever it wants because it's popular enough that people will work around it, and the rest of the boards where you have to be much more careful. Windows on ARM has a similar problem, which is one reason it only runs on Qualcomm SoCs so far, although I have to give Microsoft credit that they're only working with standard firmware, which is also why devices already built for Windows on ARM are among the easier machines on which to get Linux on ARM started.

doublelayer Silver badge

And Windows on ARM has X86 emulation just like Apple has Rosetta. If you're going to give Apple credit for that, you have to give it to Windows as well. You're also giving Apple too much credit for their emulation. They had it in some versions of their OS, then they removed it. If you wanted to run some PowerPC software five years after the last PowerPC model was released (not that crazy if you kept yours around for that long), sorry: that compatibility was removed in 10.7 and you couldn't use it anymore. Apple relied on most users replacing their software before then. They're likewise planning to remove their X64 emulation in 2027.

This is a bigger challenge for Windows because there are many people who use it because it can still run some piece of ancient software. While I know some people who have the same situation with Macs, the only person I know who is still doing it has a collection of PowerPC Macs he still uses because newer things won't run the software. Specifically, he works in audio production and became attached to specific pieces of software. He could probably use something newer but has decided that he'd rather stockpile PowerMacs than learn two decades of new interfaces.

doublelayer Silver badge

Re: Windows is a dinosaur

Most of the problematic stuff was already fixed when they made modern Windows run on ARM, which it does well. By now, they've identified and likely removed most things that might have crept in that are X86-specific. That doesn't mean porting it to RISC-V is just changing a compiler setting, but it's a lot easier than you're claiming. And before we pat someone else on the back, check what adding RISC-V support to Linux looks like. It's not simple there either. There are lots of patches, mailing list fights, regressions, and manufacturer-specific editions to get there. Linux is ported to more things because it's used on more things, not because it's got the architectural trump card that makes it easy.

Cloud computing has become so normal, it's invisible

doublelayer Silver badge

Re: Unanswered questions

"I'm curious if "All cloud" includes private clouds, from VMs/VM hosts upwards."

It really depends what argument you want to have. Cloud is not a very well-defined term. Some use it to mean "outsourced infrastructure which you rent from someone else". Some use it as "computing that you provision as needed rather than having dedicated machines for specific purposes". I've had debates between people who thought each of those was good and each was bad. The people who thought that running virtual machines across hardware was a bad idea, preferring having specific hardware for each set of tasks, are losing that argument, at least in popularity. Those who only argue about whether you rent or buy can point to large groups of people doing either as they like. I'm sure that someone can come up with a third definition for cloud and can still argue about that one too.

"Are there any unbiased analyses out there?"

Tons, but they're unbiased in that they aren't trying to sell you one or the other. There are no analyses that don't end up reflecting one person's or one company's experience with their limited use cases. Of course, you can find biased versions that aren't trying to sell you something, because this is an area with strongly held opinions even for those without a profit motive. For example, there was an analysis from a company that had used AWS as a massive outsourced hard drive and was therefore paying every month to store petabytes of data and would have to pay again whenever they wanted to read from it. They found that cloud costs way more than anything else and you can save money by not using it. That's not bias; they were correct and probably have honestly held opinions, but unless your use case for cloud is as a massive managed hard drive, it's also not the most relevant to whatever you might be doing. If I gave you my experience, it would end up being the same thing. A useful data point, especially if you're trying to do a thing that works similarly to what I'm already doing, but that's all it can be.

The air is hissing out of the overinflated AI balloon

doublelayer Silver badge

Re: Came here to see ...

If only all that revolutionary, always-right tech were available to the people willing to pay lots of money for it, because my employers pay the big AI companies for big AI models, and we don't get that. Take, for example, the claim of it winning top coding competitions. For one thing, there aren't a lot of actual coding competitions. There are several metacoding competitions like the obfuscated C contest, code golf, etc. There are some hackathons that are open to the public. But the important thing is that people there are developing different things. There aren't many contests that actually test one programmer against another, and the main reason is that few people with skills would compete, because people hiring programmers want people who can either do a good enough job or do something in a particularly tricky area which requires a lot more specialized knowledge. In real life, we have code generation, and we have cleaning up after it. If it was so good, why do ordinary employees have to read over and correct its output when it's writing small utilities? Some of its output is valid. That's far different from the quality you claim.

Handling language. That's great. And so many languages, too. So it should do a pretty good job at translation, right? I mean probably not literary translation; that's tricky, but translating some simple factual statements should work. As it happens, I also got a chance to see that in action recently, because I was localizing something into French, which I don't speak. The person who was going to do the translations was delayed, so I made the first version with AI translation as a stopgap. What did she say when she reviewed it? "This is useless, I've done it from scratch." Before you suggest it, this was not her trying to keep her job, because this was an open source project for which neither she nor I was paid a thing. And French is a language with plenty of training data. Machine translation is fine for understanding a website you want to read, but if it's not good enough for translating simple sentences in a common language, why should I expect it to do well with one with little training data which nobody at the AI company is qualified to judge?

And on that Math Olympiad performance, if that problem solving ability is so strong, why can't we run that model? It hasn't been released. I'm not actually sure what I can do with that anyway, but if I come up with a use case, I can't run the model that's capable of it. This is an issue because last year, similarly confident statements showed up claiming that a silver medal performance had been won at last year's Olympiad. What actually happened? The silver medal was truly and fairly won as long as the model didn't have to comply with the time limit and got some help parsing from some professional adult humans working in AI who understand both complex mathematics and how to prompt their LLM well. The articles I've seen suggest that the time limit was in play this year, but they're not too clear on what other conditions the thing had, and since you can speed up the model by throwing more computing at it, I have reason to ask. GPT5, on the other hand, isn't generating valid proofs when I ask for them. If I find a use for a proof-generating machine, I don't have one, and I'm wondering if maybe OpenAI doesn't really have a good one either.

doublelayer Silver badge

Re: Dot Dumb

And if you reread the thread, you'll see that's what they were responding to: "So you think the damage will be contained to AI bubble stocks? Perhaps, but that's famously what Federal Reserve chairman thought with regard to subprime mortgages in 2007."

Good morning, Brit Xbox fans – ready to prove your age?

doublelayer Silver badge

Re: "it's blaming the UK Online Safety Act."

I'm having trouble understanding why chatting to other people in a multiplayer game or related activities automatically counts as adult content. This means one of two things:

1. You're right, it doesn't, and this is Microsoft lying.

2. Microsoft also didn't think this counted and got notified that someone who has the opportunity to make things annoying did and was going to require this.

Given how many stories I've already seen of the Online Safety Act applying to things where no logical link exists between them and what the act ostensibly is there to block, option 2 seems the more likely to me. If you have a law that can apply to random things on a whim and has penalties for noncompliance, expect that lots of people will have to add the blocks at a moment's notice when their unrelated services are suddenly deemed covered.

Putin on the code: DoD reportedly relies on utility written by Russia-based Yandex dev

doublelayer Silver badge

This is not special

This library is no more an injection risk than any other. While the dev could theoretically be more easily coerced by the Russian government to poison it, any change would also be more obvious in a small library with a single developer, where large code changes that add something malicious stand out. The problem is not that this little piece of open source code is written by someone in Russia; there's far more code written by people in Russia. The problem is loading code into anywhere sensitive without having some reason to think that it's secure, or precautions in place if it turns out not to be.

If Russia wants to do a supply chain attack, they have a lot of NPM modules to choose from. Many of them are much larger and not well-maintained, so if they can trick the one developer into accepting their updates, they could do similar things. They could easily decide that fast-glob looks like a nice one to do it to, unless articles like this one result in more scrutiny on that package or its removal. There are many factors more important than a dev living in Russia in determining the risk or damage of that happening.

Who are you again? Infosec experiencing 'Identity crisis' amid rising login attacks

doublelayer Silver badge

If you allow users to set their own username, they will almost always choose an ID that's not at all unique, and for a good reason: the unique part should be in the password. Anyone capable of choosing a unique username would get the same level of security by applying that to their password instead. Anyone who isn't won't miraculously start doing it because you require a username.

Fine then, we'll choose usernames for people and require them to do that. I've had a couple services like that. Here is the username I was issued by one of them. If you can identify the service that did that, have fun trying to log in*. The username is: iat2riyowj35pn27

If you're going to give someone a random, non-hashed, non-secret username which they are going to forget and need sent back to them in plain text, you'd be better off having an extremely strict password policy instead. They'll be similarly annoyed, but at least the random part would be attached to the thing you're not storing and sending.

* Not only does that account not exist, neither does the service.
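A quick entropy check backs the point: a bit of randomness counts the same whether it sits in the username or the password, but only the password is hashed and kept secret. A minimal sketch, assuming the quoted 16-character lowercase-plus-digit format:

```python
import math

def random_bits(length, alphabet_size):
    """Entropy, in bits, of a uniformly random string of this length."""
    return length * math.log2(alphabet_size)

# A 16-character username drawn from a-z and 0-9, like the one quoted above:
username_bits = random_bits(16, 36)

# The same 16 random characters appended to the password instead:
password_extra_bits = random_bits(16, 36)

print(round(username_bits, 1))  # ~82.7 bits either way
```

The roughly 83 bits are identical in both cases; the difference is that the password copy never has to be stored or emailed in plain text.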

Trump made Intel an offer it couldn't refuse

doublelayer Silver badge

And the point is? Yes, there are a lot of chips that aren't made by TSMC. They usually aren't very fast because they don't need to be. Which means that, when you need the fast ones, there are not tens of thousands of places that make those. If you have decided that access to fast chips is critical to your national security, then the existence of manufacturers of slow ones is irrelevant. If you have decided that access to fast chips is unimportant, it's also irrelevant. If you've decided that access to slow chips is also important, only then does it become relevant, in which case you would try to make sure more IC manufacturers were based in the US.