
* Posts by doublelayer

10898 publicly visible posts • joined 22 Feb 2018

Wikipedia's overlords bemoan AI bot bandwidth burden

doublelayer Silver badge

Re: If you can't beat `em. . .

Oh, what an excellent idea. We really must get on to Wikimedia and tell them to make that. It could look something like this. But you know what would be even better: how about they make a version that can be accessed like a web server, so you don't have to change your code at all to scrape it, and it can all be done on a local computer? To make downloading as cheap as possible, they could use mirrors and individuals. That could look something like this.

This is not like the copyright problems LLM creators also have. Wikipedia doesn't mind having bots access their content. They mind having so much bandwidth usage on their servers when any bot creator who put about five minutes into researching this could use either of the solutions I linked to. Those files include the images and video as well as the text, although if you just want the text, they both have already split that out for you.
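For anyone who hasn't done the five minutes of research: a minimal sketch of pulling the official dump instead of scraping. The dumps.wikimedia.org URL layout here is the real one, but the helper names are mine and the exact filenames are worth checking against the dump index before relying on them.

```python
import shutil
import urllib.request

def dump_url(wiki: str = "enwiki", date: str = "latest", articles_only: bool = True) -> str:
    """Build the URL of a compressed XML dump on dumps.wikimedia.org."""
    kind = "pages-articles" if articles_only else "pages-meta-current"
    return f"https://dumps.wikimedia.org/{wiki}/{date}/{wiki}-{date}-{kind}.xml.bz2"

def download(url: str, dest: str) -> None:
    # Stream to disk so a multi-gigabyte dump never sits in memory.
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)

print(dump_url())
```

Swap in one of the listed mirrors to be kinder still, and Kiwix will serve the same content as a local web server if you really must point an unmodified scraper at something.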

doublelayer Silver badge

I don't know about that. LLM companies already burn a ton of power on a lot of computers; they could afford to do small proof-of-work puzzles for their page views, especially as any challenge a site uses has to be quickly solvable by a low-end, several-years-old consumer machine. It would help, but I'm not sure it would help enough unless the puzzles were large enough that individual users suffered slow load times and high heat and power draw from everyday browsing. In turn, that would embolden people who lock access to their services behind apps, because if you use the app, at least you don't have to do a puzzle for every image on the Wikipedia article you loaded.
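To put rough numbers on the asymmetry, here's a minimal hashcash-style puzzle of the kind I mean (the function names and difficulty are made up for illustration). Solving averages 16^difficulty hashes; verifying is one hash. A level that barely warms a laptop is background noise to a scraper farm.

```python
import hashlib
import itertools

def solve(challenge: bytes, difficulty: int) -> int:
    """Find a nonce so SHA-256(challenge + nonce) starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in itertools.count():
        if hashlib.sha256(challenge + str(nonce).encode()).hexdigest().startswith(target):
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    # The server side: a single hash, regardless of difficulty.
    return hashlib.sha256(challenge + str(nonce).encode()).hexdigest().startswith("0" * difficulty)

nonce = solve(b"GET /wiki/Example", 4)  # ~65,000 hashes on average: trivial for a GPU farm
assert verify(b"GET /wiki/Example", nonce, 4)
```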

doublelayer Silver badge

Both. Wikipedia doesn't use them because it is intentionally open to requests from all types of systems, and bot defense blocks plenty of humans who do something slightly unusual; that's why using a browser other than the big four, or the right browser at the wrong version, will get Cloudflare to complain. And yet, Cloudflare's protection mechanisms are fragile and not that hard to bypass. They're annoying if you are a small bot creator who just wants to scrape one site, so they sometimes work, but if your business is scraping the entire internet because you think you own it, you can bypass those protections relatively easily if you put your mind to it. I've bypassed Cloudflare's blocks with a bot before and it didn't take very long. They have multiple levels, and my code probably wouldn't have done well against any but the one the site I wanted to access was set to. That was a few years ago, so if I dug out the code and tried it, it probably wouldn't work anymore, but it worked then and I could act in a very bot-like manner without being blocked.

Introducing Windows on arm. And by arm, we mean wrist

doublelayer Silver badge

Re: Windows on Arm has been around since the Surface RT

Windows Phone may have improved significantly, but that doesn't mean that it had what was needed to last in the market. By repeatedly changing things and breaking compatibility, they annoyed users and developers, most of whom would never return. That might also have improved their software, but without sufficient developers, they'd never get the third-party software that users wanted, and without enough users, they wouldn't turn that around. At some point, they had to give up on a failing product, even if that product had a better interface and design.

I'd be interested to hear whether you think they had any reason to expect it would have grown in popularity had they continued. I never used Windows Phone myself because of those factors. By the time I heard praise for its interface, Microsoft was fresh off abandoning users of Windows Phone 7 and 8, and I didn't want to buy something only for it to be abandoned again. Update lifespan had been annoying me about Android, and Windows Phone seemed worse. I probably missed quite a bit about it, but as a fan, did you see things that could have helped its longevity other than a nice UI?

China hits back at America with retaliatory tariffs, export controls on rare earth minerals

doublelayer Silver badge

Re: Some time ago I made a few posts about the USA being 'number 1'

Why do you assume that this article or this paper is painting US tariffs in a good light? While this article only uses the word "weaponize" for China's actions, do you see them praising the US's? How about the many other articles describing how damaging those tariffs are expected to be and demonstrating that the US is also using its tariffs as a weapon? I think you are mistaken in ascribing that opinion to The Register or to this author.

Not that you need to work hard to show how the US is weaponizing tariffs; the announcements by the people imposing them make it very clear that they are intended as offensive actions against people seen as competitors at best, enemies at worst. Nearly every discussion of relations between the US and another country involves a tariff threat to make that country do something desired. Admittedly, that's just one of about four things the US appears to think a tariff can do, and tariffs are not great tools for any of them, but they're really not hiding the intent to use them as a punishment. In fact, even when they consider a use of tariffs that's less often seen as a punishment, they still phrase it like one.

One could make a pro-tariff speech about the self-sufficiency and local prosperity they are intended to create, and many who support them have made such an argument before because it sounds the most optimistic, but the US isn't making those statements central to its announcements, instead focusing on all the bad things bad countries have been doing to it, mostly without clarification. They have made it ridiculously obvious how they see tariffs and the people on whom they are placed. While some journalists will probably defend these or reframe the arguments to look more sympathetic, I have not seen any on The Register fail to note the statements made or the likely results.

On the issue of AI copyright, Blair Institute favors tech bros over Cool Britannia

doublelayer Silver badge

Re: "text and data mining"...

I've been in your position, and I do wish that companies were more accepting when people offer to help improve their services. I would also like to weaken copyright protection for some types of compilations. For example, when Oracle and Google were arguing over the copyright to function definitions in an API (rather than their implementations), I was firmly on the Google (they should not be copyrightable) side of that argument. It sounds like I would want what you copied to be freely available as well, but that would be a blanket policy, not one triggered by their action or inaction, and if my assumption about what the data entails is incorrect, if the publicly-available sets contained more original work for instance, I would have the opposite opinion. The challenge is that I cannot bring myself to accept that their decision not to accept your help is severe enough to cancel copyright over the work. I derive this unwillingness from two mostly independent reasons.

The first is that there are a lot of complications whenever an external person offers to help with things. I find your descriptions believable and I stand corrected that you had no commercial motives, but I've had experience with the alternative. I work in security, which means that my employers are frequently open to submission of security problem reports from the public, and I have reviewed these. They occasionally turn up useful things, which is why we run the programs and offer to pay people, but I've also had to deal with many people who offer things that are not security problems, either because they are attempting to get a payout or because they don't understand systems well enough to know what we could fix and what has a security-related outcome. This means that I've frequently had to decline submissions. And no, I'm not the guy who declines real security issues because I don't want to fix them; those companies don't have bug bounty emails in the first place. I have submitted problems to those people before, though. Using anything where a declined offer of help is sufficient evidence would require a lot of work to filter out unreasonable submissions, and I am not comfortable assuming that would happen.

The second is that I generally oppose restrictions on copyright which hinge on an action. Something should be covered or not based on simple rules, rather than attempting to control what the creator does with it at all times. Many such regulations have been suggested, usually by people who would really prefer that copyright be eliminated but can't find many who agree with them. If you don't update your website, that doesn't make your work less valuable. It's quite possible that if you did update it, your work would be more valuable, but copyright protects work because of its original value, not to mandate the creation of any available additional value later. I may be annoyed by people squatting on things they have, but I don't think that qualifies me to punish those who do; it certainly doesn't for people who do that with physical or financial things, so I don't see why it should if those things happen to be copyrighted works.

doublelayer Silver badge

Re: "text and data mining"...

I generally sympathize with your view, but I can't entirely agree with it. A lot of people try to make the argument that their minor change was transformative and therefore should mean they get free, unlimited access to use, reproduce, and sell others' work without compensating those others. My ideal law would do quite a bit to restrict copyright holders from making unreasonable claims, but they should also protect those who created the work from unreasonable claims.

I find some of your claims unreasonable. For example, the claim that, if they don't make improvements to their site's layout, they should lose their copyright, and that an acceptable proof of their failure to improve is that you offered to work for payment and they declined. Not changing the UI is not a good enough reason to cancel copyrights. There are plenty of reasons not to pay you for improvements, including not thinking you are charging a reasonable fee for the work, not thinking your UI improvements are a good idea, or thinking that your changes will be difficult to maintain. All of those are logical considerations. There are already protections in copyright case law specifically for search indexing, which may mean your example is already unaffected.

Amazon's Project Kuiper satellites now boarding the rocket to relevance

doublelayer Silver badge

Re: Amazon subsidised satellite comms: A loss leader?

I doubt it. Alexa could afford to be sold at a loss because Amazon expected people to buy their music service at the least; the device can't do much else and advertises the subscription whenever you ask for a specific song without it, and profit margins on that must be huge. They weren't taking that big a loss anyway, because the hardware is just a bunch of mics and a WiFi module in a box, and while processing all the recordings on AWS added to the cost, speech recognition in bulk isn't expensive.

Satellite is very expensive and there's not much they can do to earn it back. Even if you posited the most invasive Amazon advertising possible, they can only get so much out of that. They can't really make it dependent on any other service they have. It might have a small positive effect on Prime Video when rural customers who didn't previously have the bandwidth want to stream things, but that would be the same for others' streaming services too. I wouldn't expect low prices because they don't have a way to earn the subsidy back, and betting that much on the hopes of some advertising is a gamble unlikely to pay off.

System builders say server prices set to spike as Trump plays customs cowboy

doublelayer Silver badge

No tariffs on data

Until someone figures out how to apply them, tariffs only cover physical things, not software or data. Therefore, if I were a company considering putting a datacenter in the US, I'd consider putting it in Canada or Mexico instead. Especially with Canada: they have plentiful access to power and lots of high-bandwidth links to US networks. That wouldn't work for things that are incredibly latency-sensitive, but a lot of the internet isn't. Avoiding those tariffs would make for a large incentive to invest that DC cash in a different country, quite the opposite of the intended result.

Raspberry Pi not affected by Trump tariffs yet while China-tied rivals feel the heat

doublelayer Silver badge

Except that "original" varies depending on your perspective. Really, it just means "manufacturer that isn't me". The manufacturer that builds something around the Pi was the original manufacturer of the bit with the interface and probably the power management. From the Pi's perspective, the other company is the OEM because they make the boards the Pi has to connect to. From that company's perspective, Raspberry Pi is the OEM because that's what they must interact with.

From my perspective, the place I'm most likely to call the OEM is the place that manufactured the thing last. I would categorize manufacturers before them in the chain as component manufacturers. So I think the version in the article is correct from the reader's perspective as well as RPi's, but your comment is correct from the other side.

Nvidia’s AI suite may get a whole lot pricier, thanks to Jensen’s GPU math mistake

doublelayer Silver badge

Re: Nvidia rules the market

A monopoly can be created or maintained by unethical means, e.g. by paying to damage the competition. A monopoly can be created or maintained by external control, e.g. one company is given the license to use wireless spectrum and nobody else gets it. But a monopoly can exist without either of those. A company could simply produce the product people want to buy so consistently that its competitors can't get the business. That's not illegal, it's not impossible, and it can still lead to monopoly. Having become one, the company concerned may act in any way it likes, whether that is exploiting its dominant position to radically increase prices or still trying to improve and acting ethically. All that's required to be a monopoly is having almost all of a market, however that came to pass.

Nvidia is not a monopoly in GPUs of all kinds. People still select AMD for many use cases. However, Nvidia may have one for cases where the software concerned requires CUDA. Markets can get very specific, and where you divide them depends a lot on your perspective. Consider a parallel: Windows isn't a monopoly, and never has been, in the sense that there has always been some other operating system you could install on your computer, but it is a monopoly in areas where no other operating system will run the software the buyer needs. It can act like a monopoly in that market while not acting like one across all OSes if it can create a separate strategy for the two submarkets. If a lot of AI people, or people doing that kind of work, would not consider an AMD chip, then Nvidia has the opportunity to act as a monopoly over them until they change their minds. I am not among that set, but from discussions with some who are, AMD is not usually considered because the software they intend to run may not work if CUDA isn't available.

LLM providers on the cusp of an 'extinction' phase as capex realities bite

doublelayer Silver badge

Maybe you have indeed found something the LLM can manage well enough and a way of using it that extracts that quality. The opinion is unpopular with me because I've heard several people express it, and most of them have generated bad results; their version of "as good" means there's text in the file that looks like what they were supposed to produce. I do not know how to generate consistently good results with LLMs, and neither do most of the people I've seen use them, but some of them are convinced the output probably is good, or are going to pretend it is to get their work "done" faster even if it isn't. Without knowing you, I can't tell whether you've allowed quality to slip and would justify this, or whether the doc is as well-written, and importantly as accurate, as you would have made it by hand.

doublelayer Silver badge

Re: Canary in a coal mine or the ramblings of a dead parrot ?

Read more comments from the same source and you'll see a lot more of them. It's something in the weights for whatever generates the associated word salad along with "methinks" and the username on whatever it replies to.

doublelayer Silver badge

Re: Conclusion probably right, comparison probably wrong

The question in the report was whether the market could sustain more than three players, which is what I was responding to. My conclusion was that it could, but that possibly, not long from now, demand might decline so much that even those shut down. However, I doubt that will happen either. Enough people use the models that have already been generated that I expect some will continue long after the bubble has popped. The big companies may no longer get massive server clusters to build another model, but for a lot less money, you can still build more and more wrappers around the models you have. Models cost less to run than to build, and if you make them small enough, you can make the users run them and still charge for doing so.

I've seen plenty of negative uses of LLMs. It can be tempting to think that maybe one day the investors will pull the plug and they'll all stop, but this is not very likely. As long as people are still willing to pay for that, someone will still charge them for running it. I don't expect people who are already running LLMs in their workloads to stop until those LLMs cause single, very big problems that are publicly linked to the use of that model, rather than the numerous but small problems we get today. I'd like the shortcut out of that, but I don't see it and I'm pretty sure loss of investor confidence won't be enough.

doublelayer Silver badge

Conclusion probably right, comparison probably wrong

The comparison they've made is that the cloud market could only sustain three players, and the LLM market is similarly costly. That comparison is faulty for the simple reason that there are a lot more than three cloud companies. They're the big three, not the only three of importance. There are other large, international ones. Those three might not worry too much about the likes of Oracle, IBM, and Alibaba as competitors, and they're probably right, but those three are quite large in themselves. There are a lot more medium-sized cloud providers out there, many of them also profitable. They probably won't rise up and overtake the big three any time soon, but they don't have to, meaning that if those companies stumble, competitors will be there to change the market.

So if their comparison is so flawed, does that mean that LLM companies will follow the same trend? I don't think so. A cloud provider can be a place with a couple of datacenters and some automation, probably mostly open source. They can scale slowly if need be, and they can relatively easily merge their business with another provider to grow their user base. None of those apply to LLMs: you can't build a tiny one and expect any users, you can't make small improvements to a model and expect it will get much better, and two small models added together won't make anything. In the end, their conclusion that a lot of companies might run out of funds seems reasonable.

Except that might not give them the same result. The prediction that models are just too expensive to build became less clear-cut when DeepSeek announced their cheaper approach. It's not clear how honest they were about how cheap it really was, but it does mean smaller companies have a way of building a model of their own without raising OpenAI levels of investment. So, after all, I don't think the bubble will pop for the reason this report suggests. I think it will pop when investors eventually realize that lots of companies really can make new models, but that no matter how many they make, they never make one that doesn't make mistakes, and they can't find enough people to pay for the ones they already have.

Intel's latest CEO Lip Bu Tan: 'You deserve better'

doublelayer Silver badge

Not exactly. If you want to run an LLM of that size, that's probably going to be your best bet. However, that is normal RAM that the GPU can access. It's LPDDR5X, not the GDDR7 that you get with the latest NVIDIA cards. That's going to run better than a normal CPU because the GPU is able to access it, but it is still not the same.

Also, it isn't only Apple that can manage that. AMD has the same thing available.

doublelayer Silver badge

If running a 500b parameter model is what you want to do with your machine, the CPU you need is different from the CPUs anyone else needs, or maybe it's irrelevant. The biggest limiting factor when trying to do that on typical consumer kit is that you need lots of RAM. Even if you quantize it to FP4, that's 250 GB just for the weights, not exactly the typical amount included in the average desktop. But Intel isn't making the RAM, so let's ignore that elephant for the moment. GPUs are faster than CPUs at running these models, but I assume you're not considering that because of the RAM shortage: while I can get 128 GB of RAM into a desktop if I pay enough, getting even that much VRAM is not very likely. NPUs may eventually bridge this gap, and AMD's tend to be faster than Intel's right now.
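The back-of-envelope arithmetic, with a made-up helper name, for the weight storage alone (KV cache and activations come on top):

```python
def weight_ram_gb(params: float, bits_per_param: float) -> float:
    """GB needed just to hold the weights: params * bits, converted to bytes."""
    return params * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"500B params at {bits}-bit: {weight_ram_gb(500e9, bits):.0f} GB")
```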

That means the CPU you want is exactly what you said you didn't: low performance per core and lots of cores. Running an LLM benefits a lot from parallelism, meaning that the more raw compute the CPU can put out in total, the faster you'll get your results. There are a lot of tasks that don't benefit from that, either because the algorithms involved are stubbornly serial or because the software wasn't written that way; that's why a lot of things benefit from a smaller number of cores that post much nicer single-thread benchmarks. LLMs are not among those uses. As increases in single-thread performance slow down, both Intel and AMD have been finding ways to increase their core counts. In many cases, Intel processors get higher benchmarks with faster performance cores and more numerous efficiency cores at the cost of much higher power and heat, whereas AMD often gets higher numbers for performance per watt, though it has been doing some power inflation of its own recently. In both cases, running such a large LLM is not something a lot of buyers are looking for, so most consumer chips are designed around what buyers are actually doing with their hardware.

RISC OS Open plots great escape from 32-bit purgatory

doublelayer Silver badge

I'm not sure it would have real PR benefit. Most of the people who are going to license their designs aren't going to care. As nice as it would be to see them fund another OS, I think it would be very hard to sell to their finance department. I'm interested to see it happen, but I think it would take a lot of effort to make it functional and stable. That difficulty would probably also make it hard to sell as a PR exercise, because if it doesn't happen, ARM has to choose between adding more money and hoping that does it, or the less PR-friendly article about how ARM spent a lot on something that doesn't boot or falls over often.

Congress takes another swing at Uncle Sam's software licensing mess

doublelayer Silver badge

Of course the managers can track what software they bought licenses for. The problem is that there are thousands of managers who each bought something, and they sometimes didn't have to because the manager above them already bought it but they didn't know that. Alternatively, they did need it and the managers above them didn't buy it, but it would have been more efficient to buy in bulk with several other managers they don't speak with; trying to set that up would have taken a lot of time and paperwork, whereas applying for budget approval for just the few licenses they needed was easy. Or they bought it, but they could have done just fine with something cheaper or free. Or they couldn't have managed with something open source because they have no developers, but if you add up all the people in that situation, it suddenly becomes profitable to employ some developers centrally to fix or improve that software, if only everyone could use it and send their suggestions.

It's not surprising that most of these things go unnoticed in general operation given how big the institution is. Unless you proactively search for inefficiencies like that, they'll just build up, a tiny one at a time, and it happens all the time, at least in my experience, in sufficiently large private companies. In fact, I've seen similar things at surprisingly small companies as well: for example, the company where one person was buying new laptops for their reports while another person was having me erase spare laptops and put them in a closet, in a company of about fifteen employees. Fortunately, the person buying them asked me to recommend specifications, and I was able to take some out of the closet and suggest those.

doublelayer Silver badge

Re: DOGE is the one FINDING this!

LibreOffice would save even more if all you need Microsoft or Google for is a word processor or a spreadsheet. If you need other things that are included in those subscriptions, you'll need more open source software. Your confident assertion of simplicity shows the problem with it. A lot of improvement can be made by shifting to open source software, and it's likely that it exists for almost all the things the various departments and agencies need, but if you try to do it on a simplistic, single approach without considering what specifically the needs are, everyone is going to hate it and Microsoft will be back as soon as they can get rid of you.

I've seen attempted migrations to open source software be torpedoed, not by the software, which was perfectly able to do everything the users needed, but by adherents of open source who refused to see problems with it, even when those problems could have been worked around. If you need to replace Office365, you need to ask the people which parts they're using and which parts they don't, and only then can you even give them the list of things they need to do the same thing.

Malware in Lisp? Now you're just being cruel

doublelayer Silver badge

Re: More a failure of anti virus software, I feel ?

There was some malware that cleaned itself up after doing something, but it wasn't that common because it wasn't very useful. After all, most of the reasons to install malware somewhere benefited from it staying there for a while, whether it's ransomware, a botnet member, looking for bank passwords, or messing with users. Even malware that just scanned once for something interesting would probably stay around to increase the chance of spreading. The most likely versions to self-delete would be things targeted at a specific victim that didn't need to spread, but even those were unlikely to, because if you were targeting someone specifically, you probably wanted to do plenty of things to them.

China cracks down on personal information collection. No, seriously

doublelayer Silver badge

Re: Just for a change

Oh, those are illegal in the US too. Just as they're illegal in the EU. Now they're illegal in China, except they were already illegal in China years ago. Like the other countries mentioned, the problem was that enforcement was very spotty. A few abusers got investigated and a few of them paid tiny fines, and everyone else continued to act with impunity. China's said several times that they were going to get real serious about this, trust us. They've been saying that about a bunch of other things. They've yet to get very serious about any of them.

Microsoft walking away from datacenter leases (probably) isn't a sign the AI bubble is bursting

doublelayer Silver badge

Re: "gigawatts" capacity ?

Not necessarily. You may want to be able to change the output frequently. For example, I might specify a team of five programmers whom I intend to migrate between projects. Their productivity may differ from project to project, but that will depend on the specific people and tasks involved. If instead I try to assess the price for completing the project, I have to guess what function of X programmers and Y days will arrive at that result, and how difficult and expensive each point along that curve will be. I can know what the intended output is without being able to exactly compute the inputs needed to obtain it, which is the unfortunate reality for managers everywhere.

With DC capacity, it makes a lot of sense. If you know the power and heat capacity of your build, but it turns out that nobody wants to train LLMs by the time you've built it, then you know how much normal compute you can pack in there. You may now lose money on the build because you had planned on people renting more GPUs, and maybe you even bought GPUs you can't fully rent out, but you have the number you need to figure out what else you can put in there and still make money. You also know how much you can scale most of the time: if NVIDIA produces a new GPU that's twice as efficient, you can probably roughly double the amount of compute coming from that DC. There can eventually be other limiting factors, and you'd have to calculate overhead from the other boxes, and maybe you don't want to do that anyway, but power is so often the limiting factor that it is the number from which you calculate everything else.
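As a toy example of why the power number is the one worth quoting (the rack figures and PUE here are illustrative, not from the article):

```python
def racks_supported(site_mw: float, kw_per_rack: float, pue: float = 1.3) -> int:
    """Racks a fixed power budget supports; PUE covers cooling and other overhead."""
    return int(site_mw * 1000 / (kw_per_rack * pue))

print(racks_supported(100, 40))  # a 100 MW site filled with dense GPU racks
print(racks_supported(100, 10))  # the same site refitted with ordinary compute
```

The site's megawatts stay fixed whichever way the market goes; only the divisor changes.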

doublelayer Silver badge

Re: "gigawatts" capacity ?

"And comparing my 2013 CPU to the leading CPUs today, it IS clear that POWER does the work. My 3220 2-core is 53 Watts, I see faster CPUs sucking 90 130 190 Watts."

There are those, but there were those in 2015 as well. There are also CPUs that are much faster while maintaining the same TDP. For example, the Intel Core Ultra 9 285T has an official TDP of 35 W and benchmarks 179% above yours on single-core performance, which isn't far from your 3x requirement. It benchmarks much higher on multicore, but that's because it has 24 cores instead of your 2, so that may not count. You could argue that this is because it can burst up to 112 W, which is fair, but it won't stay there very long as heat builds up, and the performance wasn't only measured at that peak.

But if you're particular about that point, let's instead use the AMD Ryzen AI 9 HX PRO 370. Not quite as fast, only benchmarking 147% higher, and it only has 12 cores. However, the default TDP is 28 W and it can only go to 54 W, so no big bursts, and it's probably more efficient than your Pentium most of the time. Single-core performance may not have stuck to doubling every eighteen months, but it was obvious it wasn't going to, and performance per watt, if you can use multiple cores, has maintained that rate so far.
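Putting the figures above side by side (the benchmark multiples are the ones quoted, normalized to the G3220; treat them as rough):

```python
# score = single-core benchmark relative to the Pentium G3220; tdp in watts
chips = {
    "Pentium G3220 (2013)":  (1.00, 53),
    "Core Ultra 9 285T":     (2.79, 35),  # "179% above"
    "Ryzen AI 9 HX PRO 370": (2.47, 28),  # "147% higher"
}

base_score, base_tdp = chips["Pentium G3220 (2013)"]
for name, (score, tdp) in chips.items():
    ratio = (score / tdp) / (base_score / base_tdp)
    print(f"{name}: {ratio:.1f}x single-core perf per watt")
```

Either modern chip comes out at well over 4x the single-core performance per watt of the 2013 part, even before counting the extra cores.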

EU OS drafts a locked-down Linux blueprint for Eurocrats

doublelayer Silver badge

Re: Monocultures are vulnerable

"The default behaviour is that data is securely contained to the application and authorised users, exporting to standalone files that can be emailed around or copied to a USB stick needs to be specifically implemented by the system, instead of being something you have to try and block on a case by case basis."

And that, as a result, means it now has to be managed on an app-by-app basis, which is much more complex and annoying than managing it on a system basis. IT departments already do things like forbidding USB access. You don't need each application to do it for you, and if you do it at that level, the result is annoyance when you want to run two applications on the same file, because application number one won't let you access the file outside of it.

doublelayer Silver badge

Re: Why even have a local disk?

So email is persistent, but the rest of my files aren't? I do not understand why this is an improvement. If email weren't persistent, then you couldn't read any of it with only local access. If files were, you could work on them too.

I think this is the problem that Chrome OS, and every Chrome OS-like system will always run into. At a different scale, it's the problem that every cloud-based application runs into. There are some minor advantages possible with everything on a server, and there are some much larger ones possible if everything is local, but some local things and some remote things often gives you the downsides of both and the benefits of neither. Many of the advantages of remote data only work in specific scenarios that not every user is in, so these systems tend to evolve to having local options anyway, moving you into the both downsides area.

doublelayer Silver badge

Re: Why even have a local disk?

What are you going to put on the local disk that helps if you're at home or on the road? On my work computer, I can do some work without a network connection because my code and my text editor and my plans are on there, but it sounds like all of those are supposed to be remote. I can work efficiently from a mobile hotspot because most of the big data is already on my disk and I just have to pull modifications, but again, it sounds like that would be remote too.

Unless you just want the OS to be immutable and the user would otherwise be able to install applications and store data freely, I'm not understanding what the disk does for you. If you are planning to have that, then your immutable system isn't much different than existing ones. I can replace my kernel, and if malware did it, it would be a problem, but malware capable of doing it is capable of doing anything else it wants already and the malware that I'm most likely to get will not bother to replace it.

If my original idea was correct, then having a disk won't help at all. You'd still need one because your terminal OS has a browser in it and those need updating frequently. It would be prohibitive to replace ROMs for that or to pull updates from a server every time it turns on, so some persistent storage would be required. But having one that the user can't use at all wouldn't change the situation. I'm still not convinced it's a good idea except in some narrow use cases. If you need a computer which is blank when investigated, then it could be useful, but there are other ways of having a clean computer that may be more reliable.

doublelayer Silver badge

Re: Perfect for running the Laundy

My problem with the idea is that all-remote data often makes the data fragile. For example, let's say I need a bunch of credentials to access things because my employer hasn't got a full SSO setup. They want me to use random passwords and a password manager. Where are my passwords? Options:

1. On my computer: then it's not stateless, because at least some information is stored there.

2. On their servers, and I have to authenticate to get to it, which makes it easier to attack because the attacker does not need access to my computer.

There's a reason why many businesses prefer number 1. Having a dumbish terminal also leaves you with a dependency on the network. You might be in the admittedly large camp that always has network access or can't do much work without it anyway, though not everyone is, but one major problem even in that situation is that some things are sensitive to latency. There are tasks that aren't well handled by such a device.

Top Trump officials text secret Yemen airstrike plans to journo in Signal SNAFU

doublelayer Silver badge

Re: Oops

The US should not claim any credit for ending slavery, as they allowed it to continue for a very long time. This, however, was the opposite: someone trying to claim credit for Europe ending slavery when it really didn't. All the powerful European countries had slavery at the time. It took decades after US independence for them to abolish it on paper, a few more to stop trading people with countries that hadn't, and in most cases it took until the 1900s to end the "it's not slavery, honest, we just make them work and don't pay them and it's only in the colonies" approach. Not that the US can claim to have avoided that either, but the quantity was less because they didn't have as many colonies. If a US person claims they were the vanguard of ending slavery, they are making an incorrect and offensive statement. So is someone saying the same thing about the UK.

Raspberry Pi Power-over-Ethernet Injector zaps life into networks lacking spark

doublelayer Silver badge

Admittedly, most USB-PD adapters will only supply 3A at 5V. To get higher power, they increase the voltage. The Pi can't take that, so 15 W is all you get. 5V at 5A is just not a normal kind of power to get from a USB-C cable, but it's what the Pi needs. PoE would probably have the same problem, except they must be designing their PoE injector to convert the voltage, so it might work better.
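A rough sketch of why that is, using the standard USB-PD fixed voltage levels; the profile table and function names are my own illustration, and a real adapter only advertises whichever subset of these it supports:

```python
# Hedged sketch: standard USB-PD fixed-supply levels cap current at 3 A,
# except 20 V at 5 A (which needs an e-marked cable). Higher power comes
# from higher voltage, which the Pi's 5 V input can't use.
PD_FIXED_PROFILES = [  # (volts, max amps)
    (5, 3),
    (9, 3),
    (15, 3),
    (20, 5),  # 5 A only with an e-marked cable
]

def max_watts_at(volts: int) -> int:
    """Best power a standard PD source can deliver at a given voltage."""
    return max((v * a for v, a in PD_FIXED_PROFILES if v == volts), default=0)

print(max_watts_at(5))  # 15: all a spec-following charger offers at 5 V
```

The Pi wants 25 W at 5 V, which no standard fixed profile provides, hence the nonstandard 5 A supplies sold for it.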

Microsoft tastes the unexpected consequences of tariffs on time

doublelayer Silver badge

Re: Just out of interest..

It depends where we're drawing the line, but very occasionally. Generally, I think it's where the thing being demonstrated is in something very complex but is a small task, bypassing most of the complex stuff. For example, to show you how to get to and replace one component in something with a lot of them. You could do the same with a very long sequence of steps and diagrams showing each part of the process, but that could be harder to follow than watching someone perform the action. Someone intending to learn everything about the process would be better served by a manual, but someone trying to accomplish the one task might find the video more useful.

In most cases, the video is much less useful, even when it actually shows someone doing a task. When it's a video of someone reading instructions, it's often entirely useless even when the instructions are not.

doublelayer Silver badge

Re: Computers have screen recorders you know?

Bug: When a serial device is connected over USB and sends a certain string, data can be injected into protected memory. This probably allows command injection, but so far, all I've achieved is causing a kernel-level crash.

Video through screen recorder: The video shows the desktop, then cuts out because the kernel crashed. How useful is that?

Video through external camera: It shows me plug in a USB cable, then the BSOD appears. How useful is that? For example, if I'm wrong and it's not what the device sent but a driver is wrong, how would the video help you identify this?

String to be submitted through serial port: You can test it for yourself and see whether it works and what conditions apply.

This was just one case. Many security vulnerabilities either have no visible results for a video to demonstrate, or the video would just show the user doing obvious things already described in their report. There is a reason why most bug reports have reproduction steps in them. A video could be useful in cases where reproduction steps are complicated, where the case requires things the person checking doesn't have, or when the results are hard to explain. If those conditions apply, the reporter may provide a video for extra data or the reviewer may have a good reason to request one in a follow-up. If they don't, making the video wastes the time of the reporter and the reviewer.

doublelayer Silver badge

Re: It is unclear.. what problem the video was intended to fix

Having dealt with bug bounty reporting systems, I can say the flood of low-quality submissions is a major problem in several ways, but a video would almost never help.

There are always a lot of terrible submissions when the possibility of money is on the line. I would get things along the lines of "your web server is running Nginx, Nginx has this CVE", which often was a different version or we had already patched it, but even when it wasn't, you don't get credit for discovering something someone else discovered. Maybe if it was something we had failed to update in a while, but if the CVE came out yesterday and we haven't patched yet, that doesn't count. Or we'd get something like "If you enter a bunch of junk in a query parameter, the page looks weird" which would count as insufficient input validation in the client-side script on that page, but that isn't a security issue and a lot of people would argue that it's not a bug worth solving. Crucially, those people are the people I'd have to go to with it and I have actual security problems to shout about.

In none of these cases would making them submit a video help. Someone hoping for a payout who doesn't understand what counts will have no problem showing the output of curl -I to say "look, Nginx, just like I said". They will find it easy to record their screen to show how typing gibberish into their browser causes the UI to go wrong. Neither video provides any more information than a proper report would. I suggested trying to reduce this by adding a box for possible consequences of exploitation, but AI can easily generate a few generic paragraphs on that topic, so that probably won't work either. In the end, there's little that can be done other than have someone do the boring slog through the submissions to find whether there are legitimate ones in there, and if you're unwilling to do that, just take away the payment commitment; that will cut a lot of the quantity right away.

OTF, which backs Tor, Let's Encrypt and more, sues to save its funding from Trump cuts

doublelayer Silver badge

Re: Open Source has a USGov achilles heel

It's as open as it ever is: anyone who wants can access the source and modify it freely. The people who maintain it now will still be doing it. Open source is difficult to fund, and when it gets support indirectly from government action, that adds to the instability of funding, but it doesn't diminish the openness.

Now, in both the cases mentioned in the headline, these systems aren't just open source projects. Both use plenty of open source code, and those codebases are truly open, but both maintain large systems that they run themselves. Let's Encrypt is mostly the certificate authority itself, not the software used to implement it. Tor has a higher proportion of code in it, but the Tor-operated network is the major one, and forking the code but not operating in that network would be less useful. In both those cases, the resulting system's openness or lack thereof* is not the responsibility of those delivering funds to the project.

* Let's Encrypt is arguably less community-based than Tor is, but it is very difficult to compare them because they do completely different things. In addition, while there are several mostly compatible definitions for open source or synonymous terms, there is basically no definition for open with respect to a distributed system.

Google admits it deleted some customer data after 'technical issue'

doublelayer Silver badge

Re: Drama's /crime programmed

Unless they have evidence saying you weren't in the same place your phone says you were, in which case, instead of dismissing the location evidence as unreliable, they spin it as you deliberately leaving your phone behind to create an alibi. That would work against you. Not tracking the location at all might be more reliable, because a lack of evidence is harder to spin against you.

China bans compulsory facial recognition and its use in private spaces like hotel rooms

doublelayer Silver badge

Re: Duh, wut?

"The Party wants to prevent its citizens to go after each other as this leads to unrest. But the state has no restriction on any behavior, any behavior at all."

I don't think they've even got that far. The party has total control, yes, but I don't think this law will be enforced against companies. I think it's more policy as scenery. For example, China has a lot of strong worker rights and privacy legislation, and their environmental laws aren't bad on paper. They look quite progressive if you just look at what they allow. If you look at their pollution levels, how most people are working, and how private data is actually held there, they don't look so strong. The reason: they don't enforce most of the legislation they pass. If they're otherwise angry with someone, they can use those laws against them. Otherwise, just because the law says so doesn't mean anyone cares.

I don't know why they bother passing such laws. Originally, this was a propaganda technique, one that many communist countries engaged in. The Soviet Union, for example, put a lot of effort into proclaiming that they were big on equality and human rights, which let them attack the west for its many violations of both, except that they were no better and often worse with their own records. However, while China does use these as propaganda, I'm not sure anyone is paying enough attention to notice. Maybe there is another reason they do it, but I don't know what it is.

doublelayer Silver badge

Re: In other news, today's Times >

We're not decrying the Chinese government for bringing in this law. I, and at least some others here, are reacting to the fact that they violate the law they passed themselves all the time and will continue to do so. I, unlike some of those posts, also don't think they have any intention of enforcing this, even on private companies that don't work for their government.

Now take a guess whether I'm pleased when countries I support more often do the same thing. Do you think I'd praise the action? In case you're unaware, no, I don't and I want them to stop. I will do what little I can to get them to stop. Most of the time, they do not bother listening to me.

Asahi Linux loses another prominent dev as GPU guru calls it quits

doublelayer Silver badge

Re: Don't buy closed hardware in the first place

You have several justified criticisms of Framework. You go on to harm your point by faulting them for not doing something that no manufacturer could do, e.g. allowing you to use the same RAM when using a CPU that uses a different type. Now you are quibbling with definitions of "standard" when the original issue is that you faulted them for doing things with a battery which they have not done and are deliberately avoiding. You stated the situation incorrectly, I don't know why, but as I think you entirely understand how their batteries work, I'm not sure elucidating further is helping either of us.

doublelayer Silver badge

Re: Don't buy closed hardware in the first place

"Yes, I could start out on a lower end CPU in a given family and then upgrade to a better one in the future, in the same family...e.g. start out on something i3 class (because for some reason my budget might be tight) then move to an i7 in 6 months to a years time...rather than being stuck on the i3 until I can afford a whole new laptop...that's the difference socketing makes."

And that is exactly what Framework provides. From the context of the previous comments, the discussion was whether you have to replace the RAM if you do that. For your use case, no, you don't, because the same generation uses the same type. If, however, you replace the old i5 using DDR4 with this year's Ryzen, which uses DDR5, then you do have to change the RAM, but you would have to with sockets too.

"They aren't all standard. Whilst the variance isn't massive, it is there...and connectors are even worse. There are laptops that use standard size batteries, but they don't use standardised connectors. Connectors are where laptops tend to differ the most."

And Framework's laptop uses one battery with one connector, hence standard. They've made a few generations of boards, including Intel and AMD boards, and they use the same battery and the same battery connector. How did we get derailed from this? This thread is entirely about what Framework does to be upgradeable or not. Using the same battery connector is one of the advantages I see in it because it means I don't have to search to identify what connector I have and hope I can find a battery using it eight years from now; if Framework's still around, then I can get one from them, and if they're not, the connector design is at least available to others.

doublelayer Silver badge

Re: Don't buy closed hardware in the first place

Your critiques are falling into either obvious or obviously impossible categories. For example:

"Some of those items aren't a given either...a new board might require different RAM...": Yes, it might, based on what RAM type the CPU uses. Would socketed CPUs do anything different? No, they would not. Thus, that does nothing to argue that the mainboard approach is any less upgradable than the unavailable alternative with which you're comparing it.

"a battery with a different voltage": What part of standard design makes you think that's an option? The board is responsible for converting the voltage from the battery to whatever the CPU needs, which is likely the same voltage the other CPU needed, but just in case it isn't, that's what the other components are for. The battery will stay the same.

Also, the main reason I bought one isn't to upgrade the board inside it. The main reason I chose it is that I am often content to run the same old processor for a while, but some other parts are more likely to fail while I'm doing it. I've had the experience of looking for parts for an old machine and finding that they're not easily located or the replacement process involves dismantling everything. I wouldn't be surprised if, when the one I'm using eventually gets discarded, the mainboard is among the original parts whereas other consumables aren't.

I agree with you that these aren't the most cost-efficient models out there. I could have gotten the same performance for less money. I could have even gotten more of some specs for less money, although most of that "more" consists of things like touchscreens which I don't care about. I've dealt with computers, mostly others' but sometimes mine, that lost things like charging ports or keyboards that were too tightly integrated, and the computer this one replaced was replaced because things like that had happened to it, not because the CPU was too old for me. Since I expect to be replacing some parts and keeping this one for a while, I am gambling that the parts availability will allow this one to last longer than the cheaper alternative. We'll see if I am right. I am not a Framework apologist; they've done some things I'm not a fan of, but my experiences with the competition haven't been very good.

doublelayer Silver badge

Re: Don't buy closed hardware in the first place

Most of that is correct, but some of it is exaggerated. For example, when upgrading motherboards, the chassis is not all you keep, unless by chassis you mean literally everything except the processor and the board it's attached to. The battery, the screen, the keyboard, the storage, the wireless card, and all the other little bits in there stay. The processor isn't on a socket, but it's not as if the board it's on has a lot of the other replaceable components you'd normally find. You do have to buy new USB-C ports and an M.2 connector with the new CPU, but those things are not very expensive compared to the parts that stay. That is what other laptops cannot do.

One of the main reasons that is an advantage, though, is that by having standard form factors, it becomes possible for others to make a board. Some company could produce a board that, for example, could take a Raspberry Pi compute module. Expose some USB-C ports, connect to documented and open sockets*, and their board can be dropped into laptops with all the rest of the components built in. That is what isn't easy to do with any other laptop, because you'd have to reverse-engineer the internal connections and because that model might be designed completely differently next year, making all the effort wasted.

The ports are dongles, true, and they're not anything special. The important part is that they're dongles that are designed into the computer, making them fit as if they were integral. I chose only USB connectors, meaning that I have what looks like a normal computer and I will use dongles when I need other ports, but if I wanted an HDMI connector, I would have one that fit just as nicely. The problem with dongles is that they are separate, meaning that people leave them behind, they stick out and have to be disconnected when traveling, and all the small inconveniences that come along with that. The "dongle" approach also helps in that they're replaceable. I've known people with computers with broken USB and HDMI ports, but if that happened to a Framework, they'd simply be removed and replaced. I would have liked if they could build them differently, for example if I could put more than four of them into my computer because I often find that more ports is usually better. Still, your comparison seems to miss why that's been done.

* If someone did make a Raspberry Pi CM board, the largest challenge would probably be connecting it to the screen, because I don't think it is an HDMI one. You might need extra hardware to connect those up.

doublelayer Silver badge

Re: An Interesting Article in IEEE Spectrum not entirely irrelevant...

I have read that article, and while the points are basically correct, I find them rather simple. Basically, they point out that it isn't easy to identify or prove who is the absolute best at what they do, if there is such a big distinction in the first place, and that you can't guarantee to keep them. Therefore, you must not assume that you will always be able to find and attract some and must plan for more normal levels of productivity and skill. Most businesses already know this because they don't always find and hire people who are perfect at generating code or whatever else they're generating, as much as they might want to. Most aren't even trying, because all the very knowledgeable programmers I know are uninterested in going to work for the kind of company that builds the necessary but boring applications that are most common; there's no way for them to use their unusual knowledge in a place like that and that's what they enjoy.

Other than that, though, the article seems to run out of ideas. I would think that the most important element is how to improve individual productivity if it matters so much, but that's not mentioned. Team performance is great, but there are a lot of tasks in programming (and I'm sticking with programming not because it's the only job where this applies, but because it's what the article talked about and what I know) that are not easily parallelizable. The article is wrong about this, claiming that:

"Everyone uses the same software delivery pipeline. If it takes the slowest engineer at your company five hours to ship a single line of code, it’s going to take the fastest engineer at your company five hours to ship a single line of code. The time spent writing code is typically dwarfed by the time spent on every other part of the software development lifecycle."

No, if the slowest engineer takes that long, then they'll produce less code. Maybe they'll turn out to be great reviewers, testers, organizers, or maybe they're just dead weight. The fastest engineer will produce code faster, and as long as the simplest amount of separation has been put in place, they'll be able to do that without waiting on the slowest engineer because the two bits they're writing should be independent of or compatible with one another. You often can't speed up development of a small module by having three people write it, but you can speed up development of three modules by having one person write each of them as long as you can be pretty sure that the modules can be written without coordination. If they don't recognize that, they may be arguing against looking for 10x engineers for a bad reason, "they aren't useful", rather than the many correct ones, only some of which they mention.

Show top LLMs some code and they'll merrily add in the bugs they saw in training

doublelayer Silver badge

Re: Obvious result, bad methodology

"It takes real intelligence to decide that the code needs fixing. likely to succeed... or to fail. It takes intelligence to decide which one it did, so what's the point?"

What's the point of telling it to, or what's the point of my criticism? I did not try to answer the former, which I will handle below, but the point of my criticism is that their paper leaves open an option for an AI adherent to claim that they misstated it:

Researcher: We put in some code which has bugs in it, and the AI put in those bugs. AI is unreliable.

Adherent: I told it to fix the bugs, and it fixed them without having to be told what the bugs were. AI is great.

AI isn't great. Its success at patching a bug that's described right next to the bug doesn't prove that it can fix other bugs. Its generation of buggy code in a preexisting example doesn't demonstrate that it will generate buggy code when actually used. In both cases, prompting it to generate new code will prove how good it is: it will produce buggy code on its own, it will not fix bugs automatically, and it will not consistently fix them when told to. That demonstrates a problem that this paper does not. That makes other papers better than this one.

And what's the point in telling an LLM to fix bugs? There isn't a lot of point. It might work, it might not, and chances are that if you're intelligent enough to figure out which one it is, you could have fixed the bugs yourself. I suppose it could be a random thing to try if you're having one of those annoying "there's a bug in this but I can't see it" situations, but if you're writing code professionally or with a lot of experience, the chance is good that the bug you're not seeing won't be as simple as the obvious typos in this set, and thus the LLM is unlikely to find and fix it. But there is a situation where there is a very good reason to tell an LLM to fix bugs: you're trying to prove that the LLM is better than it is. Thus, I expect that those with a financial incentive to see it used will use plenty of cases of that, and I don't encourage giving them the setup for flawed defenses of flawed technology.

Dept of Defense engineer took home top-secret docs, booked a fishing trip to Mexico – then the FBI showed up

doublelayer Silver badge

Re: Maybe...

Because connecting an SD card is a very fast notice sent to the monitoring software which is probably configured to block that anyway. They occasionally need to print things, so that's still an allowed, though logged, method of getting files out of the computer. To me, the solution would seem to be either not letting paper go in or out of the place where the data is available or just not using printers and requiring everyone to read it on a screen, but there are probably people who are used to using paper copies and won't accept that being taken away, whereas nobody was able to just copy what they wanted onto USB or SD cards and so it's not hard to deny it to them.

M4 MacBook Air keeps ports modular, locks tight – still a headache to repair

doublelayer Silver badge

Re: You can lock and unlock software locks at will

Do "digital handcuffs" and "software locks" sound very different to you? Well, of course, since that's the point of your post, but why is that a point of your post? The important factor is the lock, which is how handcuffs work anyway, and there is definitely a lock here, a lock to which neither you nor I have the key, so why is this terminology debate relevant to you or anyone?

HP Inc settles printer toner lockout lawsuit with a promise to make firmware updates optional

doublelayer Silver badge

Which, from a trustworthy company, would look reasonable because I'm sure the stuff degrades eventually* and, if used, could cause problems. However, every printer company in existence seems to be untrustworthy, so I assume they stop working on any possible excuse that will send you to buy another cartridge.

* I know this is a problem with inkjet printers. I'm not sure how toner ages. I print so infrequently that I don't have a printer, though that means I have to occasionally commandeer the office printer.

doublelayer Silver badge

Re: HP

Remember that this is the person who invented time travel yesterday. I'd plan to wait a while for this, unless the plans can be sent backward to today using that ansible.

Political poker? Tariff hunger games? Trump creates havoc for PC industry

doublelayer Silver badge

Re: "President Trump's ongoing trade war ..."

While the people who pay the most are US residents, it does hurt others. Canada didn't put retaliatory tariffs in place just because they don't like the 51st state thing. A lot of Canadian businesses that used to sell to the US are going to suffer somewhat as a result of this. The problem is that Trump thinks three things which are all only partially true and probably better described as mostly false:

1. Punishing Canadian businesses will mean more US businesses will make what the Canadian ones used to make. It isn't that easy.

2. The pain is mostly felt by the Canadians. It's not. They get less demand, which isn't fun, but the US gets less supply and higher prices, which is generally less popular.

3. This tactic is likely to cause more problems for the Canadian businesses than US ones. It's the other way around because of retaliatory tariffs, meaning that China, and soon, EU countries are virtually guaranteed to buy from the Canadian business over the US one if they were otherwise identical pre-tariff.

Still, with actions going back and forth, I think multiparty conflict is an appropriate description.

CISA fires, now rehires and immediately benches security crew on full pay

doublelayer Silver badge

Re: Rehired. On full pay.

"Clearly not a cost savings/efficiency measure then."

They don't have a choice. The court said it was illegal to fire them that way, and if they want to comply, they have no choice but to continue to employ them at the same pay levels as before. They can choose whether they work or not, but they couldn't hire them back on less pay without violating that court order.

Of course, it is easy to question whether the original firings were intended as a cost-saving measure as claimed, and there are plenty of reasons suggesting that was not at all what they had in mind. Hiring them back was not what those people wanted to do though, so you can't really show the costs as a failure of the plan since it's exactly the opposite of what they had planned.

US tech jobs outlook clouded by DOGE cuts, Trump tariffs

doublelayer Silver badge

Re: Absolute Fairy Story

That is true, and if you could access the raw data from the government, it would be a much better source. However, most of that is protected quite correctly by privacy legislation. If you are hired for a job that was never advertised, the tax authorities will know, but someone looking at ads wouldn't know that it had happened. For example, they often release numbers of people who lost their jobs and are using unemployment programs, which is a reliable number because they know exactly who that is, but it doesn't give you enough information to know what the people were doing beforehand; they know that too but they don't release it. Therefore, estimating the number of tech people who lost their jobs recently is not very accurate when using public information even though there is a dataset that could tell you. I'm not sure having the more accurate data will produce enough benefits to justify the privacy implications, so I'm fine to live with less accurate research.