* Posts by doublelayer

10181 publicly visible posts • joined 22 Feb 2018

Self-driving car maker Musk's DOGE rocks up at self-driving car watchdog, cuts staff

doublelayer Silver badge

Re: Self-driving is a fallacy

I think you've mixed three points there. I agree that current tech is dangerous, which is why we don't allow it to be used except in testing and why Tesla has several safety lawsuits against it. However, just because you find the feature worthless doesn't mean everyone does. I would, in fact, value having a driver so that I could spend the time doing something else. There are plenty of situations where having the capability would be useful to many people, even if you aren't one of them. I think the combination of those two points means that you may have jumped to an answer about whether this will ever be possible which may be correct after all but is poorly reasoned. It might turn out that our tech never becomes good enough to drive sufficiently safely, but it won't be because nobody wants it. There's a very big gap between "it doesn't work" and "it can't work", and "I don't want it" doesn't prove where something falls between them.

Tech CEO: Four-day work week didn't hurt or help productivity

doublelayer Silver badge

Re: Experience a friend had with 4 day week

Depending on what you're doing with the extra hours, shortening the week may not really do anything. That's why it's so important to understand whether they went from 5 days/40 hours to 4/32 or 4/40.* If you find that you're having to work extra hours to achieve the results they set, and they shorten the week but don't adjust timelines, you might end up having longer days anyway. It'd be nice if having an extra day off improved my productivity so I could achieve the same thing in 80% of the working time, and that might actually happen for me from time to time, but I don't think I can count on it. It's not as simple as counting worker hours to determine how much I'll get done, but there are things I have to do which will only get done if there is time to do them in, and if I'm expected to do them while working fewer hours, something will have to break. In my experience, what breaks is that I end up working longer hours anyway, which I don't want and which isn't a good thing to do routinely.

In other four-day pilots I've seen, improvements in productivity have often been ascribed to reducing inefficiencies like unnecessary meetings, and those would certainly help. However, if they aren't willing to do that, or if you're in a job that already manages meetings well**, there won't be any such improvement to contribute. I also doubt that a company that temporarily cancelled its unnecessary meetings will stick with that policy, because the cancellation is a clear indication that they were starting unnecessary meetings before.

* Or whatever normal weekly hours they were using before. The question is whether they're reducing the hours or just rearranging them, and if they are reducing them, do they adjust any timelines accordingly?

** There are at least some jobs like that. I've worked at places that managed to keep the meeting count and duration low and make most of them relevant to the job. Certainly not all of them, but it happens. This is definitely more enjoyable, but it's hard to improve efficiency because there is less wasted time that is easily freed.

doublelayer Silver badge

Re: Office vs making stuff

"everyone who isn't making something physical with their hands isn't doing anything" is a hilariously daft and easily disproven argument.

That is correct, but not very relevant because that wasn't their argument. They were saying that remote working, and to a lesser extent flexible schedules, don't work better for everyone because some jobs can't be done that way at all. Physical jobs are an example of things that can't be done remotely, and some that involve everyone being present simultaneously don't really work with flexible schedules where different subsets of people are present from one day to another.

It would be easy to argue that this point from them was somewhat obvious, because in a discussion of whether remote working is a good idea, the caveat "unless remote working is impossible in this case, in which case this argument does not apply" could reasonably be assumed. Or you could simply acknowledge it and put a limit on the "everyone" you are referring to. Or you could disagree with it and explain how remote working and flexible schedules are more practical in those use cases than the statement suggests. However, you chose not to make any of those arguments. You chose to put words in their mouth which they did not say and to argue against those on that incorrect basis. You also have a contradiction in your argument, simultaneously accusing them of saying that people who don't do physical work are not doing anything and of looking down on those who do physical work. They said neither of those things.

Infosec experts fear China could retaliate against tariffs with a Typhoon attack

doublelayer Silver badge

Re: China only needs to wait a few days.

I think the concept makes some sense, but the level the US set on it was a lot higher than it needed to be. There are small deliveries where the funds earned from the tariff are lower than the cost to calculate and assess them, even if you value at zero the time spent slogging through the tariff code system to find the category the thing should fall in, since a government employee isn't the one doing it. Delivery companies have often dealt with this by adding a healthy multiplier and, in some cases, not telling you what fee they're going to charge until the package has almost arrived. Improving the tariff system so it can realistically be navigated by an individual consumer would be the best solution, but since I don't think that is going to happen, I accept a low exemption where it is skipped as a reasonable alternative.

I don't think the difficulty faced by retailers is particularly related to this, though. Those who are actually retailing tend to import in bulk, saving a lot more per item than they pay in tariffs, at least they did in most countries before the US decided to change that. Individuals buying items one at a time pay more for that one item. There are a lot of retailers who mark up items they import significantly in return for basically nothing except being located in the country, and that's not a great business model. A lot of the other packages being sent are for niche items that most retailers don't import because only a few people want to buy one, and retailers who try tend to effectively be drop shippers, importing one item from the manufacturer, having it shipped to them, applying a new label before sending to the customer, and charging a lot for the privilege. I don't care much about the survival of those companies because, from my perspective, they don't do anything.

Satellite phone tech coming to your mobe this year – but who pays for it?

doublelayer Silver badge

Re: What matters is how Starlink bills them

That is one important factor, and the other is how they bill the users. If I'm a T-Mobile customer, what do I get by paying them the extra amount every month? Do I get to send unlimited messages, or is there a cap? Do my messages go through immediately, or do I need to fiddle around for a long time to get them to send? Do received messages come back through the satellite reliably, or do I have to do strange things periodically in the hope that someone has sent a message to receive? Theoretically, some of that should be knowable by people who have tried the beta, but I don't know, and I expect that whatever it is now, it will change both for the better (quality and speed will probably improve) and for the worse (they will want lots of money for it). The two sides of how much someone pays and what service they get in return will determine how many people even consider buying it.

Musk's DOGE muzzled on X over tape storage baloney

doublelayer Silver badge

Re: Optical media stable?

I don't, and for two very good reasons. Tape drives are expensive. If you want to buy an LTO drive and use it to write like three tapes, you end up spending a lot more than buying hard drives or even SSDs to store the same information. If you want to buy one drive and read thirty tapes, the numbers start to compare better. So when you have massive amounts of data to store, like decades of government records, tape starts making sense again. The other problem is that it's not random access, so retrieving the data from the tape takes longer. Also not a problem for their use case, but not so useful for me.
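
To put rough numbers on that, here's a quick back-of-the-envelope sketch. Every price in it is an assumption for illustration (an LTO-9 drive around $4,500, cartridges around $90 for 18 TB raw, hard drives around $15/TB), not a quote:

import math

# Back-of-the-envelope model; every price is an assumption, not a quote.
DRIVE_COST = 4500        # one-off cost of an LTO-9 drive, USD (assumed)
TAPE_COST = 90           # per 18 TB cartridge, USD (assumed)
TAPE_CAPACITY_TB = 18
HDD_COST_PER_TB = 15     # hard drive cost per terabyte, USD (assumed)

def tape_cost_per_tb(total_tb):
    """Tape cost per TB, amortizing the drive over everything written."""
    cartridges = math.ceil(total_tb / TAPE_CAPACITY_TB)
    return (DRIVE_COST + cartridges * TAPE_COST) / total_tb

for tb in (54, 180, 540, 1800):  # 3, 10, 30, and 100 cartridges' worth
    print(f"{tb:>4} TB: tape ${tape_cost_per_tb(tb):6.2f}/TB vs HDD ${HDD_COST_PER_TB}/TB")

With those assumed numbers, tape doesn't beat hard drives until the drive cost is spread over dozens of cartridges, which is the point: three tapes is a terrible deal, three hundred is a great one.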

doublelayer Silver badge

Re: Minor correction

That only matters if I'm doing this inside the US, and I don't know who makes DVDs but I wouldn't want to bet on the US not having tariffs on those too.

doublelayer Silver badge

Re: Minor correction

I think I'd have a better chance getting a robotic loader for hard disks than for DVDs. I can't say I've had to do either, but I'm aware of the disk-loading libraries you refer to, whereas I'm unaware of anything that could store and load the 218 DVDs I need for nonredundant storage of a terabyte. I imagine that those five-disc CD changers will need a bit of an update to be adequate to this task. If we don't have either, then I would much rather load 43 hard drives into a bay, of which there are many hotswap-capable ones, than load 218 DVDs into a slot, wait for them to be written, and keep track of their order later. Neither would be fun, but one is faster, more parallel if I've got a few free bays, and doesn't require worrying much about ordering.

doublelayer Silver badge

Re: Minor correction

That is true, but I don't give them a lot of credit for that as a lot of systems can apply compression before sending the data to the media. I could compress something before writing it to optical or other media too, and although that would increase the CPU load because I'm making the original source of data do the compression rather than being able to outsource it to the drive, it generally wouldn't increase it enough to change the speed with which the data is written or read. It's also more adaptable to other types of data. If I have data that I can compress better than 2.5X, then I'll be able to store even more per cartridge, and if I'm storing data that's already been compressed, chances are that the LTO compression algorithm will not further compress it. I think it's most honest to compare like with like, which means raw capacities.
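
For instance, a minimal sketch of doing the compression host-side before the bytes ever reach the medium (the file names are made up):

import gzip
import shutil

# Compress on the host rather than relying on the drive's built-in
# compression. Already-compressed data (video, archives, encrypted
# blobs) will barely shrink; highly redundant data can beat 2.5:1.
with open("records.csv", "rb") as src:
    with gzip.open("records.csv.gz", "wb", compresslevel=6) as dst:
        shutil.copyfileobj(src, dst)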

doublelayer Silver badge

Re: Nothing to beat it on $/Gb basis.

It seems that, if that happens once a month, having a single hard drive with a hundred thousand scans on it might still have a lot of efficiency benefits compared to loading a hundred thousand tapes. If it's fewer scans, then you don't even need a reserved hard drive for it. How expensive must it be to either manually cycle the tapes through a drive or to buy and maintain robots to do it, and how much tape reading overhead must that add to the process of testing the new scanning step?

doublelayer Silver badge

"has no one had an LTO tape go bad before? Or had one broken / mangled by a bad drive?"

I've had lots of things fail. I've had optical media degrade to unreadability. I've had spinning disks fail without warning. I've had SSDs that were treated completely normally start throwing SMART errors when they were really not that old. None of those prove that the technology is bad. DOGE claimed that tape was outdated simply because it is tape, which it isn't, and did not specify what they replaced it with. Maybe if they had an actual technical argument, with details about why what they did was an improvement, it might convince us. Since they frequently claim credit for fixing things that weren't broken, or where they didn't actually change anything, we tend not to assume that they have a good argument when the only statement they made is incorrect.

"And without details, why is “a service run by disabled vets” important?"

We don't know whether or not it was important. What we do know is that the money they claimed to have saved wasn't actually saved, thus they were lying. The providers being disabled vets doesn't prove whether the service was good or not, and you can dismiss that as an emotional argument. However, it's an emotional argument that some agree with, which could be countered with information about why the service should be canceled, which DOGE chose not to provide. You can't prove that the service was bad any more than someone opposed to its being cut can prove that it was good, meaning we're back to the one thing that is provable: the money that wasn't actually saved.

doublelayer Silver badge

Re: Minor correction

That's why you have more than one of them. If you're comparing to DVDs, the price comparison lets you buy a lot more copies on hard drives than it would cost to store the lot on archival DVDs. Let's see exactly how much, storing exactly one terabyte. At $8 per archival DVD, a terabyte is about $1,744. I can buy 1 TB hard drives, in US dollars, for $40, probably lower if I searched for more than thirty seconds or was buying in bulk, and buying in bulk is exactly what I'm going to do because, for my $1,744, I can store 43 copies. Which do you think is more likely to survive? 43 redundant copies, of which only one needs to work, or 218 discs, the failure of any one of which is sufficient to break everything?
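
If you want to check the arithmetic, it's short. The $8 and $40 figures are the same ones as above; the failure probabilities at the end are invented purely to show the shape of the comparison:

import math

DVD_CAPACITY_GB = 4.7
DVD_PRICE = 8.0      # per archival DVD, as above
HDD_PRICE = 40.0     # per 1 TB hard drive, as above

dvds_needed = math.ceil(1024 / DVD_CAPACITY_GB)   # one copy of a terabyte
dvd_budget = dvds_needed * DVD_PRICE
hdd_copies = int(dvd_budget // HDD_PRICE)
print(f"{dvds_needed} DVDs = ${dvd_budget:.0f}, or {hdd_copies} hard drive copies for the same money")

# Treat each medium as failing independently with some probability.
p_dvd, p_hdd = 0.01, 0.05   # invented failure probabilities
print(f"P(DVD copy lost)  = {1 - (1 - p_dvd) ** dvds_needed:.3f}")
print(f"P(all HDDs lost)  = {p_hdd ** hdd_copies:.1e}")

Even granting the DVDs a much lower per-disc failure rate, one copy spread across 218 discs is fragile in a way that 43 independent copies simply are not.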

doublelayer Silver badge

Re: Minor correction

What optical format do you prefer? There's Archival Disc, which is indeed designed to last fifty years, and it holds a grand total of 1 TB per disc. Compared to the 18 TB per LTO-9 cartridge, or even the 6 TB per cartridge if they're still using LTO-7 from 2015, that's not a lot of bytes. It has one other problem, though: it's been discontinued and wasn't replaced with anything newer. Other optical media I'm aware of have significantly lower per-disc capacity. So what specific optical disc do you want to use, and how do you expect it will handle the efficiency problems that tape libraries are designed to address?

AI entrepreneur sent avatar to argue in court – and the judge shut it down fast

doublelayer Silver badge

Re: So does he own this SW or is he using it?

My understanding is that his service is another AI lawyer that will make up stuff, but he didn't get to use it because it doesn't work. The service he used instead was just making a video of a pre-written statement, which he generated with someone else's LLM.

Dev loudly complained about older colleague, who retired not long after

doublelayer Silver badge

Re: Differently-competent developer

Not unless your definition is very broad. Object orientation does not require exceptions. The two often come together, but there is no requirement for it. Nor do objects or exceptions require interpretation as the term is generally used: a separate process which must be present in order to execute the program.

What you appear to be going for, assuming I understand you, is that the runtime which directs program flow when an exception occurs counts as an interpreter. It isn't what most people call an interpreter, and I don't think it counts at all; it's completely static and can be implemented as a very basic stack unwinder. On that basis, anything with functions that maintain a call stack is interpreted, since that is also implemented by the runtime and is about as complex.
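
If it helps, here's a toy sketch of that "basic stack unwinder" idea. It leans on a host exception purely to transfer control; the point is that dispatch is a static walk from the innermost frame outward, with no interpretation of the program text involved:

class _Unwind(Exception):
    """Carries the raised payload while the stack unwinds."""
    def __init__(self, payload):
        self.payload = payload

def do_raise(payload):
    raise _Unwind(payload)

def do_try(body, catch_type, catch_fn):
    try:
        body()
    except _Unwind as u:
        if isinstance(u.payload, catch_type):
            catch_fn(u.payload)    # this frame's handler matches: stop here
        else:
            raise                  # no match: keep climbing to the next frame

# The inner frame doesn't handle KeyError, so the climb continues outward.
do_try(lambda: do_try(lambda: do_raise(KeyError("k")),
                      ValueError, lambda e: print("inner:", e)),
       KeyError, lambda e: print("outer:", e))    # prints: outer: 'k'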

doublelayer Silver badge

Re: I'm not

It shouldn't be the job of team members to identify when a colleague is holding everyone back. That should be the manager's job. If the manager is not doing that and the team is being held back, then maybe you take on that job anyway. And if it turns out that one person is repeatedly trying to do that to others, the suspicion from their colleagues and management will be real and justified. It generally isn't the job of an average programmer to coach their colleagues either, but we do it when needed because it makes the team run better.

From the sound of the story, the colleague who wasn't completing the task wouldn't have accepted being pointed in a different direction, even assuming that their problem was lack of knowledge rather than not caring or choosing to do something different. Of course, this is if we accept the facts alleged in the article, but if we don't, we could make up many different scenarios and point blame at almost anyone involved.

doublelayer Silver badge

Re: Poorly defined task

The version from the article is indeed vague, but if we assume the storyteller wasn't just making it up, the problem with its constraints was solvable in two hours. We can assume hostile management only as easily as we can assume anything else that the story does not require, such as the older colleague being unable to complete the project because their computer was shutting down every three minutes but they didn't report it. I think the vagueness probably comes from it being boring to try to include enough details in the article; we don't want to read about the structure of the file and whether they already had reader code for it and exactly how the value retrieved had to be reported.

I've also dealt with the moving goalposts problem in completing a task, either because the people who created the task couldn't be bothered to write down all the information they had or didn't know something important when they wrote it. It's caused most of the worst preventable problems I've dealt with.

doublelayer Silver badge

Re: Differently-competent developer

Most interpreted languages are hybrids that compile to a bytecode and then interpret that. Thus, it can be completely accurate to say that Perl or Python or various other languages didn't compile. It couldn't make it to the part where the first expression is actually executed. If it did start executing, but then failed out, that would be a different level of problem, and possibly more acceptable, although if that was expression number 3, probably not much better.
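
Python shows this directly: the whole file is compiled to bytecode before anything runs, so a syntax error is reported without a single expression executing:

import dis

try:
    compile("x = 1\nprint(x +)", "<demo>", "exec")   # broken second line
except SyntaxError as e:
    print("failed to compile, nothing ran:", e)

code = compile("x = 1\nprint(x + 2)", "<demo>", "exec")
dis.dis(code)    # inspect the bytecode the interpreter will run
exec(code)       # only now does anything execute; prints 3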

Signalgate solved? Report claims journalist’s phone number accidentally saved under name of Trump official

doublelayer Silver badge

Re: An authentic failure

"I've never used Signal, but if it doesn't provide any trust model for participants, then it's not very secure."

Signal's trust model is that they verify phone numbers. Therefore, if you send a message to a phone number, it should be encrypted for and readable by the person who controls that phone number alone. They don't verify that you entered the right phone number. Contact selection is up to you. This is the same trust model used in most other communications, where encryption is based on the identifier, whether that's an email address, username, or phone number, and selecting the identifiers they want to communicate with is the user's responsibility.
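
Schematically, and very much not Signal's actual protocol (the names, numbers, and stand-in cipher below are all made up), the model looks like this:

verified_keys = {                        # phone number -> key (made up)
    "+15551230001": b"journalist-key",
    "+15551230002": b"official-key",
}

def toy_encrypt(key, text):
    # Stand-in cipher for illustration only; never use for anything real.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(text.encode()))

def send(to_number, plaintext):
    key = verified_keys[to_number]       # the number is verified, so the
    return toy_encrypt(key, plaintext)   # message is encrypted to its owner

# If your contact list maps the wrong name to this number, the message is
# still perfectly encrypted -- to the wrong person.
ciphertext = send("+15551230001", "the plans")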

Phone number verification has its own weaknesses and you could argue that Signal has some gaps there, but that's not relevant in this case anyway.

doublelayer Silver badge

Re: An authentic failure

You can set a username in Signal. There are two things you can do with that:

1. You can associate your real name with your number, which makes it easier to link those than you probably want.

2. You can associate a pseudonym with your number, which may help if you don't want to give out your number, but not very much.

If you do neither of those, then you just appear as your number, and it is up to the user to associate that number with something to identify it, the way everything else with a phone number works. Signal does not try to compare the contact name with something remote to verify the number. That wouldn't really work anyway. For example, people sometimes write reminders in name fields for people they rarely contact, and Signal isn't able to determine whether they did that or got the name wrong. It's also just not part of what they were intending. This is not something Signal does or will fix for you.

doublelayer Silver badge

Re: A campaign staffer

The article says: "a campaign staffer who later took a job at the US National Security Council official".

Typo in the sentence aside, you'd have to know what exactly this person was doing at the NSC to know how plausible the story is, but they would be included on the basis of their current job, not what they did last year. One might also question whether the kind of things done by a campaign staffer are sufficient to be a good employee in national security areas, but since we have little information about what actions they took in either place or what other experience they might have had, we don't have enough data. Of course, that story could also be false, but it isn't as unrealistic as your comment suggests.

doublelayer Silver badge

Re: Bugs and Blackhats

It's not ideal, but in many cases, it can be enough just to make it clear that you accept vulnerability reports and provide a way for them to reach someone who will read them and has a clue what they're looking at. I've had to report vulns to companies before and never received a bounty of any size for them, but knowing that there is a way to send them to the security team, rather than sending something to customer support and hoping they can direct it from there, really helps with my confidence and, in my experience, with the chance that anything will happen.

I've also been on the other side of the bug bounty report system, and maybe having a lower payout will somewhat reduce the number of incorrect or junk submissions sent in. That is probably optimistic, but I can hope. In my experience, a lot of people who find vulnerabilities want to see them fixed more than they want a payment. Even the highest offers that exist, excepting cartoonish ones that are probably never going to be granted to anyone, are usually for bugs so bad that you could still theoretically make more by selling them to someone malicious. For example, the one Apple gave $100k for would probably have sold for a bit more than that as a zero-day. But a lot of people would be unwilling to sell it as such, and many who would be willing don't know how to find the people who buy such things. You also have to add a premium to the price to cover hiding your illegal actions. Hoping to outbid criminals is difficult for people who don't have an Apple-sized bank account, but you may not need to.

Trump doubles down, vows to make Chinese imports even more expensive for Americans

doublelayer Silver badge

Re: Importing more than exporting?

What you missed while defending it is that, when the population is low, you import less because you need less stuff. You might have a higher proportion of imports compared to your economy (26.5% for Canada and 13.1% for the US) or a higher value of imports per person, but your total imports are lower because your population is low. Meanwhile, when you have a lot of resources, a country that needs them and has a population ten times the size of yours is likely to import them from you, making your total exports high, especially as your small population won't need to buy and may not be able to process it all locally, for example Canadian petroleum getting processed in the US because the US already had refineries that worked well with it. A trade deficit, which is what these tariffs were based on, is total imports minus total exports, and does not take into account the relative size of populations or economies. Your statement is correct, but it doesn't work as a defense because you're comparing two different things. The statement only looks reasonable because you misunderstood it and used different metrics.

EU may target US tech giants in tariff response

doublelayer Silver badge

Re: Hmm

Retaliatory tariffs were a logical method in previous rounds of this. Essentially, the workflow goes like this:

1. Country A applies 20% tariffs on Country B because they think that will help them.

2. Country B applies 20% tariffs on Country A.

3. Country B to Country A: "You don't like these after all, do you? We don't like yours. Could we call the whole thing off?"

4. Country A, getting unhappy with their citizens' reactions to both sets of tariffs, does so.

5. Both tariffs are removed.

This is probably not going to work the same way this time. However, since it has worked before, it is being tried. It's also being tried because there aren't a lot of other methods available to respond to a move like this.

Amazon's Project Kuiper satellites now boarding the rocket to relevance

doublelayer Silver badge

Re: Amazon subsidised satellite comms: A loss leader?

You don't need to sell it as a loss leader to sell it to those sectors, and in fact doing so would be counterproductive. To make sense as a loss leader, there has to be a plan to make money on some other service to equal or exceed the amount you're not getting from selling the service directly. I fully expect that they'll sell to air and sea users, but they can afford to price that high, as competitors, both in LEO and geosync, have been doing for quite a while.

If you agree with the creator of this thread that a loss leader position makes sense, the closest thing I could come up with that fits both posts is that traffic to AWS could be charged at a low rate while traffic to other places on the internet could be charged at a high rate, thus giving people a reason to buy more AWS resources for things they intend to contact from remote sources. I'm not sure how much that counts as a loss leader, but it has a similar spirit, so we can go with it. It might work, but not for consumers, and it may not have as much of an effect on businesses either. If the base service price is not competitive with other companies, it probably won't sell unless everything already runs on AWS. A competitor with more predictable pricing, which can also reach AWS, will probably be preferred. So if they went with that pricing, they might find that they're doing nothing but subsidizing existing AWS users with relatively cheap connectivity and not making their money back. I therefore doubt that they are planning to do that. I think they'll price it to earn a profit on direct sales, both to remote industrial customers and to rural customers, especially in North America, who don't have good terrestrial options.

doublelayer Silver badge

Re: Amazon subsidised satellite comms: A loss leader?

I doubt it. Alexa could afford to be sold at a loss because they expected people to buy, at the least, their music service, which the device advertises whenever you ask it to play a specific song you don't have, since it can't do much else, and profit margins on that must be huge. They weren't taking that big a loss anyway, because the hardware is just a bunch of mics and a WiFi module in a box, and while processing all the recordings on AWS added to the cost, speech recognition in bulk isn't expensive.

Satellite is very expensive and there's not much they can do to earn the subsidy back. Even if you posited the most invasive Amazon advertising possible, they can only get so much out of that. They can't really make it dependent on any other service they have. It might have a little positive effect on Prime Video when rural customers who didn't have the bandwidth want to stream things, but that would equally help everyone else's streaming services. I wouldn't expect low prices, because they don't have a way to earn them back, and staking that much on the hope of some advertising is a gamble unlikely to work well.

Wikipedia's overlords bemoan AI bot bandwidth burden

doublelayer Silver badge

Re: If you can't beat `em. . .

Oh, what an excellent idea. We really must get on to Wikimedia and tell them to make that. It could look something like this. But you know what would be even better: how about they make a version that can be accessed like a web server, so you don't have to change your code at all to scrape it, and it can all be done on a local computer? To make downloading as cheap as possible, they could use mirrors and individuals. That could look something like this.

This is not like the copyright problems LLM creators also have. Wikipedia doesn't mind having bots access their content. They mind having so much bandwidth usage on their servers when any bot creator who put about five minutes into researching this could use either of the solutions I linked to. Those files include the images and video as well as the text, although if you just want the text, they both have already split that out for you.
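
The whole "scraper" can be a few lines, assuming the standard English Wikipedia dump filename (check dumps.wikimedia.org for the current files, and use a mirror rather than hammering Wikimedia directly):

import urllib.request

DUMP = ("https://dumps.wikimedia.org/enwiki/latest/"
        "enwiki-latest-pages-articles.xml.bz2")

def download(url, dest, chunk=1 << 20):
    # Stream the dump to disk a megabyte at a time.
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while True:
            block = resp.read(chunk)
            if not block:
                break
            out.write(block)

download(DUMP, "enwiki-latest-pages-articles.xml.bz2")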

doublelayer Silver badge

I don't know about that. LLM companies waste a ton of power on a lot of computers. They could afford to do small proof-of-work puzzles for their page views, especially as any challenge a site uses has to be quickly solvable by a low-end, several-years-old consumer machine. It would help, but I'm not sure it would help enough unless the puzzles were large enough that individual users suffered slow load times and high heat and power output from ordinary browsing. In turn, that would embolden people who lock access to their services behind apps, because if you use the app, at least you don't have to do a puzzle for every image on the Wikipedia article you loaded.
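
For reference, the usual shape of those puzzles is hashcash-style: find a counter whose hash falls below a target. The cost doubles with every extra difficulty bit, which is exactly the tension above: it has to stay trivial for one slow consumer machine while stinging a scraper making millions of requests. A minimal sketch:

import hashlib
from itertools import count

def solve(challenge, difficulty_bits):
    target = 1 << (256 - difficulty_bits)
    for counter in count():
        digest = hashlib.sha256(challenge + str(counter).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return counter           # proof found

def verify(challenge, counter, difficulty_bits):
    digest = hashlib.sha256(challenge + str(counter).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

token = solve(b"GET /wiki/Example", 20)          # ~a million hashes on average
assert verify(b"GET /wiki/Example", token, 20)   # verifying takes one hash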

doublelayer Silver badge

Both. Wikipedia doesn't use them because it is intentionally open to requests from all types of systems, and bot defense blocks plenty of humans who do something slightly unusual, which is why using a browser other than the big four, or the right browser at the wrong version, will get Cloudflare to complain. And yet Cloudflare's protection mechanisms are fragile and not that hard to bypass. They're annoying if you are a small bot creator who just wants to attack one site, so they sometimes work, but if your business is scraping the entire internet because you think you own it, you can bypass those protections relatively easily if you put your mind to it. I've bypassed Cloudflare's blocks with a bot before and it didn't take very long. They have multiple levels, my code probably wouldn't have done well against any but the one the site I wanted to access was set to, and that was a few years ago, so if I dug out the code and tried it, it probably wouldn't work anymore. But it worked then, and I could act in a very bot-like manner without being blocked.

Introducing Windows on arm. And by arm, we mean wrist

doublelayer Silver badge

Re: Windows on Arm has been around since the Surface RT

Windows Phone may have improved significantly, but that doesn't mean that it had what was needed to last in the market. By repeatedly changing things and breaking compatibility, they annoyed users and developers, most of whom would never return. That might also have improved their software, but without sufficient developers, they'd never get the third-party software that users wanted, and without enough users, they wouldn't turn that around. At some point, they had to give up on a failing product, even if that product had a better interface and design.

I'd be interested to know whether you think they had reason to expect it would have grown in popularity had they continued. I never used Windows Phone myself because of those factors. By the time I heard praise for its interface, Microsoft was fresh off abandoning users of Windows Phone 7 and 8, and I didn't want to buy something only for it to be abandoned again. Update lifespan had been annoying me about Android, and Windows Phone seemed worse. I probably missed quite a bit about it, but as a fan, did you see things that could have helped its longevity other than a nice UI?

China hits back at America with retaliatory tariffs, export controls on rare earth minerals

doublelayer Silver badge

Re: Some time ago I made a few posts about the USA being 'number 1'

Why do you assume that this article or this paper is painting US tariffs in a good light? While this article only uses the word "weaponize" for China's actions, do you see them praising the US's? How about the many other articles describing how damaging those tariffs are expected to be and demonstrating that the US is also using its tariffs as a weapon? I think you are mistaken in ascribing that opinion to The Register or to this author.

Not that you need to work hard to show how the US is weaponizing tariffs; the announcements by the people putting in the tariffs make it very clear that they are intended as offensive actions against people seen as competitors at best, enemies at worst. Nearly every discussion of relations between the US and another country involves a tariff threat to make that country do something desired. Admittedly, that's just one of about four things the US appears to think a tariff can do, and tariffs are not great tools for any of them, but they're really not hiding the intent to use them as a punishment. In fact, even when they consider a use of tariffs that's less often seen as a punishment, they still phrase it like one. One could make a pro-tariff speech about the self-sufficiency and local prosperity tariffs are intended to create, and many who support them have made that argument before because it sounds the most optimistic, but the US isn't making those statements central to its announcements, instead focusing on all the bad things bad countries have been doing to them, mostly without clarification. They have made it ridiculously obvious how they see tariffs and the people on whom they are placed. While some journalists will probably defend these or reshape the arguments to look more sympathetic, I have not seen any on The Register fail to note the statements made or the likely results.

On the issue of AI copyright, Blair Institute favors tech bros over Cool Britannia

doublelayer Silver badge

Re: "text and data mining"...

I've been in your position, and I do wish that companies would be more accepting when people offer to help improve their services. I would also wish to weaken copyright protection for some types of compilations. For example, when Oracle and Google were arguing over the copyright to function definitions in an API (rather than their implementations), I was firmly on the Google (they should not be copyrightable) side of that argument. It sounds like I would want what you copied to be freely available as well, but that would be a blanket policy, not one triggered by their action or inaction, and if my assumption is incorrect about what the data entails, if the publicly available sets contained more original work for instance, I would have the opposite opinion. The challenge is that I cannot bring myself to accept that their decision not to accept your help is severe enough to cancel copyright over the work. I derive this unwillingness from two mostly independent reasons.

The first one is that there are a lot of complications whenever an external person offers to help with things. I find your descriptions believable and I stand corrected that you had no commercial motives, but I've had experience with the alternative. I work in security, which means that my employers are frequently open to submission of security problem reports from the public, and I have reviewed these. They occasionally turn up useful things, which is why we solicit them and offer to pay people, but I've also had to deal with many people who offer things that are not security problems, either because they are attempting to get a payout or because they don't understand how systems work well enough to know what we could fix and what has a security-related outcome. This means that I've frequently had to decline submissions. And no, I'm not the guy who declines real security issues because I don't want to fix them; those companies don't have bug bounty emails in the first place. I have submitted problems to those people before, though. Any rule where declining an offer of help is sufficient evidence would require a lot of work to filter out unreasonable submissions, and I am not comfortable assuming that would happen.

The second is that I generally oppose restrictions on copyright which hinge on an action. Something should be covered or not based on simple rules, rather than attempting to control what the creator does with it at all times. Many such regulations have been suggested, usually by people who would really prefer that copyright be eliminated but can't find many to agree with them. If you don't update your website, that doesn't make your work less valuable. It's quite possible that if you did update it, your work would be more valuable, but copyright protects work because of its original value, not to mandate the creation of any additional value later. I may be annoyed by people squatting on things they have, but I don't think that qualifies me to punish those who do; it certainly doesn't for people who do that with physical or financial assets, so I don't see why it should if those happen to be copyrighted works.

doublelayer Silver badge

Re: "text and data mining"...

I generally sympathize with your view, but I can't entirely agree with it. A lot of people try to argue that their minor change was transformative and therefore should mean they get free, unlimited access to use, reproduce, and sell others' work without compensating those others. My ideal law would do quite a bit to restrict copyright holders from making unreasonable claims, but it should also protect those who created the work from unreasonable claims.

I find some of your claims unreasonable. For example, the claim that, if they don't make improvements to their site's layout, they should lose their copyright, with acceptable proof of that failure being that you offered to do the work for payment and they declined. Not changing the UI is not a good enough reason to cancel copyrights. There are plenty of reasons not to pay you for improvements, including not thinking you are charging a reasonable fee for the work, not thinking your UI improvements are a good idea, or thinking that your changes will be difficult to maintain. All of those are logical considerations. There are already protections in copyright case law specifically for search indexing, which means that your example may already be covered.

System builders say server prices set to spike as Trump plays customs cowboy

doublelayer Silver badge

No tariffs on data

Until someone tries to figure out how to do it, tariffs only apply to physical things, not software or data. Therefore, if I was a company considering putting a datacenter in the US, I'd consider instead putting it in Canada or Mexico. Especially with Canada, they have plentiful access to power and lots of high-bandwidth links to the US networks. That wouldn't work for things that are incredibly latency-sensitive, but a lot of the internet isn't. Avoiding those tariffs would make for a large incentive to invest that DC cash in a different country, quite the opposite of the intended result.

Raspberry Pi not affected by Trump tariffs yet while China-tied rivals feel the heat

doublelayer Silver badge

Except that "original" varies depending on your perspective. Really, it just means "manufacturer that isn't me". The manufacturer that builds something around the Pi was the original manufacturer of the bit with the interface and probably the power management. From the Pi's perspective, the other company is the OEM because they make the boards the Pi has to connect to. From that company's perspective, Raspberry Pi is the OEM because that's what they must interact with.

From my perspective, the place I'm most likely to call the OEM is the place that manufactured the thing last. I would categorize manufacturers before them in the chain as component manufacturers. So I think the version in the article is correct from the reader's perspective as well as RPi's, but your comment is correct from the other side.

Nvidia’s AI suite may get a whole lot pricier, thanks to Jensen’s GPU math mistake

doublelayer Silver badge

Re: Nvidia rules the market

A monopoly can be created or maintained by unethical means, e.g. by paying to damage the competition. A monopoly can be created or maintained by external control, e.g. one company is given the license to use wireless spectrum and nobody else gets it. But a monopoly can exist without either of those. A company could just produce the product that people want to buy so consistently that their competitors can't get the business. That's not illegal, it's not impossible, and it can still lead to monopoly. Having become one, the company concerned may act in any way it likes, whether that is embracing its dominant position to radically increase prices or still trying to improve and acting ethically. All that's required to be one is having almost all of a market, however that came to pass.

Nvidia is not a monopoly in GPUs of all kinds. People still select AMD for many use cases. However, they may have one for some cases if the software concerned requires CUDA. Markets can get very specific, and where you divide them depends a lot on your perspective. Consider a parallel: Windows isn't a monopoly, and has never been, in the sense that there has always been some other operating system you could install on your computer, but it is a monopoly in areas where no other operating system will run the software the buyer needs. It can act like a monopoly in that market while not acting like one for all OSes if it can create a separate strategy for the two submarkets. If a lot of AI people, or people doing that kind of work, would not consider an AMD chip, then Nvidia has the opportunity to act as a monopoly on them until they change their minds. I am not among that set, but from discussions with some who are, AMD is not usually considered because the software they intend to run may not work if CUDA isn't available.

LLM providers on the cusp of an 'extinction' phase as capex realities bite

doublelayer Silver badge

Maybe you have indeed found something the LLM can manage well enough and a way of using it that extracts that quality. It is an unpopular opinion with me because I've heard several people say it, and most of them have generated bad results: their version of "as good" means there's text in the file that looks like what they were supposed to do. I do not know how to generate consistently good results with LLMs, and neither do most of the people I've seen use them, but some of them are convinced the output probably is good, or have decided that if it isn't, they're going to pretend it is to get their work "done" faster. Without knowing you, I do not know whether you've allowed quality to slip and would justify this, or whether the doc is as well-written, and importantly as accurate, as you would have produced by hand.

doublelayer Silver badge

Re: Canary in a coal mine or the ramblings of a dead parrot ?

Read more comments from the same source and you'll see a lot more of them. It's something in the weights for whatever generates the associated word salad along with "methinks" and the username on whatever it replies to.

doublelayer Silver badge

Re: Conclusion probably right, comparison probably wrong

The question in the report was whether the market could sustain more than three players, which is what I was responding to. My conclusion was that it could, but that possibly, not long from now, demand might decline so much that even those shut down. However, I doubt that will happen either. There are enough people who use the models that have already been generated that I expect some people will continue this long after the bubble has popped. The big companies may no longer get massive server clusters to build another model, but for a lot less money, you can still build more and more wrappers around the models you have. They cost less to run than they do to build, and if you make them small enough, you can make the users run them and still charge them for doing so.

I've seen plenty of negative uses of LLMs. It can be tempting to think that maybe one day the investors will pull the plug and they'll all stop, but this is not very likely. As long as people are still willing to pay for that, someone will still charge them for running it. I don't expect people who are already running LLMs in their workloads to stop until those LLMs cause single, very big problems that are publicly linked to the use of that model, rather than the numerous but small problems we get today. I'd like the shortcut out of that, but I don't see it and I'm pretty sure loss of investor confidence won't be enough.

doublelayer Silver badge

Conclusion probably right, comparison probably wrong

The comparison they've made is that the cloud market could only sustain three players, and the LLM market is similarly costly. That comparison is faulty for the simple reason that there are a lot more than three cloud companies. They're the big three, not the only three of importance. There are other large, international ones. Those three might not worry too much about the likes of Oracle, IBM, and Alibaba as competitors, and they're probably right, but those three are quite large in themselves. There are a lot more medium-sized cloud providers out there, many of them also profitable. They probably won't rise up and overtake the big three any time soon, but they don't have to, meaning that if those companies stumble, competitors will be there to change the market.

So if their comparison is so flawed, does that mean that LLM companies will follow the same trend? I don't think so. A cloud provider can be a place with a couple of datacenters and some automation, probably mostly open source. They can scale slowly if need be, and they can relatively easily merge their business with another provider to grow their user base. None of those apply to LLMs: you can't build a tiny one and expect any users, you can't make small improvements to a model and expect it will get much better, and two small models added together won't make anything. In the end, their conclusion that a lot of companies might run out of funds seems reasonable.

Except that might not give them the same result. The prediction that models are just too expensive to build was undercut when DeepSeek announced their cheaper approach. It's not clear how honest they were about how cheap that really was, but it does mean smaller companies have a way of building a model of their own without raising OpenAI levels of investment. So, after all, I don't think the bubble will pop for the reason this report suggests. I think it will pop when investors eventually come to the realization that lots of companies can really make new models, but no matter how many new models they make, they never make one that doesn't make mistakes, and they can't find enough people to pay for the ones they already have.

Intel's latest CEO Lip Bu Tan: 'You deserve better'

doublelayer Silver badge

Not exactly. If you want to run an LLM of that size, that's probably going to be your best bet. However, that is normal RAM that the GPU can access. It's LPDDR5X, not the GDDR7 that you get with the latest NVIDIA cards. That's going to run better than a normal CPU because the GPU is able to access it, but it is still not the same.

Also, it isn't only Apple that can manage that. AMD has the same thing available.

doublelayer Silver badge

If running a 500B parameter model is what you want to do with your machine, the CPU you need is different from the CPUs anyone else needs, or maybe it's irrelevant. The most limiting factor in trying to do that on typical consumer kit is that you need lots of RAM. Even if you quantize it to FP4, that's 250 GB for the weights alone, not exactly the typical amount included in the average desktop. But Intel isn't making that, so let's ignore that elephant for the moment. GPUs are faster than CPUs at running these models, but I assume you're not considering them because of the RAM shortage, since, while I can get 128 GB of RAM into a desktop if I pay enough, getting even 128 GB of VRAM is not very likely. NPUs may eventually bridge this gap, and AMD's tend to be faster than Intel's right now.
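
The weights-only arithmetic is one line, and it ignores the KV cache and activations, which only make things worse:

def weights_gb(params_billions, bits_per_param):
    # parameters x bits each, converted to gigabytes of weights
    return params_billions * bits_per_param / 8

for bits in (16, 8, 4):
    print(f"500B params @ {bits}-bit: {weights_gb(500, bits):,.0f} GB")
# 16-bit: 1,000 GB; 8-bit: 500 GB; 4-bit (FP4): 250 GB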

That means that the CPU you want is exactly what you said you didn't want: low performance per core and lots of cores. Running an LLM benefits a lot from parallelism, meaning that the more raw compute the CPU can put out in total, the faster you'll get your results. There are a lot of tasks that don't get that, either because of intractable problems with the algorithms involved or because they weren't written that way; that's why a lot of things benefit from a smaller number of cores with much nicer single-thread benchmarks. LLMs are not among those uses. As increases in single-thread performance slow down, both Intel and AMD have been finding ways to increase their core counts. In many cases, you get higher benchmarks from Intel processors, with faster performance cores and more numerous efficiency cores, at the cost of much higher power and heat, whereas AMD often gets higher numbers for performance per watt but has been doing some power inflation of its own recently. In both cases, running such a large LLM is not something a lot of buyers are looking for, so most consumer chips are designed for what buyers are actually doing with their hardware.

RISC OS Open plots great escape from 32-bit purgatory

doublelayer Silver badge

I'm not sure it would have real PR benefit. Most of the people who are going to license their designs aren't going to care. As nice as it would be to see them fund another OS, I think it would be very hard to sell to their finance department. I'd be interested to see it happen, but I think it would take a lot of effort to make it functional and stable. That difficulty would probably also make it hard to sell as a PR exercise, because if it doesn't happen, ARM has to choose between adding more money and hoping that does it, or getting the less PR-friendly article about how ARM spent a lot on something that doesn't boot or falls over often.

Congress takes another swing at Uncle Sam's software licensing mess

doublelayer Silver badge

Of course the managers can track what software they bought licenses for. The problem is that there are thousands of managers who bought some thing, and they sometimes didn't have to because the manager above them already bought it but they didn't know that. Alternatively, they did need it, the managers above them didn't buy it, but it would have been more efficient had they bought it in bulk with several other managers they don't speak with, but trying to set that up would have taken a lot of time and paperwork whereas applying for budget approval for just the few licenses they needed was easy. Or they bought it, but they could have done just fine with something cheaper or free. Or they couldn't have managed with something open source because they have no developers, but if you add up all the people who are in that situation, it suddenly becomes profitable to have some developers employed centrally to fix or improve that software if only everyone could use it and send their suggestions.

It's not surprising that most of these things go unnoticed in general operation given how big the institution is. Unless you proactively search for inefficiencies like that, they'll just build up, tiny ones at a time, and it happens all the time, at least in my experience, in sufficiently large private companies. In fact, I've seen similar things at surprisingly small companies as well, for example the company where one person was buying new laptops for their reports while another person was having me erase laptops and put them in a closet because they had extras, in a company that had about fifteen employees. Fortunately, the person who was buying them asked me to recommend specifications, and I was able to take some out of the closet and suggest them.

doublelayer Silver badge

Re: DOGE is the one FINDING this!

LibreOffice would save even more if all you need Microsoft or Google for is a word processor or a spreadsheet. If you need other things that are included in those subscriptions, you'll need more open source software. Your confident assertion of simplicity shows the problem with it. A lot of improvement can be made by shifting to open source software, and it's likely that it exists for almost all the things the various departments and agencies need, but if you try to do it on a simplistic, single approach without considering what specifically the needs are, everyone is going to hate it and Microsoft will be back as soon as they can get rid of you.

I've seen attempted migrations to open source software be torpedoed, not by the software, which was perfectly able to do everything the users needed, but by adherents of open source who refused to see problems with it, even when those problems could have been worked around. If you need to replace Office 365, you need to ask people which parts they're using and which they aren't, and only then can you give them the list of things they need to do the same thing.

Malware in Lisp? Now you're just being cruel

doublelayer Silver badge

Re: More a failure of anti virus software, I feel ?

There was some malware that cleaned itself up after doing something, but it wasn't that common because it wasn't very useful. After all, most of the reasons to install malware somewhere benefited from staying there for a while, whether it's ransomware, a botnet member, looking for bank passwords, or messing with users. Even malware that just scanned once for something interesting would probably stay around to increase the chance of spreading. The most likely versions to self-delete would be things targeted at a specific victim that didn't need to spread, but even those were unlikely to do so because, if you were targeting someone specifically, you probably wanted to do plenty of things to them.

China cracks down on personal information collection. No, seriously

doublelayer Silver badge

Re: Just for a change

Oh, those are illegal in the US too. Just as they're illegal in the EU. Now they're illegal in China, except they were already illegal in China years ago. Like the other countries mentioned, the problem was that enforcement was very spotty. A few abusers got investigated and a few of them paid tiny fines, and everyone else continued to act with impunity. China's said several times that they were going to get real serious about this, trust us. They've been saying that about a bunch of other things. They've yet to get very serious about any of them.

Microsoft walking away from datacenter leases (probably) isn't a sign the AI bubble is bursting

doublelayer Silver badge

Re: "gigawatts" capacity ?

Not necessarily. You may want to be able to change the output frequently. For example, I might specify a team of five programmers who I intend to migrate between projects. Their productivity may differ from project to project, but that will depend on the specific people and tasks involved. If instead I try to assess the price for completion of the project, I have to guess what function of X programmers and Y days will arrive at that result, and how difficult and expensive each point along that curve will be. I can know what the intended output is without being able to exactly compute what inputs are needed to obtain it, which is the unfortunate reality for managers everywhere.

With DC capacity, it makes a lot of sense. If you know the power and heat capacity of your build, but it turns out that nobody wants to train LLMs by the time you've built it, then you know how much normal compute you can pack in there. You may now lose money on the build because you had planned on people renting more GPUs, and maybe you even bought GPUs you can't successfully rent out, but you have the number you need to figure out what else you can put in there and still make money from it. You also know how much you can scale most of the time. If NVIDIA produces a new GPU that's twice as efficient, you can probably about double the amount of compute coming from that DC. There can eventually be other limiting factors, you'd have to calculate overhead from other boxes, and maybe you don't want to do that anyway, but power is so often the limiting factor that it is the number from which you calculate everything else.
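
As a sketch of why the power number is the planning number, every figure below being invented for illustration:

POWER_BUDGET_MW = 100      # what the build supports (assumed)
PUE = 1.3                  # cooling/distribution overhead (assumed)
KW_PER_GPU_BOX = 10.0      # one 8-GPU training server, rough (assumed)
KW_PER_PLAIN_SERVER = 0.8  # ordinary compute node (assumed)

it_load_kw = POWER_BUDGET_MW * 1000 / PUE
print(f"GPU boxes supportable: {it_load_kw / KW_PER_GPU_BOX:,.0f}")
print(f"or plain servers:      {it_load_kw / KW_PER_PLAIN_SERVER:,.0f}")
# Halve the watts per box with a new GPU generation and the same
# building supports roughly twice the compute.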

doublelayer Silver badge

Re: "gigawatts" capacity ?

"And comparing my 2013 CPU to the leading CPUs today, it IS clear that POWER does the work. My 3220 2-core is 53 Watts, I see faster CPUs sucking 90 130 190 Watts."

There are those, but there were those in 2013 as well. There are CPUs that are much faster while maintaining the same TDP. For example, the Intel Core Ultra 9 285T has an official TDP of 35 W and benchmarks 179% above yours on single-core performance, which isn't far from your 3x requirement. It benchmarks much higher on multicore, but that's because it has 24 cores instead of your 2, so that may not count. You could argue that this is because it can burst up to 112 W, which is fair, but it won't stay there very long as heat builds up, and the performance wasn't measured only at that peak.

But if you're particular about that point, let's instead use the AMD Ryzen AI 9 HX PRO 370. Not quite as fast, benchmarking only 147% higher, and it has only 12 cores. However, the default TDP is 28 W and it can only go to 54 W, so no real bursting, and it is probably more efficient than your Pentium most of the time. Single-core performance may not have stuck to doubling every eighteen months, but it was obvious it wasn't going to, and performance per watt, if you can use multiple cores, has maintained that rate so far.
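
Working the perf-per-watt out from those quoted figures (taking the 53 W, 2-core chip to be the Pentium G3220, which matches the specs in the quote, and setting it to 1.0):

chips = {
    # name: (relative single-core performance, TDP in watts)
    "Pentium G3220 (2013)":  (1.00, 53),
    "Core Ultra 9 285T":     (2.79, 35),   # "179% above"
    "Ryzen AI 9 HX PRO 370": (2.47, 28),   # "147% higher"
}
base_perf, base_tdp = chips["Pentium G3220 (2013)"]
for name, (perf, tdp) in chips.items():
    ratio = (perf / tdp) / (base_perf / base_tdp)
    print(f"{name:23} {perf:.2f}x single-core, {ratio:.1f}x perf/W")
# Both modern parts land around 4x the single-core performance per watt,
# before counting their extra cores.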

EU OS drafts a locked-down Linux blueprint for Eurocrats

doublelayer Silver badge

Re: Monocultures are vulnerable

"The default behaviour is that data is securely contained to the application and authorised users, exporting to standalone files that can be emailed around or copied to a USB stick needs to be specifically implemented by the system, instead of being something you have to try and block on a case by case basis."

And that means it now has to be managed on an app-by-app basis, which is much more complex and annoying than managing it on a system basis. IT departments already have ways to manage things like forbidding USB access. You don't need each application to do it for you, and if you do it at that level, the result is annoyance when you want to use two applications on the same file, because application number one won't let you access the file outside of it.
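
For example, on Linux the whole-system version of "no USB sticks" is a two-line config, with no per-app support needed (illustrative only; a managed fleet would push this, or the Windows equivalent, through its management tooling):

# /etc/modprobe.d/block-usb-storage.conf
# Refuse to load the USB mass storage driver machine-wide.
blacklist usb-storage
install usb-storage /bin/false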