* Posts by doublelayer

9378 publicly visible posts • joined 22 Feb 2018

How a good business deal made us underestimate BASIC

doublelayer Silver badge

Re: pot?

The problem is that we've seen what happens when you blur away the existence of files. You get the smartphone. It's your data, you know. No need to wonder about where it is, Google's on it, you don't have access to the raw data anyway, the apps know where to find it.

And that is wonderful, assuming that you only ever want to do the thing the app gives you controls for. If you want to have some of those files on portable storage because your phone doesn't have enough internal memory, but the app didn't give you that button? Sorry, you can't do it. Do you want to back up your data and restore it to a different device? You could copy all the files and transfer them over, if we let you see them, but since you can't, you only have a one-size-fits-all backup method which might work, or might simply restore the app, helpfully back to factory settings except it remembers that you asked for dark mode. Maybe the app is doing something wrong? On a computer, I could edit a configuration file to make it do something differently. But even though the same configuration file exists on the phone, I'm denied access to it, so I'm out of luck. This is the kind of thing that breaks when you try to pretend that files are unimportant.

A lot of the time, you don't have to look at or worry about internal files, but you really do want access to yours, and access to the internal ones is a useful option to have around in case you find a reason to want it. It's also necessary if you're going to write good, structured code: partially because it makes organizing something complex more feasible, and partially because portability and reproducibility are critical here. Programming involves multiple layers of understanding how the machine works at one level, ripping that away and learning what's under it, then putting the layers back until you reach the level of abstraction relevant to your task. Hiding something as simple as where your data is under a layer of vagueness is not helpful.

doublelayer Silver badge

Re: English is one of the easiest human languages

Could you explain what you wrote, though? In addition to all the valid criticisms of English, it seems to be missing the point that English wasn't used for the sake of simplicity, whether it has it or not. It was used because the people who wrote the language had English as a main language and were planning to have the software used by people who had English as a main language. Soviet programming languages used Russian as their main language, not because Russian is simple (anyone who tries to spell things will find that it's not, not that English should get many orthography points either), but because everyone involved spoke Russian, either because they were Russian or because they had to learn it in order to get to the places where the computers were available.

When dealing with a language like BASIC, language choice is actually quite unimportant. The small number of keywords means that you could use a language you don't speak, because memorizing the keywords won't take very long. This is, for example, why all the words commonly used in music are Italian: you don't need to learn to speak Italian, which would take years of study, to memorize maybe fifty of them. Modern languages, with lots more library functions, do need to pick a language that the users understand, and English was chosen, again not for simplicity's sake, but because most of the people using and making them already spoke it to some extent and no other language met those criteria.

doublelayer Silver badge

Re: I did comment the other day....

"someone could write a modern (internet-capable, web-capable) BASIC for the Pi."

Of course someone could. You can make a language with any set of capabilities you want. The question is what you put in it and how you make certain things convenient and powerful. If it isn't at least one of those, the language is doomed, and some languages get close enough to both that they'll get used unless yours does too. BASIC's structure doesn't lend itself well to scaling the number of options, because you either end up creating hundreds of builtin commands or you try to implement all your functionality as libraries and force them to work within the small subset of commands you're willing to include. That means that, if you want to be able to send TCP and UDP and ICMP packets, you're going to need to plan out a lot of syntax. That's not the biggest problem, though. The biggest problem is that you likely also want to receive those, which means you're most likely to build a network stack in a different language that can handle this better and just abstract it out for the BASIC program. That works for students, but it's very different from the BASIC that let you drop to assembly when you wanted.

"The complexity of modern systems is what makes them insecure and takes away their resilience."

I disagree. That sometimes happens, but the simplicity of early systems often made them insecure too. Complexity means that interprocess communication mechanisms can have vulnerabilities that people didn't catch before they were put into production, but before we had those mechanisms, inter-process communication was unlimited, meaning that those vulnerabilities couldn't exist because everyone was allowed to do everything. Before that, inter-process communication was impossible because running multiple processes was impossible. So did that make us more secure? No, it did not, because it meant that any running process could do anything it wanted to any piece of data. For example, consider the ubiquitous DOS malware that copied itself onto boot sectors and autoexec files, which could not be prevented because there were no per-file security mechanisms and no monitoring while the process ran. Complexity didn't just add more features, although it definitely did that. It also built the structures necessary to have security in a world where not every line is trusted.

doublelayer Silver badge

Re: pot?

This is not the first time that Liam has argued that files are harmful. When he wrote his eulogy for Optane, he was unhappy with its death because he hoped it would kill the file system by uniting disk and RAM, a similar approach to what he's said here. I didn't understand it then and I don't understand it now.

Files are useful. They let you store things much more conveniently than an internal structure that something else writes to storage for you. You can find a file, copy it, duplicate it, transfer it onto disks or over networks; it's the ultimate form of portability. We only managed without them because resource limitations were so restrictive, but as soon as machines made storing separate data files feasible, people did so. Once you have separate data files, having code as data proves useful in many ways. Related to this article, early students of programming get two additional benefits from files. The first is that they can review and edit their work, even if their first attempt crashed the computer. They don't need to write down what they're typing so they can see what it was later. The second is that they can have multi-file programs, or in other words they can be given a piece of code which they don't have to type in from scratch. Want to know how it did what it did? Open that file. Want to use it? Just type an import statement as supported by your language of choice, BASIC or otherwise. No need to renumber something or manually type in that code.

doublelayer Silver badge

Re: GOTO

There is a difference between writing goto in a language where you have better tools and writing it when that is your only tool. There are lots of things in assembly that we insulate ourselves from because they caused preventable problems. When Dijkstra wrote his famous criticism, he was talking about people who had better tools, and he wanted them to use those. He was not insisting that someone write assembly without any jumps.

Boffins carve up C so code can be converted to Rust

doublelayer Silver badge

Re: Rust, Agile...and then some questions....

Rust was designed to fix a couple of classes of problems, not everything in software development. No language can fix or even slightly help with problems 1 or 2. Technical debt is a very broad topic, but if you count memory safety issues as technical debt, then maybe it would help you there. These problems are almost entirely unrelated to one another.

doublelayer Silver badge

Re: A.I.

You probably could build an AI* that could do that conversion, but you have two challenges that have prevented it before and will again:

1. This has been around for a long time: readability and maintainability. I have a program here which was originally written in assembly, I think Motorola 68K assembly. That form wasn't very useful, and the program wasn't attached to a lot of interfaces, so it wasn't hard to translate to C. So now, I don't need to try to virtualize something to make it run. However, I still don't really understand what's in here and can't modify it. I have C code that produces the same results the original did, but modification means drilling down to understand what every part was for, and that is tricky because I had only machine code to start with. Every time automatic translation happens, readability tends to get lost. Even if the translation is perfect, it can be difficult to impossible to make modifications later.

2. Accuracy. This is a bigger problem with modern AIs, but it's always been a challenge. Most software does not have complete, mathematically proven test cases, where if the tests all pass then we're absolutely certain that nothing is wrong in the code. Mostly, we have basic test cases, where if any of them fail then something is very wrong with the program. Often, we don't even have many of those. This means that when you're translating code, you can't know for sure whether it's even faithful to what the original program would have done unless you run them in parallel forever and set off an alarm if they ever disagree. Modern AI is very likely to add bugs when it translates. Admittedly, most of those are likely to be obvious, ranging from output that simply doesn't compile to the program crashing visibly, but when it looks like neither of those has happened, that does not prove that the attempt was successful.
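The "run them in parallel and alarm on disagreement" idea is essentially differential testing. A minimal sketch of what that looks like (the two implementations here are hypothetical stand-ins, not real translated code):

```python
import random

def original_impl(x: int) -> int:
    """Stand-in for the original program's behaviour."""
    return x * x + 1

def translated_impl(x: int) -> int:
    """Stand-in for the translated version; a faithful
    translation always agrees with the original."""
    return x * x + 1

def differential_test(trials: int = 10_000) -> list[int]:
    """Feed both implementations the same random inputs and
    collect any inputs where their outputs diverge."""
    disagreements = []
    for _ in range(trials):
        x = random.randint(-10**6, 10**6)
        if original_impl(x) != translated_impl(x):
            disagreements.append(x)
    return disagreements

if __name__ == "__main__":
    print(f"{len(differential_test())} disagreements found")
```

Of course, an empty disagreement list only means no divergence was found on the inputs tried, which is exactly the limitation described above: without exhaustive or proven coverage, it is evidence, not proof.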

* AI: depending on what you intend this to mean. We have explicit rules-based programming language translators around, but they generally don't qualify. I assume you mean a coding-focused LLM, which often makes mistakes in its output, especially when the job is big. Translating a large program (and it would have to be large, or you could let a human translate it) with insufficient documentation is very likely to be too large a task for such models to complete correctly.

doublelayer Silver badge

Re: Down vote?

I realize we're getting close to having more people who didn't downvote trying to explain them than there were actually downvotes, but I do have a few comments that could explain part of it:

"If people are using pointer arithmetic, it's for a reason. And Rust won't be able to do a damn thing differently or any more securely about that."

This is an unproven assertion. The reason might be something that could be done better or equally well another way, or it might truly require pointer arithmetic. It all depends on why it is in there semantically. That level of generalization is as wrong as saying that everything in C should just be rewritten in Rust because it would all work.

"You cannot write safe code that interacts with devices, computer buses or the majority of hardware directly. It's just not possible.": It depends what the hardware is, but that's not the case for all of it. Plenty of devices can be used without having to share raw memory as they imply.

"Until we change all the BIOS/UEFI, PCIe etc. specifications to allow arbitrary device detection and communication at bus speed using only, say, JSON or XML or similar languages, you're still going to need to write those bits in C (as they have to make assumptions about what the underlying bits mean and directly access memory)"

Not all the time. Direct memory access is necessary for only some hardware, and it's perfectly possible to handle nonstandard protocols in a memory-safe way. It's not as elegant as having everything in a standard form, but the quote implies that some classes of simplification are impossible when they are not only possible but have sometimes been deliberately implemented that way. Even when you do have to do something in a memory-unsafe way, C is not the only option for doing it, though it's often the most convenient.

doublelayer Silver badge

Re: Why?

I think the theory is that you convert to Rust, and then all the new modifications must be done in Rust so it will be more difficult for people to introduce that kind of bug because Rust will block them. Or you could add those checks into a compiler and refuse to merge something unless your C compiler accepted it. Either way, while there are cases where it would be theoretically useful, I doubt there will be all that much adoption. Among other things, I wonder how much readability is lost with conversion from limited C to Rust, because every other conversion I've done has involved some weird additional syntax which my brain takes longer to parse.

Amazon worker – struck and shot in New Orleans terror attack – initially denied time off

doublelayer Silver badge

Re: Wrong type of leave…

I'd buy it. Many places I've worked have worker management products with extremely clear forms consisting of about thirty boxes with short labels and no instructions about what is supposed to go in any of them. Some of them need things that the software should have been able to pull from my existing records but didn't. Others need codes pulled from a document that's somewhere in SharePoint or Google Drive, depending on whether this is a Microsoft or Google shop. Some actually contain data about my request. Even more are just there and can be safely left blank. It always seems easy to fill in such a form incorrectly, especially the first time I've done it. That doesn't mean Amazon wouldn't or didn't mistreat the employee here. Something plausible is also a good cover story if you were at fault.

Honey co-founder's Pie Adblock called out for copying GPL'd uBlock Origin files

doublelayer Silver badge

I don't think they will face any penalties. While theoretically they could be sued for not releasing or acknowledging the code, and the developers of the code concerned could try to get punitive damages assessed, that is a lot of work that they will probably not go to. As for why they didn't just release the code as they were supposed to in the first place, I imagine it's a combination of laziness and not wanting people to point out that they didn't write a substantial part of their plugin. As with many such cases, they didn't manage to hide that, and they probably would have gotten less negative attention if they had just published the code from the start. While they are in the wrong, I wouldn't expect anything interesting to happen as a result.

With 10 months of support remaining, Windows 10 still dominates

doublelayer Silver badge

Re: Enshittification...

Except, username pun aside, most of the common complaints are not about extracting money. The taskbar isn't as configurable now? We could speculate why they did that, but it's pretty clear that it wasn't about money. It would only qualify if they later charged you to get the previous behavior back. Either they somehow thought that the 11 behavior is better, or they thought it would take more work to maintain the old one and they could get away with not doing that work. The same goes for any of the AI features that nobody wants, except those few where you have a finite amount of credits. The ones that are merely included are there because someone at Microsoft has convinced themselves that people want them. In my case, and likely for everyone else who has posted so far, they are wrong, but they aren't extracting money for those features.

I think the hardware requirements are a much more convincing example. I make this distinction, not to defend Microsoft, but to keep a useful term around. We don't need a special word for things getting worse. People have complained about that forever, and people have complained about Windows getting worse with each release at least since the switch from 3.11 to 95, probably since version 1 to 2, though I haven't seen it. A special term makes more sense when referring to the specific process that Cory Doctorow described when he coined the term. If the term is used for any change we don't like, then we'll have to use long phrases to convey the more specific meaning, and that weakens the entire point the term exists for. You can't call attention to something well if it constantly has to be explained every time it's relevant to a discussion.

doublelayer Silver badge

Re: Error bars

On a similar topic, I wonder whether it might be premature to start analyzing exactly how Windows 11 lost market share when the change was one month and a 0.82% difference. I have a theory: business machines stopped being used as much in December because some people were away, but personal machines didn't see as big a drop. If business machines are slightly more likely to be running Windows 11, and I think that is likely*, that could easily produce this result. If it falls consistently for the next three months, then we can consider it more reliable.

* Businesses replace their computers more regularly, meaning that some of them are likely to have gotten ones with Windows 11 preinstalled. Some of them may have a Windows 10 image they apply to every machine, but those are also the types of companies that would eventually have built a Windows 11 image they could use. Home users tend to have longer and more erratic replacement cycles, and their users are often more willing to refuse updates over and over. I know several people who consistently refuse to install updates, even small security-only ones, and some of them are willing to press the "remind me later" button every day even if there is a "don't remind me" button they could have pressed if they looked for twenty more seconds. Therefore, I think it is very likely that a higher proportion of home Windows machines are running Windows 10 than business machines.
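The mix-shift theory is easy to sanity-check with a quick calculation. All the figures below are made up purely for illustration (the real business/home split and per-group Windows 11 shares are unknown):

```python
def overall_share(business_weight: float,
                  business_w11: float = 0.45,
                  home_w11: float = 0.30) -> float:
    """Blended Windows 11 share, given the fraction of measured
    usage that comes from business machines. The per-group
    shares are assumed numbers, not measurements."""
    home_weight = 1.0 - business_weight
    return business_weight * business_w11 + home_weight * home_w11

# Typical month: half the measured usage comes from business machines.
normal = overall_share(business_weight=0.50)

# December: offices quieter over the holidays, so the business
# fraction of measured usage drops slightly.
december = overall_share(business_weight=0.45)

print(f"typical month: {normal:.2%}")             # 37.50%
print(f"december:      {december:.2%}")           # 36.75%
print(f"apparent drop: {normal - december:.2%}")  # 0.75%
```

A five-point shift in whose machines are being measured produces a drop of the same order as the reported 0.82%, without a single machine changing operating system.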

doublelayer Silver badge

Re: Enshittification...

I don't hate the term, nor have I expressed criticism of it before. However, I do find that it is being used more generally than it was originally defined. It is intended to refer to specific, long-lasting policy decisions, so when people use it for "I think it got worse", it tends to lose some meaning.

If it got worse because it is a way to extract more money from you or someone else (advertisers, mostly) because that is the way the people running the business have decided to change the business model, that is what "enshittification" was supposed to mean. If it got worse because they have a new UI designer who is not good at their job, that is something else. Even if that UI designer hired more of them and they're all messing up what once made total sense, and they had some programmer friends who don't understand why you don't catch and ignore exceptions unless you have a specific reason, that's not what the term referred to.

Can 4G feature phones rise again on the back of QVGA, thin clients, and remote browsers?

doublelayer Silver badge

Re: Linux

Invent a way to do that and we'll tell you what's wrong with it. So far, the attempts to do that tend to require you to have a runtime installed, and that was the thing that required a different version for your operating system. For instance, you have something that'll run in a JVM, which means you already have to have one of those, you have JVM overhead*, and you may have JVM version compatibility issues. Or you use the cross-platform .NET versions, which are actually pretty good nowadays, but for a while the non-Windows versions were very restricted or broken and you still get version compatibility issues.

If you have a great idea for how we fix that, it could be really successful. It's not a simple problem to solve though.

doublelayer Silver badge

Re: DumbOS?

That approach is not very likely because sending pixel changes, even compressed, takes a lot more bandwidth than sending text and letting the local application turn that into pixels. This is being sold in countries where people can't afford even low-end smartphones, though you can get some very cheap ones these days. I don't think the users who are buying these have unlimited data plans. In many ways, the best thing this cloud browsing service might do for its users is find ways of radically compressing everything it sends so that users can browse without exhausting their allotment in a couple of hours. Downscaling or eliminating images could save plenty, and elimination is the only one of those you can do from the phone itself.

The pixel approach is also tricky in many ways. For example, if there is an email address on a website and I activate it, the browser can open my mail client and enter the address because that's already present in text form. For a browser that's only operating on images, there is no access to the text, so it would have to do a round trip just to confirm that what I clicked on was clickable, and you'd have to implement protocols for giving the phone the necessary information to act on the thing that was clicked.
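A back-of-envelope comparison shows the size of the bandwidth gap. The numbers here are illustrative assumptions (QVGA at 16-bit colour, an optimistic 10:1 compression ratio, a screenful of plain text), not measurements of any real service:

```python
# One QVGA screenful sent as pixels.
WIDTH, HEIGHT = 320, 240       # QVGA resolution
BYTES_PER_PIXEL = 2            # assumed 16-bit colour

raw_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL   # uncompressed frame: 153600 bytes
compressed_frame = raw_frame // 10             # assumed optimistic 10:1 compression

# The same screenful as text: say 40 columns x 20 rows of characters.
text_screen = 40 * 20                          # ~800 bytes

print(f"raw frame:        {raw_frame} bytes")
print(f"compressed frame: {compressed_frame} bytes")
print(f"text screen:      {text_screen} bytes")
print(f"compressed pixels vs text: {compressed_frame / text_screen:.0f}x")
```

Even with generous compression, a single frame of pixels costs roughly an order of magnitude more than the text it depicts, and every scroll or repaint sends more frames while the text only needs sending once.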

doublelayer Silver badge

I'm not sure what he thinks this needs. Many sites someone is going to try to load are going to use inefficient scripts, which I gather is what the cloud servers are for: they execute that and only send the document model to the phone. However, that introduces an extra point of latency, especially if that script is frequently changing the model. I assume this isn't sending a picture of the browser because that would increase bandwidth needs significantly. Operating those servers is going to add cost as well, and I'm not sure where the money to cover that is coming from.

In comparison, nearly all these devices already have a browser on them. Usually, it's a Gecko-based KaiOS browser or a Chromium version on something running a stripped Android build. Those browsers aren't great, and the small amounts of RAM available to the browser and the low-end processor aren't helping, but it can browse any site you want with no server in the middle. Somehow, his service has to compete with that, and I'm not sure how easy it is to sell a faster browsing experience.

Boffins ponder paltry brain data rate of 10 bits per second

doublelayer Silver badge

Re: The amount of nonsense in this paper is remarkable

"If we go back to the LLM example, know any 20W rated computers (hence consuming 20Wh at full throttle) that will be competitive?"

I can find you a computer that can run in 20 W and run an LLM, sufficiently quantized, at an output token rate that matches slow human speech. The output you get will be only slightly more crap than most other LLMs. I don't think that's any better a comparison to a brain, though. I've known people who can make up plausible fiction on the spur of the moment and keep track of everything, and those who can't string the truth together well if they're asked to do it quickly.

I think whatever we choose, there's not going to be a good parallel to what a brain does. Our limited understanding of how the brain does what it does is part of the problem, and assumptions about it are how we get papers like this, which oversimplify things to the point that we can already see how improbable they are without trying to duplicate their observations.

On the topic of typing gibberish, I'm surprised that you could type it as quickly as a language you speak. I certainly could not, though I type quickly in multiple languages. I somehow doubt that their research participants had been trained the way you were, though, so I still think factors other than letter distribution explain most of the speed differences.

doublelayer Silver badge

Re: The amount of nonsense in this paper is remarkable

I think you're mostly correct, but a few points:

"Someone will have to tell these people that keyboards have a certain lay out for a reason and that subsequently it's not physically possible to type random sequences as fast as sequences in the targeted language of the layout."

I don't think this is why that difference exists. Most keyboard layouts have had a little thought put into them, yet Dvorak, for all its adherents, doesn't actually speed typing up compared to the less thought-out QWERTY, where avoiding lever jams was at least part of the consideration. I think the main difference is muscle memory. When I want to type "the", I can call on a long history of typing those letters in that order without needing any conscious thought to do it correctly every time. If I have to type "ymx", I don't have that memory.

Also, it is common to look at the screen while typing, if only by reflex, to verify that you have typed things correctly. That's easy when you're copying words, because you can remember the words for a short time. Let's see, I was typing about types of apples, the words say "green-skinned apples often used in", sounds good. If I have to copy random letters from something I'm reading, looking back and forth from one window to another is challenging because the random letters have not been memorized, and I'm either comparing them manually, which is slow, or flicking back and forth by reflex without being able to use that reflex properly, which is going to cause delays as the brain attempts to do something it cannot do. If we created a keyboard layout by dumping all the letters into the space for letter keys at random, but then had people use only that layout for years, I think they would be similarly fast while typing languages they know and similarly slow when typing random letters.

"The estimate for the human brain is that it uses 20W a day or on average less than 1W an hour."

Watts are already energy over time; you can't divide them like that. The brain tends to consume about 1.7 megajoules of energy in a day, meaning its average power consumption is 20 W whether you measure it per day or per second. Which means that you can in fact get a computer that operates with less power, and quite easily. It will be a lot more power-efficient than the brain when solving mathematical problems, but most of the tasks we want to perform daily can't be done at all, or anywhere near as efficiently, on such a machine.
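The arithmetic, spelled out using the ~1.7 MJ/day figure:

```python
# Power is energy per unit time: watts = joules / second.
SECONDS_PER_DAY = 24 * 60 * 60        # 86400 seconds

daily_energy_joules = 1.7e6           # ~1.7 MJ consumed by the brain per day
average_power_watts = daily_energy_joules / SECONDS_PER_DAY

print(f"{average_power_watts:.1f} W")  # ~19.7 W, continuously;
                                       # not "20 W a day" or "1 W an hour"
```

A watt is already a rate, so "20 W a day" is a category error in the same way "60 mph per hour" would be.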

Will 2025 be the year satellite-to-smartphone services truly take off?

doublelayer Silver badge

Re: Multiple financial risks.

Ground stations shouldn't be as big a concern in many countries, but that assumes that Starlink or any other company in a similar position wanted to help circumvent censorship. So far, they don't unless Musk personally disapproves of the censorship. The problem is that, while they can't easily block the service, they can block local payment to the service. They would have to provide the service for free or only to those who set up their own payments from another country, and it's more profitable to accept more payments and comply with whatever rules the host government has.

doublelayer Silver badge

Re: Multiple financial risks.

Brazil banned Starlink because Starlink refused to enforce a government order on X. While it's true that the first order they had was against X, their ban on Starlink was directly linked to Starlink's action, not just because they were owned by the same person. Many other countries might be less concerned about that, though.

doublelayer Silver badge

Re: Battery

That may be, and it's my theory as well, but they keep saying that they're going to offer data services. That could be Musk-style exaggeration which the mobile companies are willing to amplify, or they might think they can actually do it. Either way, I expect that the experience, should they try, is going to be worse than prospective users imagine. Either the speed will be much lower and people will find that the data isn't very useful, or the speed will somehow be high enough to use and you will quickly run up usage charges while draining your phone's battery. It's up to them to prove both of these wrong.

doublelayer Silver badge

Re: Battery

I think this will be a nonissue for long enough that it eventually becomes a big problem. Right now, when the only thing you can do is send a message, people won't use it very often. They will use it to either send information to emergency services, where they are really not concerned about battery life, or something important enough to fiddle with things to get it out, where they are similarly not too worried. That means nobody will complain about the battery life problem for a while until the day, assuming it comes sometime, when they are able to use data connections from these satellites. When people start to browse a website, get online navigation instructions, or watch a video over these services, I expect battery life is going to start being one of their largest concerns, but it will be a while before anyone lets someone do that. There's a chance that technical limitations will prevent that working well at all, and there's a chance that they choose to charge so much for bandwidth that few ever try it, so we might not even get there.

Eight things that should not have happened last year, but did

doublelayer Silver badge

Re: To be fair...

No, they are not suggesting that. They are suggesting that it does not, by design, send the collected data to Microsoft. Surely you can see how those things are not the same. At one level, you might just want to know whether negative things can happen as a result of Recall: absolutely yes, probably lots of them. At another level, you might want to know which bad things are most likely and how they could happen, and if you jump to the conclusion that the bad thing is that the data is sent to Microsoft's servers for their perusal, you've named the wrong bad thing.

doublelayer Silver badge

Re: We all know that 'AI' is not really artificial intelligence.

All that quote suggests is that Steven Pinker doesn't understand a lot of things. I'll give him the "obey orders" law; understanding that one takes more knowledge about how imperfect these systems are. The not-harming-humans law, though, should be eminently simple to understand, at least in the case of giving an order that could accidentally harm humans and preferring that the AI think it through and reject the command rather than carrying it out.

The job title makes this a lot worse, though. A cognitive psychologist should understand that humans often do things outside of purely rational logic that they would prefer not to do. All sorts of negative behavior happens for irrational or indirect reasons, which makes it harder to stop than if it were a simple mistake. Many of the rules we attach to AI, whether real software or fictional or hypothetical versions, are designed to prevent that type of emergent behavior from causing big problems.

Screwed by the cloud: Hardware vendors looking for that raison d'refresh

doublelayer Silver badge

I don't think they're suggesting that the newer machine is more secure by design, but that if you're running eight servers, then theoretically that's eight opportunities to mess up the security of one of them, so you should definitely be trying to have as few servers as possible, and wouldn't you know we happen to have one server that can be like a bunch of older servers in one box. Of course, if you are running a lot of different servers, then you're very likely deploying the same image to most or all of them, meaning you have exactly as many opportunities to do it wrong. At that level, you also probably deploy a bunch of VMs on that server and weird manual configuration is going to be on specific VMs, meaning the number and size of physical servers is irrelevant to security at that level. Customizing each server manually is more likely to happen if you only have a few of them anyway, in which case buying an expensive newer model may be overkill because you aren't using as much performance as you just bought.

Trump's tariff threats could bump PC prices by almost half

doublelayer Silver badge

Re: But surely

That individual would be required to pay them. That might be precalculated by the vendor and added to your cost. It might be precalculated by the platform. It might be added by customs and charged to you at entry. Or they all might forget to do it and do nothing. You get to figure out which one is chosen each time you buy something.

One third of adults can't delete device data

doublelayer Silver badge

"I'm not saying laws are/were a problem, but forcing every manufacturer into using just one connector makes life really easy for the Filth and criminals."

I'm not sure how you intend those two parts not to be contradictory or how you expect the second part not to be wrong. The box full of adapters was evidently available, so the multitude of connectors wasn't stopping anyone. Of course what was helping at the time is that you probably needed lots of different software to sync call history and contacts from each model. Nowadays, when there are many fewer operating systems in use, you just need exploits for those that are. The number of cables is not the problem. Nor is device covertness, because there's a really simple solution to that. Not a vape, but a small phone charger battery. It's got a USB port and plenty of space inside. Just connect a USB-C or Lightning cable to it, press the button, and it can run the exploit. There's your covertness sorted. The problem has nothing to do with cables or laws about cables.

doublelayer Silver badge

Right, the laws are the problem. It's not like, without those laws, there were a total of two connections, so all they needed was a Lightning-to-USB adapter to stick on the end of their cable. It's not like you could use a Bluetooth keyboard to do that if you had an unlocked device, which would also be compatible with everything. A standard of USB-C is the villain in this case. Physical possession of a device with the locks undone gives you a lot of power, although, like computers, there are things you can't do without authenticating again and you can add more things to that. Both Android and IOS support locking apps behind a passcode, either the normal phone one or a custom one.

doublelayer Silver badge

Re: Surely

The biggest risk is that people might pick up some unerased devices in the hopes that some of them have a pin of 123456 or unlock without a pin at all. Then they have a low-cost way to steal things. It's not hard to press the erase button, and for most people, there is little reason for concern that there will be recoverable data left after doing that. The flash may not be entirely erased, but most phones encrypt by default and the part of the flash where the key is has been erased. For most opportunistic attackers, getting low-level access to read the flash is more work than they're willing to do anyway.

There are exceptions for phones or drives that don't work where physical destruction is advisable. Otherwise, most people have no need to do that.
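The key-erasure point above can be illustrated with a toy sketch. This is not how real phones encrypt storage (they use hardware-backed AES, not XOR), and every name here is made up for illustration; it only shows the principle that overwriting the small key-storage region makes the rest of the flash unreadable:

```python
# Illustrative sketch only: data encrypted under a random key becomes
# unreadable once the key is destroyed, which is why erasing the tiny
# key-storage region is enough to "wipe" an encrypted phone.
import secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy cipher: XOR against a repeating key (real devices use AES).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

user_data = b"contacts, photos, messages"
key = secrets.token_bytes(32)            # lives in a small, easily-erased area
flash_contents = xor_stream(user_data, key)

# Factory reset: overwrite only the key, leaving the bulk flash untouched.
key = secrets.token_bytes(32)            # old key is gone for good

# An attacker with low-level flash access still only sees ciphertext.
recovered = xor_stream(flash_contents, key)
```

Even with full raw access to `flash_contents`, `recovered` bears no relation to the original data once the key has been replaced, which is why pressing the erase button is usually sufficient on a working, encrypted device.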

Christmas 1984: The last hurrah for 8-bit home computers

doublelayer Silver badge

Re: Glory Days

"apart from ARM (which is technically Japanese-owned and is listed on the Nasdaq) no major technology has ever been able to establish itself in the US, e.g. Nokia, Vodafone etc."

If you're talking about the average consumer, Nokia was relatively successful in the US. In 2005, it was the third highest seller of mobile phones in that market. It didn't continue to succeed there for the same reason that it didn't continue its even more dominant position in the European market: it didn't match its earlier smartphones to the feature set of competing smartphones for a few years, then Microsoft bought it and prevented it from doing so for a few more years, and by the time it tried to fix things, it was too late. Curiously, it seems Nokia (the brand, actually HMD) was fifth by market share in the US for mobile phones in 2022. I'm not sure if they stayed there, but that's surprisingly high for a company that makes relatively unimpressive hardware compared to their earlier status.

doublelayer Silver badge

Re: 1984: The last hurrah for 8-bit home computers

"the NES lasted until 1995 outside Japan and 2003 in Japan."

Wasn't this the point, that this didn't really count because it was only a game machine? The processor inside it doesn't matter to the article's point if it's a single-purpose device that only ran games and you couldn't write other things for it. So even if you could continue to buy them, did any significant 8-bit home computers get designed after 1984?

Bonus discussion topic: does this matter? Why is a 16-bit home computer different from, or worse than, an 8-bit one?

OpenAI plans to ring in the New Year with a for-profit push

doublelayer Silver badge

Re: Don't worry about it dear investors...

I think any investors they have will be fine with this*. It lets them get lots more money, and they need lots more money so that Sam Altman can get lots of it, sorry I mean they can quickly build the most amazing models possible before someone beats them to it. That's necessary if OpenAI is ever going to be profitable, and if you thought they weren't going to, you wouldn't be investing. At this stage, investors are still banking heavily on outspending and outlasting their competitors, so the need for more infusions is clear. Any investor who thinks they've stagnated is not too interested in still being an investor either, and the more for-profit they are, the easier it would be to find someone dumber to take the stake off you.

* All investors except, of course, for Elon Musk if you can call him one now, but that's not really about a for-profit move even though it's probably the lawsuit he's got the best argument for.

Just how deep is Nvidia's CUDA moat really?

doublelayer Silver badge

Re: "but you also entirely eliminate any ability for a manufacturer to improve anything."

I don't understand how that's supposed to happen. When you say "If they added a feature then the patch would recognise it", what is "the patch"? They can change their implementation in microcode or something like it, thereby improving their implementation of the allowed instructions. There is only so much you can do to improve that. They wouldn't be allowed to have new instructions because those new instructions aren't part of the standard, so if someone compiled for those new instructions, it would only work on that manufacturer's parts. What improvements, other than microcode, do you think they'd be allowed to do with a mandated standard? While we'd see some improvements with microcode, ruthless focus on microcode alone is how we got several serious CPU security bugs, because Intel and AMD wanted more performance out of the same ISA every year and kept looking for more and more hacks to get it until one of those hacks had some nasty side-effects. Even then, they were still adding some things to the ISA which sped up some classes of program, which wouldn't be allowed if they have to stick to a standard.

Nvidia's moat is not a good thing, but attempts to eliminate it by requiring Nvidia and AMD to build the same chip aren't helping. There are a few negative effects. It seems we're disagreeing about the ability to improve on a mandatory standard, which I assume is due to a miscommunication somewhere. We also have the problem that, if any improvement one makes is immediately available to its competitor, there is less benefit to the manufacturer for improving, so why bother to do so? I think the best way to fight against Nvidia's moat is what AMD is currently doing: making it easier to port things from CUDA to run on theirs as well. They're lagging, not because Nvidia did anything wrong, but because Nvidia built something that AMD didn't bother with and people wanted that thing. AMD can catch up but they have to go to some effort to do it. I don't think we are helping the users by trying to fill in that moat for AMD. There are some times where that kind of regulation is justified, but I don't think we're there and I think attempting it nonetheless will be unwelcome to users and less effective than you expect.

Fining Big Tech isn't working. Make them give away illegally trained LLMs as public domain

doublelayer Silver badge

Re: Small thinking

I'm not trying to trip it up, nor am I hung up on AGI. My problem is that, if the result of the process is something I can't check, which your example is, then I have come to expect that there may be many errors. Your clarification makes me more concerned about this. You fully expect that GPT will make these mistakes unless I prompt it with something I'm supposed to get at random online, because somehow it can't do that itself. I'm guessing this works for you because, if you're building dozens of these, you probably have all the knowledge you need to look over and confirm these designs before you pay someone to make it. For someone who doesn't, it seems like a recipe for wasting money on flawed designs, of which disconnected data lines is an example, not something that has actually happened because I'm not actually building designs.

My basis for this has been actual testing of the thing when I had software tasks to do. If someone who didn't know how to write software used GPT to write some software for them, they would not get something usable. I conclude that from several attempts to get it to write software, which I know enough about to judge its output. Its output on small tasks often contains simple yet unpredictable errors, and that's when I split it into little pieces for it which a nontechnical person will not do. I cannot give GPT to someone unfamiliar with software and expect them to get good results. I have as much familiarity with PCB design as they have with writing software, so I cannot expect good results from that. It may be that simple PCB design is simpler, with more boilerplate or fewer options for messing up, than the simple software which GPT reliably fails to write properly. In that case, maybe it is actually more capable for you than it is for me.

doublelayer Silver badge

Re: Small thinking

This sounds great. I certainly do not have the knowledge, and I've had many ideas for something where the hardware design should be pretty simple. If I could have someone make me the board designs where usually the most complicated thing is getting buttons in the right place, that would be quite helpful.

Sadly, I have seen GPT's software output and I do know how to write software. Therefore, if I ask it to produce a PCB design which I don't know enough to check, I expect that it will come back from the contract manufacturer looking perfect: exactly the shape I expected, with the USB port, processor, and RAM all in place, except the data pins won't be connected so I can't flash software to it, and a couple of the LEDs will be installed without the components they need. I can get working code out of GPT. If it's a really simple problem, the code often comes out correct the first time. If it's anything else, I have to do it again and again, checking manually each time, so much that it would have been faster to do it myself, and the only reason it's correct at the end is that I have the skills to check its output manually. That's why I don't trust it for anything I can't do myself and why I don't use it for things I can.

doublelayer Silver badge

Re: Imaginary Property does not exist

Free means several things:

1. Liberty (see freedom).

2. Lacking something (a field free of trees).

3. Not costing any money.

4. Not limited physically (let it fall free).

5. Generous or frequent.

6. Probably other things, plus all the things it means that aren't adjectives.

Stop claiming it means one thing. We all know what "free software" means. When we talk about it, we will use both words, and many of us will say "free/libre software" just in case it gets mistaken for software that is free(3). If someone says something like the thing you quoted, they and you both clearly understand that they were using the completely valid and more widely understood definition number 3. If they meant the free you're trying to claim as the only option, they would have said "released as free software". Your grammatical argument that free(3) must always expand to "free of charge" is wrong; "free" fits just fine in the slot where they'd express any other price ("for £20").

"There are no rights - only restrictions."

Wrong again. There are rights. If you have copyright over some code, you have a right to copyright, defended by your nation's copyright laws. "Right" has another similar meaning, the ability to do something, granted by a contract. Thus, if I sign a contract stating that I will allow you to come live in my house, you can call it perfectly grammatically, and the contract may also call it, the right to live in my house. You can negotiate for permission to do something, and while it has a slightly different meaning than the governmental right, that's a valid word for it.

Grammatical defenses of things tend to be unconvincing, even to someone who mostly probably agrees with you.

doublelayer Silver badge

Re: Potentially another option - poison the well?

"I only contribute to projects that proudly brand themselves as free software, as I know that the project will always be working for freedom."

Ah, one of those who think that "free software" and "open source" aren't often used as synonymous terms by people who understand that they are slightly different, but not so different that ideological battles are necessary. You're not wrong that some things are described as open source when they're not, but two things apply to those: A) all the things you list are contrary to definitions of open source, including the one you linked to and B) something calling itself "free software" has done all of the same things.

You go on to decide that, somehow, GitHub is a way to tell these from one another. I don't know why, you just do. Of course, everyone who has spent a little while actually working with this stuff understands that, to figure out how well it fits with the ideals of free software or open source, you actually have to check the details because what website they host their repos on tells you nothing. A good start is often to open the license file and see what it says. Is it a standard license you've already read? Is it one of those with an additional clause attached? Is it one of their own design? Does the one of their own design veer into restrictions on being allowed to use, modify, or distribute? Does it have a section that could best be summarized starting with "except for someone"? That doesn't answer all of it, but it tells you a lot of useful things, often good enough to toss out many of the fakers.

doublelayer Silver badge

Re: Imaginary Property does not exist

I agree. It is definitely the wrong punishment. I merely wanted to indicate that it's incorrect to say that "ending copyright restrictions by releasing a work into the public domain without other restrictions is in no way a punishment, no matter the situation". It is, but it's not strong enough and it causes unnecessary and preventable harms that we shouldn't allow.

doublelayer Silver badge

If your concern is privacy rather than copyright, your solution is somehow even worse. It would be much easier to get at any of your private data those models hold if those models were open source. I could start chopping out any protections that would prevent it from happily spitting out the data. I can run queries much faster until patterns start to show up. Asking for something that contains your private data to be made public because you're unhappy that it contains your private data is backwards.

It's the same as if I said that Google should be penalized for their data extraction by taking their records on everyone they can find and making those public. It's a real penalty, as that's the data they use to claim to advertisers that they can target ads. "See, we have every site this person ever visited and everything they ever searched for, so surely we know what they will be willing to pay for." They won't get that revenue without having exclusive access to that. However, my problem is that Google has it, and that is not solved by making sure that everyone else, from Facebook to governments to criminals has it too. It is a purely negative change.

I understand your reasons for suggesting it, but those are flawed as well. You see something that was costly to create, and you don't like getting rid of it. I can be like that as well. If something is working, even if I don't need it, I don't want to toss it into the trash. I try to find someone else who will use it, even if most of them tell me that they've got something better, because it's not dead yet. That doesn't work when the item concerned has a flaw. If, for example, I had a Samsung Galaxy Note 7 with one of the self-igniting batteries, I might be unhappy that I now have to dispose of a device that was expensive to make and so far hasn't done anything wrong. It would be dangerous to myself and others to keep using it on that basis.

doublelayer Silver badge

Re: Imaginary Property does not exist

It would be a punishment, just a much smaller one than they deserve and one that harms someone else at the same time. Companies with LLMs would lose a lot of money if the models they spent tons of money on were released for free, so they would really hate it. For the other reasons, that's not good enough. They should be required to destroy the models that contain the data, destroy their copies of the data, and they can go back and train on the remainder of their training data, already confirmed to be something they have rights to, using their latest software. That probably makes for a bad model, but that's their problem.

doublelayer Silver badge

Re: King Cnut

We may not be able to prevent LLMs from being created. That doesn't prove that the tide is useful. It doesn't prove that the intentions were wrong. It also suggests that your idea of why the Luddites were Ludditing might be wrong, but you're far from the only one to make that mistake.

However, in this case, that's not my opinion. Make an LLM if you want. I don't find them very useful, and I'm more than happy to ban the use of them in some cases, but if you want to make one and use it to try to do your work, have at it. Just don't steal people's work to do it. You have to buy that when it's copyrighted. If you don't want to, you can use any stuff that is in the public domain and anything people agree to give you. I may not think your electricity usage is the best use of that resource, but you're paying for it, go ahead. I may not think you have the ability to make a good product, but it's your product, don't worry about me. Trying to protect people from abuses, specifically ones that have been obviously illegal for some time, is not trying to hold back a technology. You can use copyright infringement for a number of things, and some of those are things I find useful. That still doesn't justify letting you do it.

Apple called on to ditch AI headline summaries after BBC debacle

doublelayer Silver badge

Re: It's garbage

There aren't secret or copyrighted laws in these cases. There can be exceptions, for example where a law mandates a standard and ISO won't give you the standard without payment, but most cases don't involve that kind of thing so we can ignore them for now. The problem is that, even when you scrape all the laws and feed them into an LLM, they can easily mistake things the way they mistake lots of other things. A law means you are allowed to conduct a certain action, and you are sued for conducting that action, sounds like a match. Except the LLM has not noticed that the law allows you to conduct that action if you are a law enforcement officer in active duty following a disruption to communication caused by a serious natural disaster or terrorist attack, but that only appeared once in the training data so the LLM didn't recognize that you're none of those things.

Best case: a lawyer, paralegal, or other legal person reads the produced document. They weren't aware of the law, so they look it up. In the summary, they realize it doesn't apply to you. They throw out the document and start again. Maybe the LLM will produce something correct the next time. Result: the time to generate the original document and the time to review it for errors is lost.

Average case: A lawyer hands the document to a paralegal and says "check this". The paralegal reads the document and finds the reference to a law. They spend a while reading the text of that law to confirm that, even though the summary seems to limit it, the LLM which is supposed to be the next great thing may have found a cool loophole which will get this client off. They spend several hours checking this to realize that it doesn't help. They report their problems to the lawyer. The lawyer sends the report to the prompt generator. The prompt generator makes a new document and the process repeats. Result: several hours added to your legal bill.

Worst case for now: The lawyer hands the document to a paralegal and says "check this". The paralegal sees that a law is mentioned and sees the quote that the action is allowed. They check that the law exists, and it does. They check whether the quote is in there, and it is. They send the document back approved. Result: "Guilty. We are also considering contempt of court charges for counsel for the defendant."

Jury spares Qualcomm's AI PC ambitions, but Arm eyes a retrial

doublelayer Silver badge

"He believes in one set of acceptable copyright licenses of works of opinion and another set of acceptable copyright licenses for functional works."

Which is fine if you're discussing what you'd like copyright law to be, but completely meaningless when discussing what copyright law actually is. He already mixed those a bit too much for my liking in that essay, but you went the whole hog and started citing that source like it makes points about actual law which, in reality, are his opinions.

doublelayer Silver badge

Re: Appeal

Without access to the specifics, I also thought that ARM was more likely to have written this into their licenses. I don't like to bet, but if pressed, I would have guessed that ARM was more likely than Qualcomm to win here. That they haven't could mean that the jury simply got things very wrong, but it could also suggest that ARM did in fact mess up here. I have nothing to suggest otherwise beyond my instincts about how smart ARM's lawyers should be, and we do have to consider that these are either the same lawyers or close colleagues of the ones that persecuted someone who wrote a good textbook used to make things ARM needs on dubious trademark terms, so they aren't infallible. My instincts on their competence aren't strong enough for me to have confidence that the jury erred here. Maybe the trial on the remaining charge will give us more information.

doublelayer Silver badge

Re: Appeal

With ARM's contracts, at least as ARM described them, they had to demand destruction of the parts because there was no other remedy allowed. The next step, presumably, was for Qualcomm to go to ARM and ask if ARM would retract that demand for destruction if Qualcomm paid them some money, which they could negotiate. Qualcomm instead chose to challenge the accuracy of ARM's claim, and they apparently got a jury to decide that ARM was wrong after all. I'm pretty sure that ARM didn't want those destroyed, they just wanted every bit of leverage to convince Qualcomm to negotiate. They're probably quite angry now, so the question is whether they have opportunities to be vindictive and if they take them.

Are you better value for money than AI?

doublelayer Silver badge

There are some parts of some jobs that could be done by a chatbot. For example, support requests, which often fall into two categories: the one where the user has to read the manual but didn't and the one where something weird is broken and we need to figure out what it is. A chatbot could, in theory, answer the former basic queries. Pretty much everyone would be happy if it could do that. The problem is that, when it isn't a basic query, it may try to answer it anyway with an answer likely to be unhelpful at best, and there's a chance that, while answering something basic, it will go off the rails and print something ridiculous nonetheless. The technology superficially looks like you could use it for some narrow use cases, but each unreliable example makes it harder and harder to do so.

Techie fluked a fix and found himself the abusive boss's best friend

doublelayer Silver badge

Re: Cobol...

I prefer "did". I could speculate about places where it could have come from, and I often do, but that is rarely as helpful as drilling into it and figuring where I got it from, where that thing got it from, and so on until I find the first thing in the chain that gave me the wrong thing. The other risk is that I posit something as an "it could have come from here", the recipient hears it as an "the problem is here", it turns out it didn't come from there, and we have disagreements all round about whether I'm an idiot and did what I should have done.

OneOdio Focus A5: Big battery, budget sound, and a bargain bin price

doublelayer Silver badge

Re: Getting names wrong

I'm not sure it will be as big a problem as it seems. While you're right that odio is hate in Spanish, audio is as much a word in Spanish as it is in English, and people will understand that. I'm reminded of the urban legend that a car named the Nova sold badly in Spanish-speaking countries because "no va" means "doesn't go". Most people understood the etymology of new, easy as the Spanish word for that is nuevo/nueva, and there was not a difference in sales. For a similar reason, the word "one" is the word for sand in a number of Polynesian languages including Maori and Hawaiian, but I don't think people are going to be confused by that either.

Biden’s antitrust crackdown on tech M&As may linger into Trump’s reign

doublelayer Silver badge

Re: Need a full ban on M&A

"Whether "consumers didn't care or even notice" is irrelevant. People often don't "notice" when a harmful strategy is imposed on them."

Point taken, and you are correct. I was trying to describe mergers that are actually inert as far as consumer experience goes, which do happen. Those are quite common for very small businesses, such as local ones, where the person who started it and operated it wants to stop doing that, either because they're retiring, moving, or they decided operating something small and retail was not fun enough that they want to do something different with the rest of their time. Successful businesses are sometimes merged with a company that just wants to keep operating them for the profit, allowing the former owner to leave without having to shut it down and liquidate it. Customers often prefer that the business didn't shut down. You are right that some harmful mergers also go unnoticed.

"Cheaters gonna cheat. It is in the best interest of society to catch and punish them."

The problem here is that those cheats wouldn't be obvious, which makes them hard to catch, and they might not be legally cheating at all, which makes them hard to punish. When mergers are possible, but sometimes we don't allow them to complete, there's an explicit denial and an arrangement to circumvent that is more easily detected. If all mergers are prohibited, then you need new laws describing how non-merger relationships are restricted and new organizations vetting what might be secret negotiations for something that looks too merger-like. Those laws don't exist right now because, when companies normally want to merge, they do merge, and that makes them easier to detect and stop if we need to.