* Posts by doublelayer

10600 publicly visible posts • joined 22 Feb 2018

Now Trump's import tariffs could raise the cost of a laptop for Americans by 68%

doublelayer Silver badge

Re: Eh?

In a way, but a lot of this argument has boiled down to trying to say that only one side pays for the tariffs. Pro-tariff people shout "China will pay for the tariff". They are wrong. Anti-tariff people have at times reversed this and say things along the lines of "only we are paying for the tariff". They are also wrong, or rather, they are simplifying so much that they're lying, because they know that, in reality, both trading parties to a tariff suffer. Only the government in the middle stands to gain some revenue, and even it may not get as much as it was planning when it all resolves.

The point of all of this is that, when a retaliatory tariff is put in place, it's not really seen as shooting yourself in the foot. It's seen as shooting the other guy, who recently shot at you without provocation. You won't benefit from that, which is why most retaliatory tariffs are put in place with the idea that they should be removed as soon as possible, but the diplomatic equivalent of a fist fight where one party is hoping the other fighter will back down isn't going to work in a painless way. The question you can consider is what you would do to someone who started making things worse for your economy. Would you accept their penalty because any action you take in revenge will also harm you, or would you take some revenge in the hope that it makes them reverse course? Neither answer is always right. With hindsight on how long the tariff stayed in place, you can calculate which was the best option in each case, but you don't have any of that at the beginning because it revolves around what a diplomat, or in this case an executive, was thinking.

doublelayer Silver badge

Re: Eh?

"However since we are smart enough to realise tariffs are bad then why would we retaliate by making our own shopping more expensive?"

Generally, because they're hoping the tariffs will be dismantled. What they're trying there is basically "See how much this tariff isn't helping you? We don't like yours either. Why don't we call this whole thing off?", and it works. When Trump tried some of his tariffs the first time, for example ones on Canada and Mexico, those countries responded with targeted tariffs, ones that would hurt industries in areas represented by Trump's allies. Many of the tariffs concerned were removed, with the retaliatory ones going too. The challenge is what you do when you try that method and the other side doesn't turn off theirs. You don't really want to keep yours, but if you eliminate them too quickly, people learn that they can put unilateral tariffs on you and you'll cave quickly. That's when an otherwise tariff-reluctant government can find that they don't have much of a choice and why more generalized anti-tariff agreements have been set up.

"And also why do we make things more expensive anyway with the tariffs we already have?"

Either those diplomatic reasons, because they think some kind of malfeasance is going on, or because some local industry has succeeded in getting protection for itself. Those are the general reasons, and people with opposing opinions on the effectiveness of tariffs don't really disagree about them. They just disagree on whether they're a good thing.

doublelayer Silver badge

Re: Buried lede

It sounds like that because the initial poster got it wrong. It's not a hardened Android or AOSP build. It is an open Linux one. The reason it's so expensive is that they're trying to write their own, fully open source mobile operating system, and they probably sold like fifty of them. Okay, fifty is an underestimate, but the actual numbers are not high. They've also been selling the same model for seven years, took two years to ship flawed initial versions, and the version they have now isn't exactly a smooth user experience, so they're still not selling very many. Writing a Linux-based, open source operating system would not be simple even if everything were done right, and many mistakes happened during this process.

doublelayer Silver badge

Re: Is this madness unbounded...?

I think the only fundamental point is that Trump really likes broad, general, and high tariffs, always has, and has convinced enough people that they will solve their problems. They are now going to find out whether they actually do. Of course, there may be obstacles to imposing all the tariffs he talks about having because he does like to make promises he can't keep, but tariffs are more in his power than most other things he's promised. For example, he has no chance of implementing an income tax offset from the tariff; he can't do it unilaterally and politicians are not going to help him, but he can impose most of the tariffs without external approval.

The ultimate Pi 5 arrives carrying 16GB ... and a price to match

doublelayer Silver badge

You might be right that demand for the 16 GB model will be lower than demand for the lower-spec versions. That would just mean that they make fewer production runs of that version. It wouldn't really change what price they can charge for either of them or whether they have enough buyers to produce the 16 GB version profitably. I'm guessing that there are enough people who want to use the Pi 5 as a desktop, or who have a memory-hungry process they already run on Pi 5s, to buy the 16 GB version, so they won't lose money on this production run.

doublelayer Silver badge

"The Pi also does not have a GPU and the processor is rather limited, so it's not ideal for that application in many ways."

I have to correct you there. The Pi does have a GPU and you can use it through a few interfaces. OpenGL and Vulkan are available, and some of the boards support OpenCL 1.2 at least. It's true that most normal acceleration systems will not support this, but that is a very different thing than the board not having a GPU at all. I wouldn't recommend it for running an LLM, but you have the ability to do so if you don't mind slow output.
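If anyone wants to check what their own board exposes, a quick sketch (assuming Python 3, and that the diagnostic tools from the mesa-utils, vulkan-tools, and clinfo packages may or may not be installed) is to see which of them are present:

```python
# Report which of the common GPU diagnostic tools are installed.
# glxinfo covers OpenGL, vulkaninfo covers Vulkan, clinfo covers OpenCL.
import shutil

for tool in ("glxinfo", "vulkaninfo", "clinfo"):
    path = shutil.which(tool)
    print(f"{tool}: {'found at ' + path if path else 'not installed'}")
```

Running the tools themselves will then show which API versions the board's GPU actually supports.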

doublelayer Silver badge

Re: Viable as a desktop replacement?

Most of that should work. The only thing I'd be concerned about is Zoom. The Pi is capable of all the other tasks as a low-end desktop, and if you use an NVMe drive, load times are significantly reduced.

Zoom, however, makes me nervous. It doesn't look like they have native support for ARM Linux in their native app (using this system requirements page). That means you'll have to use Zoom from a browser. Browsers and graphics have always been a weak spot for the Pi. For a long time, Chromium was the only browser that could use hardware video acceleration of any kind on the Pi; theoretically, Firefox can do it now. That wasn't necessarily a problem when playing low-resolution streaming video, but it will likely be a much bigger problem when handling multiple input streams, an output stream if you're on camera, and screen sharing. If you need Zoom for lots of video, this would be worth testing. Maybe you can find someone who already has a Pi 5 who can report on this.

doublelayer Silver badge

Re: Just sayin' 'no'

I'm not sure why you even considered the Pi for that use case in the first place. If you need 64 GB, and I'm not sure why you do but I'll take your word for it, it seems obvious that a board with a maximum of 8 GB until today, which has only been available for a couple years, wasn't going to do it. You're comparing two very different types of computer, as you've described in detail, which are suitable and realistic for completely different uses.

doublelayer Silver badge

Re: At that price...

Tell me, what is the Pi about?

Because for me, my massive collection of Pis is not about ARM. ARM is how they were able to make such a thing, but I did not buy a Raspberry Pi because I was annoyed with Intel and had some objection to running my workload on those. I bought mine for the following reasons:

1. Cheap, but powerful enough to run a non-stripped OS.

2. No noise, low power, and low heat during operation.

3. Compatibility with Linux with sufficient support that I don't have to go to weird efforts to make things keep working or replace the box if this one breaks.

And the Pi does all of those things very well. There are a couple of minor glitches on number 3 because of the Pi's nonstandard kernels, but those are mostly erased because the Raspberry Pi people have done a fantastic job of maintaining support for all their variants, something other ARM-based SBCs don't do. However, benefit 3 is also available on x86, and benefit 2 is available with small Intel-powered boxes. That means that, for me and many others who have used Pis across lots of cases, a 16 GB Pi and a 16 GB Intel box may end up looking more similar after all. The Pi has GPIO support that the Intel box doesn't have, but I do not have any use cases that need 16 GB of RAM and the GPIO at the same time, and I could always buy a Pico or Zero and use it to drive GPIO devices for me.

I do want to know what the Pi is about for you, though. There are many use cases where the Intel boxes we're comparing with wouldn't be the best option, but that's not what you said. If there's some other important distinguishing factor, I am curious what you had in mind.

AI can improve on code it writes, but you have to know how to ask

doublelayer Silver badge

Re: Await Fix

I disagree there. The AI did almost the same kind of thing that I would have done. I wouldn't have done the first step, because my bias is always to do things incrementally when possible to avoid wasting space, but the rest of the steps match my typical workflow:

Stage 1: You want a solution to your problem. Write some code that solves the problem.

Stage 2: You want the code to run faster. Identify the simplest bottleneck in the code I wrote and fix it, again as quickly as possible.

Stage 3: Improve the tools I'm using to speed it up.

Stage 4: Can I parallelize?

The bot didn't bother with stage 5: try to find ingenious mathematical solutions to get it done faster, and it probably couldn't have found any. It instead used a different stage 5: add extra stuff nobody asked for. However, the progression otherwise makes sense. I also tend to find out how fast the code needs to be before starting this process, so if stage 3 performance is required, I don't bother with the first two. Since many problems are just fine with stage 1 performance*, it isn't automatically bad to start there.

* For example, if they're going to run a script automatically once a week and the simple version takes ten minutes to run. I could spend three hours writing new code and the script would now take eight minutes to run. The time savings would take two years to equal out, so they don't care. But I could also take an extra eight hours and the script would take ten seconds to run, so now the time savings equal out in less than a year. They still don't care, because for something that runs only once a week and doesn't need human attention, nobody cares that the computer does it for ten minutes. If they run it every hour or if it's going to need to scale to millions of requests or entries, I would speed it up. Otherwise, they want lowest maintenance costs and time to write it, with execution speed a sacrifice they don't even notice they've made.
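To make those stages concrete, here's a toy sketch (my own example, not the code from the article) of what each stage might look like for a trivial stand-in problem, summing squares below n:

```python
# A toy illustration of the staged optimization workflow described above.

def stage1_naive(n):
    # Stage 1: just solve the problem, no thought about speed.
    total = 0
    for i in range(n):
        total += i * i
    return total

def stage2_builtin(n):
    # Stages 2/3: remove the obvious bottleneck by pushing the loop
    # into faster machinery (a builtin here; a library in real code).
    return sum(i * i for i in range(n))

def stage5_math(n):
    # Stage 5, the one the bot skipped: an ingenious mathematical
    # solution. Sum of squares of 0..n-1 is (n-1)n(2n-1)/6.
    return (n - 1) * n * (2 * n - 1) // 6

assert stage1_naive(1000) == stage2_builtin(1000) == stage5_math(1000)
```

Stage 4, parallelizing, is omitted because it wouldn't help a task this small.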

doublelayer Silver badge

"Not knowing what it did makes the article seem speculative and a little boring. (Sorry)."

We do know what it did. All the generations are posted here. Most of the optimization it did was farming out the calculations to libraries. It did quite a few things that are typical of good software, but almost in the way that people showing off would. The rest of the optimization was adding features that nobody asked for, which could be positive or negative. Sure, some long-lasting tasks might benefit from a metrics reporting server, but for a task that originally took half a second to execute, back when it was allocating a few megabytes it didn't have to, that might not be the most useful addition, and it was definitely not specified.

doublelayer Silver badge

Re: I'm not following here

Yes, in the same way that you can do that to an interview candidate. This problem is exactly the kind of thing I've been asked during interviews, where they are looking for me to demonstrate the most efficient way of achieving a goal that nobody cares about. When you're actually working there, they don't care in the slightest how long the computer takes because 0.5s of server time is free unless it's at scale, and instead they're optimizing for lowest writing time and/or lowest difficulty during maintenance. Writing it more efficiently is something the programmer tries to do even though management is mostly ignoring whether they have.

What this won't do so well at is getting bugs out of some code. As demonstrated, as it tried more and more advanced methods, it introduced more and more bugs. It also started expanding outside its brief. For example, the first version creates a function which does the task provided and returns the result as an integer. By version three, it was printing the result to the console with extra text. It wasn't told to do that. Maybe we wanted to use that result in a calling function, but now we're going to have to fix that. That expansion, by guessing requirements, just increases the number of places where bugs can crop up. That massive speedup, by the way, wasn't from the LLM finding a new, more efficient algorithm. It came from using a library to run the computation outside the Python interpreter. Of course, if you need this answer really fast, not writing the number-crunching part in Python might have been a better starting point.
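As a hypothetical illustration of that kind of contract breakage (my own toy functions, not the actual generated code), compare a version that returns its result with one that prints it instead:

```python
# Version one honors the implied contract: compute and return an integer.
def count_positive_v1(data):
    return sum(1 for x in data if x > 0)

# A later "improved" version prints the result with extra text and
# drops the return, so every caller now silently receives None.
def count_positive_v3(data):
    result = sum(1 for x in data if x > 0)
    print(f"Found {result} positive values!")  # output nobody asked for

doubled = count_positive_v1([1, -2, 3]) * 2   # works: 4
leaked = count_positive_v3([1, -2, 3])        # prints text, returns None
# leaked * 2 would raise a TypeError, so the calling code must be fixed
```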

The bot's output was somewhat impressive to me, but mostly because I've seen much worse code generated for even simpler cases. It wasn't because it did anything particularly difficult.

doublelayer Silver badge

Re: Why bother?

But there is a chance that at least one of 3,999 or 99,930 does not appear in your set. True, that solution is probably more likely to pass tests than the more advanced methods the LLM did try, because the chance that the knot-cutting solution is incorrect is only 0.0045%, but it is guaranteed to be wrong for that 0.0045% of cases.

Apple shrugs off BBC complaint with promise to 'further clarify' AI content

doublelayer Silver badge

They might win, but I doubt they would win damages large enough to make Apple stop and think. If they did get the first court to impose such things, that's when Apple's lawyers would get into action, making sure that, at the end of this, Apple would pay a much smaller amount some time in 2034. If their lawyers ever get lucky, then Apple pays nothing and it all goes away. It's not that they have enough lawyers that they never lose, but rather that they have enough lawyers that they never lose big and they have enough money that they can lose small over and over and not notice.

doublelayer Silver badge

My money's only on the latter. Apple won't face any serious libel suits over this because they can inundate anyone in counteractions until they go away. It will take something they hate seeing, trumpeted by actual media, for them to try to stop.

Windows 11 24H2 can run – sort of – in 184MB

doublelayer Silver badge

Re: Imagine the world we'd be living in...

No problem. Just give up all the other things that weren't available when you had 4 MB. That screen of yours, I'm guessing it's a bit higher resolution than you had when it was connected to a computer with 4 MB. Your networking options were probably a little more limited too.

doublelayer Silver badge

Most of that is going to be the RAM used by the browser. Windows can evict stuff from cache and free up a lot of that RAM, but the browser generally doesn't and happily uses as much as it can.

This is, incidentally, the answer to Richard's question: "The Reg cannot help but wonder why Windows 11 24H2 has the requirements it does, given that the core of the operating system clearly doesn't need them." It has them because they don't want people to use a device with 2 GB, run a browser that wants to use all of that to keep ten tabs open, and blame Windows for why it's awful. 4 GB is not needed for Windows itself, but it is their recommended minimum for running Windows and having enough room for other applications. They especially don't want people manufacturing devices with 2 GB and telling their irate customers that "Microsoft said this was okay" when they call to complain.

Dude, you got a Dell, period! RIP XPS, Inspiron, Latitude, Precision

doublelayer Silver badge

This is called marketing. Marketing specifically calls it rebranding, but you don't have to worry about what they're talking about because what it really means is that they wanted to change something because they hadn't in a while and this is what they picked. They're not trying to make things worse for you to extract more money. In fact, they probably think they're helping by simplifying things a little. After all, was there any clear difference between a randomly chosen Latitude and a randomly chosen Vostro that made the names useful? I don't know of any. Or maybe they thought they were removing an old name so they'd sound newer, but even then it's a benign change. Companies do this all the time. It wasn't very long ago that AMD significantly renumbered their processors, which happened because Intel significantly renumbered their processors, which happened because AMD slightly renumbered their processors, or because Intel thought that sticking an "Ultra" into the brand name would not look as silly as it actually does, which they probably did because of the "Pro Max Ultra" thing that Apple did, which they did because they wanted to have more expensive tiers of iPhones. It doesn't have to affect our lives very much.

doublelayer Silver badge

Re: Simple?

Yes it is. Their problem seems to be that they want Apple's branding simplicity. However, Apple makes only a few types of laptop. Understanding the differences between a MacBook Air, a MacBook Pro 14 inch, and a MacBook Pro 16 inch is not hard because there are only three of them (three models, and they're pretty much the same laptop except for the obvious differences). Dell has hundreds of models, so it isn't easy to know the differences between all the available ones, or even which ones are available. That allows them to serve more markets, and there's a reason for them to do it, but it unavoidably makes the product naming more complex. Changing the first words in the brand isn't going to make any of it simpler.

doublelayer Silver badge

What it should mean:

Dell: Word processing, email, web browsing. You should get this for pretty much everybody.

Dell Pro: If you're doing something that needs more processing and memory and want something larger.

What it actually means:

Dell: Cheapest, probably, check that it actually is.

Dell Pro: Basically the same stuff but more expensive.

As with pretty much every manufacturer, the only way to figure out what is in a model is to ignore the brand name and only look at internals and price. Once you've found one that looks to be the cheapest that has what you need, then look at the model number to work out repairability. Sometimes, the brand is related to that, but often, it's pretty random. Often, I've found a model that's theoretically the low-end range with better specs than a more expensive one in the higher range.

Google's 10-year Chromebook lifeline leaves old laptops headed for silicon cemetery

doublelayer Silver badge

Re: "...The lowest-end hardware, which is a little cheaper than comparable Windows machines..."

I'll have to take your word for how good a low-end Chromebook is, though I have heard complaints which suggest that your observation may not be universal. Since I have used no Chromebooks, I can't counter it with experiences of my own. I have experienced some low-end Windows computers, and it's not quite as dire as you paint it. I have a feeling that many people who have such things do not like them because it is not difficult to use them in a way that makes them miserable to work with. However, I have had the misfortune to use Windows 10 on 2 GB of RAM and eMMC storage and, to a lesser extent, Windows 11 on 4 GB of RAM, also with eMMC storage. I could browse around with that. I could read email and edit documents with it. I could write code on it, which probably wasn't in your list but is one of the things I did most on that machine. It was far from ideal, but it wasn't useless. I'm therefore not entirely convinced that Chromebooks are a significantly better user experience at the low end, and if they are, I would guess without further description that the difference is that it's harder to make Chrome OS do inefficient things.

doublelayer Silver badge

Re: End of Chromebook?

Many, if not most, Chromebooks are already at comparable prices to normal laptops. People are still buying them enough for new models to get made. I don't know why, but they are. That means they'll probably be good to continue selling those things for some time. The lowest-end hardware, which is a little cheaper than comparable Windows machines but not as much as you might think, is probably not going to last the full decade for reasons other than software updates. Early obsolescence is still in their toolbox.

Boffins carve up C so code can be converted to Rust

doublelayer Silver badge

Re: Why?

"So what actually happens say in a production system?"

You see a compiler error and fix your code. Or you see a compiler error, decide you can't or won't fix it, and label it. That's the point of checking for and not allowing those things. It requires a different language in this case, but it's very common for C compilers to complain about certain things that can be compiled but shouldn't be, and it is also common for a production system to operate on the policy that you don't have compiler warnings on your build unless you can explain why they are not a problem here.

doublelayer Silver badge

Re: Rust, Agile...and then some questions....

Rust was designed to fix a couple of classes of problems, not everything in software development. No language can fix, or even slightly help with, problems 1 or 2. Technical debt is a very broad topic, but if you count memory safety issues as technical debt, then maybe it would help you there. These problems are almost entirely unrelated to one another.

doublelayer Silver badge

Re: A.I.

You probably could build an AI* that could do that conversion, but you have two challenges that have prevented it before and will again:

1. This one has been around for a long time: readability and maintainability. I have a program here which was originally written in assembly, I think Motorola 68K assembly. That wasn't very useful, and it was not attached to a lot of interfaces, so it wasn't hard to translate to C. So now, I don't need to virtualize something to make it run. However, I still don't really understand what's in there and can't modify it. I have C code that produces the same results the original did, but modifying it means drilling down to understand what every part was for, and that is tricky because I had only machine code to start with. Every time automatic translation happens, readability tends to get lost. Even if the translation is perfect, it can be difficult to impossible to make modifications later.

2. Accuracy. This is a bigger problem with modern AIs, but it's always been a challenge. Most software does not have complete, mathematically proven test cases, where if the tests all pass then we're absolutely certain that nothing is wrong in this code. Mostly, we have basic test cases, where if any of them fail then something is very wrong with the program. Often, we don't even have too many of those. This means that when you're translating code, you can't know for sure whether it's even accurate to what the original program would have done unless you run them in parallel forever and set off an alarm if they ever disagree. Modern AI is very likely to add bugs when it translates. Admittedly, most of those are likely to be obvious, ranging from its output simply not compiling to the program crashing obviously, but when it looks like neither of those has happened, that does not prove that the attempt was successful.

* AI: depending on what you intend this to mean. We have explicit rules-based programming language translators, but they generally don't qualify. I assume you mean a coding-focused LLM, which often makes mistakes in its output, especially when the job is big. Translating a large program (and it would have to be large, or you could let a human translate it) with insufficient documentation is very likely to be too large a task for such models to complete correctly.
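The "run them in parallel" check from point 2 can at least be sketched: feed the original and translated implementations the same inputs and alarm on any divergence. The two functions below are hypothetical stand-ins for the real programs:

```python
# Differential testing sketch: compare a hypothetical original
# implementation against its translation on random inputs.
import random

def original(x):
    return x * x + 1

def translated(x):
    # A faithful translation must agree on every input.
    return x * x + 1

def differential_check(trials=1000):
    for _ in range(trials):
        x = random.randint(-10**6, 10**6)
        if original(x) != translated(x):
            raise AssertionError(f"implementations diverge at input {x}")
    return True

differential_check()
```

Of course, this only raises confidence; it never proves equivalence, which is exactly the accuracy problem described in point 2.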

doublelayer Silver badge

Re: Down vote?

I realize we're getting close to having more people who didn't downvote trying to explain them than there were actually downvotes, but I do have a few comments that could explain part of it:

"If people are using pointer arithmetic, it's for a reason. And Rust won't be able to do a damn thing differently or any more securely about that."

This is an unproven assertion. There may be a reason that could be done better or equally well, or it might be required. It all depends on why that is in there semantically. That level of generalization is as wrong as saying that everything in C should just be rewritten in Rust because it all works.

"You cannot write safe code that interacts with devices, computer buses or the majority of hardware directly. It's just not possible.": It depends what the hardware is, but that's not the case for all of it. Plenty of devices can be used without having to share raw memory as they imply.

"Until we change all the BIOS/UEFI, PCIe etc. specifications to allow arbitrary device detection and communication at bus speed using only, say, JSON or XML or similar languages, you're still going to need to write those bits in C (as they have to make assumptions about what the underlying bits mean and directly access memory)"

Not all the time. Direct memory access is necessary for only some hardware, and it's perfectly possible to handle nonstandard protocols in a memory-safe way. It's not as elegant as having everything in a standard form, but the quote implies that certain kinds of safe handling are impossible when they are not only possible but have sometimes been deliberately implemented that way. Even when you do have to do something in a memory-unsafe way, C is not the only option for doing it, though it's often the most convenient.

doublelayer Silver badge

Re: Why?

I think the theory is that you convert to Rust, and then all the new modifications must be done in Rust so it will be more difficult for people to introduce that kind of bug because Rust will block them. Or you could add those checks into a compiler and refuse to merge something unless your C compiler accepted it. Either way, while there are cases where it would be theoretically useful, I doubt there will be all that much adoption. Among other things, I wonder how much readability is lost with conversion from limited C to Rust, because every other conversion I've done has involved some weird additional syntax which my brain takes longer to parse.

Intel debuts laptop silicon that doesn't qualify for Microsoft's 'Copilot+ PC' badge

doublelayer Silver badge

Re: So ?

No, because an NPU is not required at all for Windows 11. Right now, it means that they cannot use a marketing badge that nobody cares about. Maybe later, there will be some local AI feature that they won't be able to run, but so far, there isn't even that.

Even at $200/mo, Altman admits ChatGPT Pro struggles to turn a profit

doublelayer Silver badge

Re: People use it more than we expected?

But you do know that whoever is willing to pay $200 for a plus version is going to use the plus version a lot. It would have made a lot of sense if OpenAI had somehow deluded themselves into thinking that tons of people were going to buy their plan, but not many did, so they got little money for their trouble. The thing those payments are for cost quite a bit to create, after all. What makes a lot less sense is why they're surprised that the people who bought this are using it, and using it so much that they're losing money on that subscription. That's a clear indicator that they did not think through what it cost to deliver the service they were charging for, which is a much more basic mistake.

doublelayer Silver badge

Re: People use it more than we expected?

He evidently expected that this would be the kind of product where people buy it but don't use it, so you can afford to serve the higher users from the savings on the low users. Like unlimited storage packages, where many users buy them and store 200 GB in them, so you can afford a few 30 TB users. Altman seems unaware that someone who is willing to pay $200 per month probably expected to use the services kind of a lot, and anyone who didn't use them too often saw that pricing and left. Sorry Sam, that price is out of the range for people who might use this and are willing to pay a bit to have the option available.

Amazon worker – struck and shot in New Orleans terror attack – initially denied time off

doublelayer Silver badge

Re: Wrong type of leave…

You're right, and if something that severe happened, I wouldn't be trying to fill out the form. I'd send my manager an email and tell them to fill out the form, see you when I'm done with the medical stuff. However, I've tried to fill in the forms myself before, and the reason that I've mostly done them correctly is that I take a lot longer to do them the first time I try because I'm being very careful with what I put in each box. That means that, if this worker decided they would fill in the form themselves, I wouldn't be surprised if they made a mistake. They had a better option, making the employer do the paperwork, but that doesn't mean they used it. For that reason, I can't jump to either conclusion about whether Amazon is lying or just unhelpful.

doublelayer Silver badge

Re: Wrong type of leave…

I'd buy it. Many places I've worked have worker management products with extremely clear forms consisting of about thirty boxes with short labels and no instructions about what is supposed to go in any of them. Some of them need things that the software should be able to but didn't pull from my existing records. Others need codes pulled from a document that's somewhere in SharePoint or Google Drive depending on whether this is a Microsoft or Google shop. Some actually contain data about my request. Even more are just there and can be safely left blank. It always seems easy to fill in such a form incorrectly, especially the first time I've done it. That doesn't mean Amazon wouldn't or didn't mistreat the employee here. Something plausible is also a good cover story if you were at fault.

How a good business deal made us underestimate BASIC

doublelayer Silver badge

Re: Replace "Beginner" with "casual programmer" and I am 100% on board

When you're collecting data, how do you keep track of it? Do you store it in a location with a name, where you don't store unrelated data? Congratulations, you know as much about files as you'll need while writing software. Unless you're writing a filesystem, kernel module, or something that's trying to deal with disks, all you need to know is that files have names and contain bytes and directories have names and contain files. That last bit is not technically true, but you don't have to care. I'm sure you know both of those things already.

There are a lot of languages that are simpler for the novice than the ones you listed, but they all have limitations, which is why more general and powerful ones like those you listed are popular. Nothing requires you to use them. I would also encourage you to find the many tutorials that are out there for all the languages you name, because we really aren't trying to keep you away from knowing how to write them. There will be many that describe how Python syntax handles dictionaries, which could help explain it*. Having a data structure like that is necessary for most programs, so having one provided for you is generally helpful. If you got a modern BASIC, you'd have to build that yourself, and I'm guessing you'd find that a little trickier than learning the Python syntax for them.

* The simple version is that you name your dictionary and put your key in brackets. If your dictionary is d and your key is "hello", you can set a value with d["hello"] = "world" and retrieve it with d["hello"]. There are other options, but that version is all you need to use dictionaries at the start. Or, if these aren't useful, you could just not use any dictionaries. Python is a lot like BASIC if you just don't use the structures you dislike.
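The footnote's description, as a few runnable Python lines (the dictionary name and key are just the examples from the footnote):

```python
d = {}                  # an empty dictionary named d
d["hello"] = "world"    # set a value: the name, then the key in brackets
greeting = d["hello"]   # retrieve it the same way
print(greeting)         # -> world
```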

doublelayer Silver badge

Re: I did comment the other day....

My argument did not rely on DOS being the epitome of anything. It applies equally well to any of the other operating systems that had no security in their design. A lot of the operating systems of the time, especially on the home computers that this article and the comments are talking about, were exactly the same. Mac OS did it. OS/2 did it. RiscOS, BeOS, AmigaOS, all of them did it. This means that, when comparing any of those systems to modern ones, they'll always come up short on security because zero is an easy number to beat.

But your point is correct: not every operating system was like that. Some of them did implement inter-process security. They did not do it perfectly. Some of their vulnerabilities were due to the same things that cause modern ones: coding mistakes, resource limits, or just making it easy for admins to mess up. Others were a result of the restrictive design, which is less of a factor with modern operating systems. Their simple design did prevent some classes of vulnerability, not because the design handled them, but because the features necessary to have the vulnerability weren't available. It's similar to saying that my house with no windows is more secure than your house with windows because you can't throw a rock through my nonexistent windows. That is true as far as it goes, but by being able to look through your windows, you are more able to see and respond to possible threats. A system with no networking wasn't vulnerable to attacks over that connection, but it's not because its code was any better.

doublelayer Silver badge

Re: pot?

"So whats the result if no files, going with some sort of uber-Registry containing it all?"

That's one of the several things I don't get. As far as I can tell, it's objects held by software in their indexed, ready to edit form. What happens when you want to use those objects in other software, I don't know. I would guess that you're supposed to export it from the software into some format that you can import into another piece, and I'm not sure where the data is supposed to be while you do that, but I'm now trying to figure out what you do to promote data interchange when the simplest mechanism is explicitly denied. I think it's up to Liam to explain what you would do where files aren't available, and so far, I've found that part either vague or missing entirely in any proposal he has posted.

doublelayer Silver badge

Re: English is one of the easiest human languages

My guess is that it will still be English for a long time. The people designing these things have seen lots of examples, and those examples were in English. If they're working internationally, the lingua franca they're going to use is English. For example, the Loongson architecture, which should theoretically be easy to do in Chinese because it's only developed there, still uses English-style mnemonics. You could argue that they copied a lot of that from MIPS, and since MIPS used English maybe that explains it, but I bet it was considered easier to use Latin letters to avoid anything weird with encoding of Chinese characters, and if you're going to use Latin letters and you speak English, why not use ones that make sense there?

doublelayer Silver badge

Re: A novice does not know the difference between RAM and disk, and they should not have to

And in what form should your stuff be returned to you? You can insulate yourself from files whenever you want, but before you do, you have to decide on a format. Have a database server and you never have to manually read or write to the disk. You don't have to worry about endianness of your numbers or packing your data. But that only works if you've agreed to use the database's formats instead. You can go one level up and never serialize any of your types because a library is doing that for you, but you still need to limit yourself to things that library understands. You don't have to worry about any of those things, but you can stop worrying only after you've decided how they will be handled for you. Trying to opt out of handling them at all is laziness, the bad kind, and works the same way (i.e. badly) as people who insist that you just solve the problem without being clear on what the problem is.
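As a sketch of that trade-off, using Python's standard json module as the library that serializes for you: you stop worrying about bytes and endianness only after agreeing to the formats and types the library understands.

```python
import json

record = {"name": "example", "count": 3}

# The library decides the on-disk format, so we never pack bytes ourselves...
text = json.dumps(record)
assert json.loads(text) == record

# ...but only for types it understands. A complex number isn't one of them,
# so the decision about formats hasn't gone away; it has just moved.
try:
    json.dumps({"value": complex(1, 2)})
    serialized = True
except TypeError:
    serialized = False
assert serialized is False
```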

doublelayer Silver badge

Re: You had me...

I think we understand that it worked that way. I can even agree that, at the time, that made sense with how to do it. Most of the reasons no longer apply. For one thing, we have better error messages. If they typed "pront" in at the terminal, it still wasn't going to work, but now we'll write out the part that wasn't recognized so they aren't as confused. Whether that happens when they run the file or when they make a typo at the terminal, they're still going to have to learn that the machine won't fix your mistakes for you.

In many ways, all the things you praise about BASIC are maintained by Python, possibly one reason why people keep using that as an introductory language. Sit down at a Python REPL and you can also issue one-line commands and see results. Instead of line numbers, you write your blocks:

for i in [1, 2, 3]:
    print("Hello")

The main difference is that you don't have to re-enter everything, because we've decided storing your work to be edited at will is something you're too stupid to have right now. That made sense when there wasn't nonvolatile storage except for a tape, or when the editor couldn't be used freely because you had to wait for lines to be printed one by one onto paper. Neither applies anymore, and any student will have used free editing into files many times before they write their first code. The argument that we should go back to those things sounds like someone who thinks the way they did it must be the best way in the world, in which case I have to ask them why they weren't using older methods that real programmers did in the 1950s and 1960s. Let me guess, those were superseded by better technology, but technology stopped getting better when you touched it.

doublelayer Silver badge

Re: English is one of the easiest human languages

"I also think that agrees with doublelayer's view (and others'), reading between his lines (my bad if wrong) that English is in part, widely spoken for the same reason - its ease of use."

Sorry, it seems we did misunderstand one another a bit. I don't think English is used for its simplicity. I think English was used in BASIC because it was written by people in the US and UK to be sold to people in the US and UK. If there had been a much more straightforward language, it still wouldn't have been used because English is what they spoke. If people were building the early microcomputers on which BASIC would be used in Argentina and planning to sell there, then the language used would be Spanish and people would have lived with it.

English does have several simpler grammatical structures that make it easier to use and understand than other languages. I agree that simplicity is one of the reasons that English is popular in the world today, though to some extent I think the simplicity is the result rather than the cause of that global success. However, I don't think that explains most of it. Most of the popular languages are popular regardless of their complexity. English, Spanish, Arabic, and French are four out of the top six languages by number of speakers because those are languages that spread all over the place and were held there for long enough that many had to learn them. Chinese and Hindi are to some extent exceptions, but they had periods of forcing others to learn them as well. Simpler languages failed to achieve success because they had fewer people forcing others to speak the language, fewer incentives for learning it voluntarily, and fewer people expanding it to be more versatile.

doublelayer Silver badge

Re: pot?

The problem is that we've seen what happens when you blur away the existence of files. You get the smartphone. It's your data, you know. No need to wonder about where it is, Google's on it, you don't have access to the raw data anyway, the apps know where to find it.

And that is wonderful, assuming that you only ever want to do the thing the app gives you controls for. If you want to have some of those files on portable storage because your phone doesn't have enough internal memory, but the app didn't give you that button? Sorry, you can't do it. Do you want to back up your data and restore it to a different device? You could copy all the files and transfer them over, if we let you see them, but since you can't, you only have a one-size-fits-all backup method, which might work or might simply restore the app, helpfully back to factory settings except it remembers that you asked for dark mode. Maybe the app is doing something wrong? On a computer, I could edit a configuration file to make it do something differently. But even though the same configuration file exists on the phone, I'm denied access to it, so I'm out of luck. This is the kind of stuff that breaks when you try to pretend that files are unimportant.

A lot of the time, you don't have to look at or worry about internal files, but you really do want to have access to yours, and access to the internal ones is always useful to have in case you find a reason to want it. It's necessary if you're going to write good, structured code. Partially it's because it makes organization of something complex more feasible, and partially, it's because portability and reproducibility are critical here. Programming involves multiple layers of understanding how the machine works at one level, ripping that away and learning what's under that, then putting back the levels until you have the level of abstraction relevant to your task. Hiding something as simple as where your data is under a layer of vagueness is not helpful.

doublelayer Silver badge

Re: English is one of the easiest human languages

Could you explain what you wrote, though? In addition to all the valid criticisms of English, it seems to be missing the point that English wasn't used for the sake of simplicity, whether it has it or not. It was used because the people who wrote the language had English as a main language and were planning to have the software used by people who had English as a main language. Soviet programming languages used Russian as their main language, not because Russian is simple (anyone who tries to spell things will find that it's not, not that English should get many orthography points either), but because everyone involved spoke Russian, either because they were Russian or because they had to learn it in order to get to the places where the computers were available.

When dealing with a language like BASIC, language choice is actually quite unimportant. The small number of keywords means that you could use a language you don't speak, because memorizing the keywords won't take very long. This is, for example, why all the words commonly used in music are Italian. You don't need to learn to speak Italian to memorize maybe fifty of them; actually speaking the language would take years of study. Modern languages, with lots more library functions, do need to pick a language that the users understand, and English was chosen, again not for simplicity's sake, but because most of the people using and making it already spoke it to some extent and no other language met those criteria.

doublelayer Silver badge

Re: I did comment the other day....

"someone could write a modern (internet-capable, web-capable) BASIC for the Pi."

Of course someone could. You can make a language with any set of capabilities you want. The question is what you put in it and how you make certain things convenient and powerful. If it's not at least one of those, the language is doomed, and some languages get close enough to both that they'll get used unless yours does too. BASIC's structure doesn't lend itself well to scaling the number of options, because you either end up creating hundreds of built-in commands or you try to make all your functionality as a library and force it to work in the small subset of commands you're willing to include. That means that, if you want to be able to send TCP and UDP and ICMP packets, you're going to need to plan out a lot of syntax. That's not the biggest problem, though. The biggest problem is that you likely also want to receive those, which means you're most likely to build a network stack in a different language that can handle this better and just abstract it out for the BASIC program. That works for students, but it's very different from the BASIC that let you drop to assembly when you wanted.

"The complexity of modern systems is what makes them insecure and takes away their resilience."

I disagree. That sometimes happens, but the simplicity of early systems often also made them insecure. Complexity means that inter-process communication mechanisms can have vulnerabilities that people didn't catch before they were put into production, but before we had those, inter-process communication was unlimited, meaning that those vulnerabilities couldn't exist because everyone was allowed to do that. Before that, inter-process communication was impossible because running multiple processes was impossible. So did that make us more secure? No, it did not, because it meant that any process running could do anything it wanted to any piece of data. For example, consider the ubiquitous DOS malware that copied itself onto boot sectors and autoexec files, which could not be prevented because there were no per-file security mechanisms and no monitoring while the process ran. Complexity didn't just add more features, although it definitely did that. It also made the structures necessary to have security in a world where not every line is trusted.

doublelayer Silver badge

Re: pot?

This is not the first time that Liam has argued that files are harmful. When he wrote his eulogy for Optane, he was unhappy with its death because he hoped it would kill the file system by uniting disk and RAM, a similar approach to what he's said here. I didn't understand it then and I don't understand it now.

Files are useful. They let you store things much more conveniently than an internal structure that something else writes to storage for you. You can find a file, copy it, duplicate it, transfer it onto disks or over networks; it's the ultimate form of portability. We only managed without them because resource limitations were so restrictive, but as soon as the machines made storing separate data files feasible, people did so. Once you have separate data files, having code as data is useful in so many ways. Related to this article, early students of programming get two additional benefits from files. The first is that they can review and edit their work, even if their first attempt crashed the computer. They don't need to write down what they're typing so they can see what it was later. The second is that they can have multi-file programs, or in other words they can have a piece of code provided to them which they don't have to type in from scratch. Want to know how it did what it did? Open that file. Want to use it? Just type in an import statement as supported by your language of choice, BASIC or otherwise. No need to renumber something or manually type in that code.

doublelayer Silver badge

Re: GOTO

There is a difference between writing goto in a language where you have better tools and writing it when that is your only tool. There are lots of things in assembly that we insulate ourselves from because it was causing preventable problems. When Dijkstra wrote his famous criticism, he was talking about people who had better tools and he wanted them to use those. He was not insisting that someone write assembly without any jumps.

How the OS/2 flop went on to shape modern software

doublelayer Silver badge

Re: Developers! Developers! Developers!

"People said the same about WINE 25 years ago. If Linux could run Windows apps there'd be no market for native ones."

And I think those people were right. For two separate reasons, that didn't end up having much of an effect:

1. Not that many people used Linux desktops, so most people didn't bother to make software to work with it, either natively or through Wine. It didn't look like there were a lot of Wine applications because there weren't, but that wasn't because people chose to make native ones. They just ignored Linux users. They mostly still do.

2. The reason why Microsoft writes Linux applications: Wine was never quite good enough for companies to sell things that required it. Sure, some things ran perfectly, but many others did not. That sometimes changed. A company wanting to make a Linux variant might try making a customized version of Wine that ran their software well and then wrap their application with that version, but that was a much more substantial effort than trusting that anyone who wanted their software enough would do it for them. Had Wine been perfect and unchanging, it probably would have been used. A lot of the software that Microsoft makes for Linux, for example, uses cross-platform runtimes like .NET (admittedly, they wrote a lot of the Linux support into that themselves) and Electron, so the effort required to have one is much lower.

Most of the native software available for Linux was written for Linux first. Most of the native software for Windows was written for Windows first. A lot of the time, the authors don't bother branching out if changing the compiler options doesn't do all the work.

doublelayer Silver badge

Re: I remember reading Letwin's post

I didn't use OS/2, so a couple of those don't sound as daft to me as they do to you.

"Why weren't long filenames truncated? DOS & Windows apps couldn't see anything with longer than 8.3. That was just daft."

Because that was a limitation you're better off without. Forcing every limitation on something in the name of compatibility probably wouldn't have helped either, because then it would have been identical to Windows, and Windows, being cheaper, would also have won. When the differences make OS/2 better, it makes sense to keep them.

"Why the hell was dragging only done with the right mouse button? What conceivable benefit did that have?"

This one isn't a good thing to me, but presumably the conceivable benefit is that, at some future point, a long press on the left button could be assigned another meaning, making the mouse more capable. Of course, if they never assigned that, it doesn't end up being very useful, but that's the primary reason I can think of for doing it that way, and it has precedent because that's why there were three buttons on the mouse instead of one in the first place.

HMD Fusion: A budget repairable smartphone with modular flair

doublelayer Silver badge

Re: Pine phone

The European store is attempting to be a more retail experience, with additional post-sales support*, whereas the global one is selling at lower prices with no support included. You can get a more thorough explanation in the blog post when they announced these two stores. You can buy at the lower price from the global store in Europe, though.

* I'm not sure to what extent the support will help with software issues. Pine64 is well-known for providing hardware and letting you figure out software. Consider that before buying their products.

Apple auto-opts everyone into having their photos analyzed by AI for landmarks

doublelayer Silver badge

For actual creepiness, they're not that different, that is, if you believe Apple's statements about how they've worked to make this feature respect privacy. I actually do believe them on this one. However, the important distinction, and the reason why there are different reactions, is that the facial recognition thing runs on your device and landmark detection doesn't. Landmark detection requires that parts of your images be copied to a server, and they chose to do that copying without asking. Even though I don't think they intended to or are actually doing anything on that server that I am worried about, it isn't something they should be doing without asking the users first. Doing something similar on the device is potentially unwanted, but it doesn't involve copying off data, so it can be on by default.

Honey co-founder's Pie Adblock called out for copying GPL'd uBlock Origin files

doublelayer Silver badge

I don't think they will face any penalties. While theoretically they could be sued for not releasing the code or acknowledging its origin, and the developers of the code concerned could try to get punitive damages assessed, that is a lot of work that they will probably not go to. As for why they didn't just release the code as they were supposed to in the first place, I imagine it's a combination of laziness and not wanting people to point out that they didn't write a substantial part of their plugin. As with many such cases, they didn't get to hide that and probably would have gotten less attention if they had published it openly, but the negative attention probably won't harm them. While they are in the wrong, I wouldn't expect anything interesting to happen as a result.

With 10 months of support remaining, Windows 10 still dominates

doublelayer Silver badge

Re: Enshittification...

Except, username pun aside, most of the common complaints are not about extracting money. The taskbar isn't as configurable now? We could speculate why they did that, but it's pretty clear that it wasn't about money. It would only qualify if they later charged you to let you have the previous behavior back. Either they somehow thought that the 11 behavior is better or they thought it would take more work to maintain it and they could get away with not doing that work. The same goes for any of the AI features that nobody wants minus those few where you have a finite amount of credits. Those that are merely included are there because someone at Microsoft has convinced themselves that people want that. In my case, and likely everyone else who has posted so far, they are wrong, but they aren't extracting money for those features.

I think the hardware requirements are a much more convincing one. I make this distinction, not to defend Microsoft, but to keep around a useful term. We don't need a special word for things getting worse. People have complained about that forever and people have complained about Windows getting worse with subsequent releases at least from the switch from 3.11 to 95, probably from version 1 to 2 but I haven't seen it. A special term makes more sense when referring to the process that Cory Doctorow described when he made up the term, for example as described here. If the term is used for any change we don't like, then we'll have to use long phrases to get this more specific meaning in there, and that weakens the entire point the term exists for. You can't call attention to something well if it constantly has to be explained every time it's relevant to a discussion.