* Posts by doublelayer

9408 publicly visible posts • joined 22 Feb 2018

Mamas, don't let your babies grow up to be coders, Jensen Huang warns

doublelayer Silver badge

Re: IT is the art ...

Sounds good, as long as you're willing to play out a scene I've watched many times as a programmer: you tell me not to explain why I think this is a bad idea, I build exactly what you said to, and then the game of who to blame for it not working right starts up. Programmers can easily misunderstand what is needed, but usually, it isn't because they got a clear answer and rejected it. Usually, it's because their assumption about what a vague spec means was wrong. That is why, whenever possible, I spend a lot of my time making sure that I understand what you need and you understand what I can do and what I'm currently planning to do. Skipping or rushing that process usually produces code that neither of us wants to see.

doublelayer Silver badge

Only when I've debugged it for weeks and have started to come to the conclusion that, maybe this time, it's not my fault. It turns out that it usually still is my fault. The reason I can do this is that the compiler gives me the same result each time I compile the same code, so it is possible to change something, see if it helped, and change it back if it did not.

And sometimes, it is the fault of the tools. While I tend not to find many compiler bugs, I have found and fixed bugs in standard libraries before. It starts the same way, with me assuming that my code is wrong somehow, and when I can't see how, I start reading the library code to find out what it is doing with my input. Sometimes, I notice that it's the one doing the wrong thing.

doublelayer Silver badge

Not at all. Just as with cryptocurrency before it, Nvidia isn't invested in what the process makes. They're not the gold miners. They're the shovel sellers. Shovel sellers often do well. OpenAI and other LLM farms connected to big tech will be okay. It's the smaller AI startups that will take a hammering, and they may not even need the bubble to burst to have that happen.

doublelayer Silver badge

Re: "Jensen Huang believes"

Throughout the development of tools, we've reduced the number of people required to write any given program, and by doing that, increased the number of people who want programs written. Yes, it will probably happen again, though I'm less sure whether LLMs are good enough to do it; something will eventually start to automate the basic programs that some people have to write. I don't think the people having to write them will mind too much, both because having to write that kind of thing is really boring and because there will be more things that need writing. This is a problem for someone who wants to learn one tool and keep using it forever, but that hasn't worked for programmers at any previous point, and they should know that it's not going to work now either.

doublelayer Silver badge

Re: That's not what he said

That's just syntax, though. There are languages that can express that sort in fewer statements, requiring less knowledge of the internals, and obtain the same result. A search like that in SQL looks a lot like your sentence, except that you'd use specific column names instead of things like "the vendor" and "part number". Something could take a sentence that looks natural and translate that into the sort you want. That is just a compiler that uses a language that's closer to English. I've seen a few of those, though I tend not to like them.

What you're describing is a bit unclear. It's easy to imagine a program capable of understanding a broad concept: "I want a program to translate text from one human language to another supporting at least English and Swahili", going out and building such a thing. It's not easy to imagine how you'd actually obtain it, but that's what the hype around AI appears to be doing. Meanwhile, your version appears to involve someone being much more specific about how the job gets done, but doesn't end up bringing back all the restrictions that being that specific tends to entail. You're clearly not saying that you can have a one-sentence spec, but I'm having trouble figuring out how your spec and a program differ except that more of your spec looks like English.
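For what it's worth, the SQL version of that sentence really is close to English. Here's a rough sketch in Python with SQLite; the table and column names (`parts`, `vendor`, `part_number`, `price`) are invented for illustration, not from any real schema:

```python
import sqlite3

# In-memory database with a hypothetical parts table; the column
# names (vendor, part_number, price) are made up for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (vendor TEXT, part_number TEXT, price REAL)")
conn.executemany(
    "INSERT INTO parts VALUES (?, ?, ?)",
    [("Acme", "A-100", 9.99), ("Acme", "A-200", 19.99), ("Globex", "G-1", 4.50)],
)

# The query reads almost like the English sentence, except that we
# name specific columns instead of saying "the vendor" and "part number".
rows = conn.execute(
    "SELECT part_number FROM parts WHERE vendor = ? ORDER BY price",
    ("Acme",),
).fetchall()
print(rows)
```

The point being: the closer-to-English syntax doesn't remove the need to be specific, it just moves where the specificity lives.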

doublelayer Silver badge

Re: That's not what he said

Yes, that is a computer language, just one that has a lot of syntax options. There is a lot of that in languages already.

English text: "if 8 in list_of_numbers then"

Mathematical formula: Well, duh.

Flowcharts: I don't like those languages much and they're most often used to try to teach programming, but they exist.

Laws: You can drop one in. It will be your fault when the AI doesn't properly encode legalese into logic appropriate to your situation, since even a programmer nowadays requires a lawyer or two to do that correctly.

I get what you're trying to say, but a program that reads a certain form of input data and produces a program is just a compiler. The syntax it understands can become more and more complex, but if it's deterministically taking an input text and producing an output, it's parsing a language. In order to be deterministic, that language will have to have certain limitations on it as well. I'm not sure the distinction is as clear as you think it is.
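To make the "that's already a language" point concrete: the "English text" example above is one keyword away from valid Python. Drop the `then` and the parser accepts it, deterministically, every time:

```python
# The English-looking line "if 8 in list_of_numbers then" is one
# keyword away from valid Python; the parser still imposes strict,
# deterministic rules on what it will accept.
list_of_numbers = [3, 5, 8, 13]

if 8 in list_of_numbers:
    print("found it")

# Deterministic parsing means the same source text always produces
# the same behaviour, which is exactly the property a compiler needs.
assert (8 in list_of_numbers) is True
assert (9 in list_of_numbers) is False
```

English-like surface syntax and a deterministic language are not in tension; the limitations just become less visible.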

OpenAI claims New York Times paid someone to 'hack' ChatGPT

doublelayer Silver badge

Re: If you deliberately ask for it…

It doesn't matter how many hoops I jump through. If I find that a certain set of queries to OpenAI's servers gives me the source code to GPT, that does not make it my source code. I had to jump through a lot of hoops to retrieve it, but my willingness to take that action does not change the legal situation I'm in. The law considers the results and the intent, not the bit in the middle. Taking the code or the articles without permission is illegal. The only reason why getting the quotations is necessary is that OpenAI would, without them, be willing to lie that they were not in the training data. This action does not create or cancel the crime. It only provides evidence of it.

doublelayer Silver badge

Re: If you deliberately ask for it…

The one who gave it to you without the permission to do so, and if you don't have permission to have it, you too. The newspaper does have permission to have their own articles, so in this case, it's just OpenAI who are infringing.

doublelayer Silver badge

Re: In summary....

They need the regurgitation demonstration to prove it unequivocally to the court. OpenAI and its adherents have frequently come up with arguments as to why their output is the result of convergent evolution, it's just like any brain, etc. That is more difficult to argue when the quotations are direct. The issue is as important whether the regurgitation is total or partial, but that may not be as obvious to everyone, so showing that it can be made to quote the whole thing demonstrates the violation more clearly.

Security is hard because it has to be right all the time? Yeah, like everything else

doublelayer Silver badge

Re: Puzzled!!

In principle, this is true, because there are different security requirements based on what the likely consequences of compromise are. However, I still have to disagree with you because it's the argument that people always bring out. What they say is "the potential damage if anyone breaks into this is so small that it's not worth building it more securely". What they mean is "building it more securely would take more time or money and I don't want to".

The world contains a lot of people who assume that security isn't their problem and rationalize it using the same arguments you make. I work in security, so this bothers me a lot. It reminds me of a colleague I had when I was just starting my career who approached all questions of security using the following criterion: if a nontechnical person could break into it, then we should secure it (e.g. if the system logged in automatically or if the password was "password", then I could do something). Otherwise, since it wouldn't stand up to a concerted attack by China (it was always China), why bother to do more? Something as basic as encrypting drives on laptops was dismissed as unnecessary because his concept of a nontechnical attacker wouldn't know how to get around that, so as long as the laptop asked for a password before logging in, it should be good enough. I don't think you have the same mindset as that guy. I think the argument you make makes it too easy for that kind of guy to rationalize his conclusion.

'How do I reset my router' isn't in LLM corpuses. An alliance of telcos wants to change that

doublelayer Silver badge

Re: I don't see this working...

True, but after you've done that, you need to log in and reconfigure it. I don't envy customers who don't know better the experience of being given mangled instructions by a chatbot for configuring an ISP-supplied modem. I pity their technical friend or relative even more. Having recently spent hours on the phone to a friend's ISP's customer service to get them to allow the modem to talk to their network, even though I could see that it had already managed to communicate and be issued a DHCP lease, I know the chatbot won't be doing that for anyone.

Google to reboot Gemini image gen in a few weeks after that anti-White race row

doublelayer Silver badge

Re: conscious effort to create models that produce diverse and fair results

What gave you that idea? I don't just mean about diversity, but about anything. These models don't exist to give you reality, but to give you what you are likely to want. You ask for a picture, and they assume you want one tailored for pleasantness or picturesqueness, not realism. Their training data has a lot more pictures tagged "tree" showing trees people thought looked nice against a pleasant background than dead trees against boring ones, even though there are probably a lot of boring dead trees to be found in reality.

doublelayer Silver badge

"The machine is genuinely colour blind. It can't be sexist, or racist, or any other kind of ist because it's not aware of any such concepts."

It doesn't have to be aware to have that outcome, in either direction. They're feeding in unverified junk from the internet, which is made up of all of us. A number of us are prejudiced in some direction, and if we just let the bot show us what it saw, it could easily reproduce that. It doesn't matter that it's showing that prejudice because it ingested some input from someone online rather than being programmed deliberately to do so, at least it doesn't matter to anyone affected by what it does. This experience demonstrates it as clearly as the opposite: it can be trained or prompted in such a way that it produces incorrect information without that being the intended result.

Work for you? Again? After you lied about the job and stole my stuff? No thanks

doublelayer Silver badge

Re: Being polite is great

"What, you don't think computers should act the way *we* would like them to,"

Not really. Basically, if you want the key to mean advance to a new line but don't actually add an explicit line break, then write your own program to do it. I have no problem if you choose to do that. If you choose to use the tool that's already there, spend a little time learning how it works instead of demanding that it do something there is a perfectly good reason for it not to do. As a programmer, I'm happy to explain how it works. I'm happy to explain why it does that. I'm happy to consider why an option to do things differently could be useful in a situation I hadn't considered. I am not happy adding lots of option switches for bad reasons then trying to work out all the other changes needed to accommodate a preference that the user could get around by spending a few minutes learning how it works.

doublelayer Silver badge

Re: Ah, Joy

An email summary that gets sent to everyone? You're stuck in the first decade of this century (lucky for you). Nowadays Grandad, we use bespoke web platforms that store every tiny detail of a project in its own task page which has about a hundred buttons and twenty text boxes for different details, none of which has anything in them because filling in a big description of tasks that will take you an hour to complete is pointless. You're supposed to open, close, and update them yourself, but nobody does it because, even if you do, there will be a meeting where someone goes through every one on the board, one by one, and takes a few minutes to hit the resolve button. They're designed to look like they could possibly be useful if they were used differently, but I'm not sure whether that's ever happened.

Are you ready to back up your AI chatbot's promises? You'd better be

doublelayer Silver badge

Re: Disclaimer

True, but it is where the legal ambiguity starts to come in and, when it does, the average consumer starts to worry about proving what could turn out to be a simple case. For example, in this case with the broken phone, the charging port was working when it was new, and now it's not. I don't know exactly why as it wasn't mine, but it probably wasn't someone pounding it with a hammer. I could try to suggest it's the manufacturer's poor workmanship, and they can try to prove that it was caused by user negligence. The typical user looks at all this and decides that, since this was a cheap phone, it will take so much effort that, even if they win, they've probably spent more than paying for a repair or replacement would cost, and they're not confident they would win anyway. That is how companies can use disclaimers, even invalid ones, to blunt consequences. The only way around this is if the disclaimer is ruled invalid and they are forced to remove it entirely. If some company can find a wording that the court accepts, everyone will use something similar.

doublelayer Silver badge

In a way, since your company would probably sue you for doing it. It wouldn't get them out of their liability, but you could still face consequences. I'm fine if Air Canada wants to try suing their chatbot provider for that to recover the costs. It probably won't work, though.

doublelayer Silver badge

Re: Disclaimer

Of course you do if you can hide the disclaimer and have no humans. If they find some set of conditions under which a court allows them to lie via chatbot and not have any consequences, they can use those conditions. Anyone wanting information may try to call someone, but if they only give them the chatbot, then many customers will use it because it's the only option. The customers would be annoyed and some might try not to buy from them, but that doesn't seem to have stopped a lot of companies today who have the bare minimum of customer service.

doublelayer Silver badge

Re: "In a few years, it will be a different story."

I agree with you there. If they did it, it would have to be done with some other type of technology, and it's unlikely to be able to do it with perfect accuracy. However, I still think that one of these AI companies might eventually realize that they need it and try to build such a thing. They may fail to accomplish it, but for the moment, nobody is even trying. This is assuming that enough people rely on incorrect chatbot answers and suffer the consequences, so I'm hoping that rulings like this continue to happen. If users manage to find a way not to suffer when they use a chatbot's answer to screw over another, my assumptions could prove wrong.

doublelayer Silver badge

Re: Disclaimer

It depends if a court accepts it. If they basically say that the chatbot can say whatever it likes and they don't have to honor any of it, then it can still look like it's providing customer service and leave every user with the results.

Such things are not uncommon already. I was reminded of this recently when I was asked to repair a phone with a broken charging port which had been purchased only months ago. The warranty attached to it had so many different reasons why something wouldn't be covered that, as far as I could tell, the damage was not covered. The customer service person said that it was not covered, but not why. What I can't figure out is what kind of damage, other than maybe the phone being broken before anyone touched it, would have been covered by the warranty. Yet a customer buying the device would think that they had some kind of protection anyway because the warranty existed, and surely they wouldn't have a document if it meant nothing.

The average user will probably never see the disclaimer and assume that, when the chatbot the company chose to put there gives them some information, it is valid. It is possible that a court will overturn that and invalidate the disclaimer, the same way that I could probably have tried to challenge the warranty, but most users will not try because there is a good chance that it won't work and they'll end up wasting time and money in the attempt.

doublelayer Silver badge

Re: So then.

All the scripts would get very long. Instead of a quick answer to your question (possibly wrong), you'd get a two-page answer to your question (possibly wrong). A lot like what you get if you try to find the information online.

doublelayer Silver badge

Admittedly, its first answer could be due to the vague question. "the VAT paid on £500" could mean £500 total expenditure including tax or £500 before tax. If I were asked the question, I'd probably ask for clarification. If I couldn't have it, I'd use context to guess which was wanted, for example using a before tax amount if the person asking was a seller and after tax if it was a buyer, but that isn't a guarantee of anything.
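To show how much the two readings diverge, here's the arithmetic, assuming the UK's standard 20% VAT rate:

```python
# "The VAT paid on £500" has two readings at a 20% rate.
VAT_RATE = 0.20

# Reading 1: £500 is the price before tax (a seller's perspective),
# so the VAT is simply 20% on top.
vat_exclusive = 500 * VAT_RATE                   # £100.00

# Reading 2: £500 is the total paid including tax (a buyer's
# perspective), so the VAT is the tax fraction of the total.
vat_inclusive = 500 * VAT_RATE / (1 + VAT_RATE)  # £83.33

print(f"£{vat_exclusive:.2f} vs £{vat_inclusive:.2f}")
```

A £16.67 gap from one ambiguous phrase, which is exactly why a human would ask for clarification first.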

doublelayer Silver badge

Re: "In a few years, it will be a different story."

I wouldn't be surprised that OpenAI will eventually try. They've made it so far by having a bot that can print coherent sentences, and they've succeeded at convincing some companies to pay for it. I think that ascendance will eventually break when the inaccurate results become bad enough. If they can see this coming, they might start focusing on getting some accuracy out of it, not just hiding their training data. The money they've made from sales so far and from massive investments from Microsoft should allow them to contribute resources to the attempt. I don't know how easy that will be, and they may try to get it and fail, but I do think they or someone like them will try.

Judge slaps down law firm using ChatGPT to justify six-figure trial fee

doublelayer Silver badge

Re: Second time in two days ...

People can take some levels and forms of unreliability. For example, when they turn on their navigation systems, they know there's a chance it might send them the wrong way, but they also know that most of the time, it is correct, and often if it is not, they can use their eyes to detect it and their physical control of their vehicle to prevent it from causing problems to them, the odd story about a person deciding to drive directly into a lake excepted. If the navigation system made errors like that on a significantly larger set of trips, people would be more cautious about using it, as they are in areas where map data has not been updated. If those errors were not immediately detectable and frequently led to wasted time or risked lives, it would be dangerous for users to rely on it.

GPT's reliability and accuracy is along those lines, since, as you know, it is frequently wrong and it doesn't make that obvious unless you already know it. People who trust it are making a mistake, and people who use it in such a way that others are depending on GPT-derived unreliable data are harming others. I have been clear about this, and others have stated their opinions. Your repeated assertion that we must be saying these things out of fear is beginning to confirm my original assumption that you know what we think, cannot respond to it except by trying to understate the inaccuracy of GPT, and are resorting to making up a different reason and putting it in our mouths. I can only tell you that, if you sincerely think I and many others here are actually fearful of GPT's effect on our jobs, you are profoundly mistaken.

doublelayer Silver badge

Re: Second time in two days ...

My point is that you usually can't rely on the answer, and this is one of, though far from the only, reasons why not.

doublelayer Silver badge

A bad workman blames his tools because he uses crap tools. A good workman does not need to blame his tools because he chose the right ones. GPT is not the right tool, and using it indicates that something is going wrong in the researcher, and if they're not exceedingly careful, the research. Similarly, a researcher who asks someone who has no reason to be trusted as an authority is also doing research wrong, even if they put little chunks into a search engine and find a page that says something that looks similar. They actually have to understand what they're reporting on to do research correctly.

doublelayer Silver badge

Re: Second time in two days ...

What you appear not to be taking into account is that GPT changes a lot. It's not just because the algorithm has a lot of random elements, but that there is a lot of premade prompt material which is added to your prompt before you start, OpenAI changes it frequently, and you can't see what it is. What one prompt produces today may not be generated, even similarly, by the same prompt a day from now.

This "what is a computer" example is a stark demonstration of the effect. It did actually happen. OpenAI verified that it happened. This is not supposition. It's working now because people posted that and other examples making it clear that someone had flubbed something very important, causing a lot of prompts to produce gibberish. The same applies in other cases. If I post a prompt which returns completely made-up information and you run the same prompt some time later, you may get the same inaccurate information, an "I don't know" response, completely different inaccurate information, or even correct information. There's really no way to tell. Asking people to post prompts, then pointing out that you can get something different, suggests that you might not know what is happening when you do so.

Since we were discussing it in a thread above, this inconsistency is one of the reasons why I have a problem with the use of GPT when accuracy is important and one of the reasons I thought you might be trolling. Even getting a useful answer once doesn't guarantee that you can get a useful answer later with the same prompt.

doublelayer Silver badge

I will try to explain, assuming that you really want to know our views and aren't just trying to troll. First, I need to clear up some problems with what you asked. We aren't saying what we do because we're afraid of GPT, at least I am not. I do not fear it, and I'm not worried about losing my job to it. Call that overconfidence if you like, but that is not a consideration for me.

My problem with GPT and models like it is that it is so frequently wrong that it is bad to use it for situations where accuracy is important. Most of the time, accuracy is very important. I have had similar problems with many things before. For example, I know some people who, when they're feeling a bit sick, will try to use Google to diagnose themselves. Their arm aches because they did more work with it than they usually do, but they're now worried about cancer. That is something I'd rather they not do. It's not that any particular page is lying to them, but that it's not the correct tool for what they want done. GPT is fine if you want something fictional generated, and it may work if you want a virtual conversation, but it is not a good tool for research.

I do not see it as similar to Wikipedia at all. Wikipedia gave you lots of details with the possibility that some of them were wrong or that some important information may be missing. GPT gives you summaries with the chance that everything it's saying is rubbish and no way to get more information. I have tried it numerous times and, whenever I ask for details that aren't commonly found, it either fails to find them when a simple search could turn them up or it makes them up. Someone who relies only on Wikipedia is likely missing something, whereas someone relying only on GPT is probably going to do something catastrophically wrong sooner or later.

Space nukes: The unbelievably bad idea that's exactly that ... unbelievable

doublelayer Silver badge

Re: There's no way there aren't already nukes in space

"Is "insure" vs "ensure" a UK vs US ( two countries divided by a common language) thing?"

According to my experience with the language and a quick check of a couple dictionaries, no. The spelling and definitions are the same. It was probably just a typo.

doublelayer Silver badge

Re: Game Theoretic Analysis

"[leaving the button disconnected] the one tiny -- yet potentially, oh, so far-reaching -- act of spiteful disobedience within their power that is certain to go undetected, let alone punished."

That's not true either. People check those. Unless everyone involved thinks exactly as you do, someone will notice that, whenever you have checked the button, it's inoperable. That's called sabotage and it is punished rather severely when what you're sabotaging is sensitive military equipment. You might be able to get away with it the first time with people assuming you were incompetent in repairing it, or even that you were incompetent and didn't notice the fault when someone else created it. They'll teach you how to fix it and watch that you fix it. It won't work the second time. At best, they'll decide you're incapable of doing what you're supposed to and fire you. There are worse options available. So practically, it still won't work.

By the way, there is not one button, but a bunch of launch controls. The place that fits your description is the launch controls on a missile or on the warhead itself. The warhead is likely not something you can sabotage because they're manufactured in the same place and at least one will be investigated, so you can't successfully sabotage all of them by working at the factory. Your best bet is to prevent a missile from firing, prevent the missile from activating the warheads on it, or make them fail in some other way. Even if the military doesn't check things, you can only do it to the subset of missiles you maintain, not all of them. It could make the attack weaker, but unless you've got a large group of people who all think the same, it's not going to prevent it from working. Whether someone tries or not, it's not going to work as well as you think it is.

doublelayer Silver badge

Re: Game Theoretic Analysis

That analysis only works if the main, and likely only, thing you care about is whether people think you are the baddie or not. There are lots of other ways to run it, not necessarily correct, but plausible enough that someone might. For example, a few options include:

Option 1:

* If the other side has already fired, and their launch was successful, what remains of your country will likely be conquered. You don't want them to conquer you. You want them to be unable to do it, so attack now.

* If the other side has already fired, and their launch was unsuccessful, then they started a war and you should defend yourself from their next attempt. Also, they were willing to kill lots of people, so they deserve whatever they get.

Option 2:

* If the other side has already fired, and their launch was successful, I'm dead. What do I care what history thinks?

Option 3:

* If the other side has already fired, and their launch was successful, then they might try the same thing on others. If they are unable to do it because you counterattacked, others will be safer. It might even convince someone else not to launch an attack because they can see that nobody wins when you do.

In any of these options, it is easy to justify a retaliatory attack. Justifying a preemptive attack is harder, but there are ways that people have done it. I'm not asking you to agree with any of these, just to recognize that others could easily believe them. If they do, then you have a risk.

doublelayer Silver badge

Re: That was my thought, too.

So you took something which means this:

"Basically, everyone who signed a treaty that said they were going to protect Ukraine have decided to ignore that, including one who decided to destroy it. Looks like that treaty is void now."

And turned it into this:

"Let's make nuclear weapons!"

It is true that the treaty said they would not have them, and that treaty being void means they are allowed to make them if they want, but it doesn't automatically follow that, because their legal impediments have been removed, they necessarily will. I have a lot of rights to do things I don't intend to do. Then again, if they did decide to try, I won't be surprised. Building up an arsenal would be difficult work, but I can't blame any Ukrainian for thinking that the treaty was useless anyway, which it was, and deciding that nukes are the solution. I hope they don't decide that, but it isn't that illogical.

Lender threatens to sweep MariaDB accounts over private equity bid

doublelayer Silver badge

It looks like it is. The current price appears to be about $0.35 per share for a market cap of $24.7M. It seems to have decreased significantly. For example, the highest price in the last year was about ten times that.

Americans wake to widespread AT&T cellular outages

doublelayer Silver badge

I think it's likely a combination of someone being in an area where other providers' signal is weak and having a phone that lacks some of the bands those providers use there. If someone on a different provider would be using a band that wasn't included on the device because it was built for AT&T's network, they would be unable to use that band to connect to emergency services. This would explain why it was just some customers, not everybody in the city, who couldn't connect. It would be easier to demonstrate if we checked the devices that didn't work. Most flagship devices have a lot of bands, but some cheap ones, especially the cheap ones sold under the providers' brands, go for the bare minimum.

ChatGPT starts spouting nonsense in 'unexpected responses' shocker

doublelayer Silver badge

I agree with you on getting explanations for why you said what you said, although that's far from perfect, but I can't agree with this bit:

"Humans are far more debuggable right now because we have 50,000 years of learning to understand each other."

And despite that, we can almost never debug something even when we've decided it's going wrong. I can't tell you why interactions between humans go wrong except in broad terms, and I can't go in and fix them. I can't explain why there are dictators who prefer to kill people even when they don't have to, nor why people support them, but they do. Those are some pretty big bugs in my opinion, but we have no chance of fixing them any time soon. It works at the micro level as well. There are lots of mental disorders that don't have an established debug process. The best we can get in many cases is a mitigation, and those often fail. We have psychologists, neuroscientists, and to some degree geneticists working on that, but those efforts will take a long time to improve.

An LLM, despite its general uselessness, is easier to control. If OpenAI doesn't want it to quote copyrighted material, they can significantly limit that behavior, although they can't eliminate it entirely. If you want to stop yourself thinking something, it will take significant effort, or may prove impossible. The LLM is easier to debug.

Staff say Dell's return to office mandate is a stealth layoff, especially for women

doublelayer Silver badge

Re: This is why we can't have nice things

It might be if the job might require you to go somewhere. If, for example, I was the person they'd call if they wanted something physical changed, then I'd have a job requirement to go where that thing was in the time they expect. If my job involved going to in-person meetings in a certain location, they might have a similar requirement. Which house I live in is none of their business, but the general location can be. I don't know whether these considerations apply to the commenter above, but they might.

Persistent memory to replace DRAM, but it could take a decade

doublelayer Silver badge

Re: ARRRGHHHH!

Why do you want it? Remember that fast and nonvolatile probably also means small and expensive. Are you taking those tradeoffs into account, and if you are, why does existing fast nonvolatile storage not work for the use case?

doublelayer Silver badge

Re: Its gonna be hard to supplant DRAM

That's not really what I intended. That's RAM caching, which is already available and works fine, although if you don't have a lot of RAM, the caches will be rebuilt frequently. I was trying to talk about the initialization process of a typical program today, which involves copying its machine code into memory, then creating a lot of runtime data. Much of that data is loaded verbatim from files on disk, but some of it is calculated at startup. Theoretically, you could precalculate any static data and store it in the binary instead of computing it when the binary starts, which would decrease loading times a little. However, it won't reduce them by much, and it's already an option.
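A minimal sketch of the trade-off described above, in Python. All names are hypothetical; the pickle blob stands in for a data segment baked into an executable at build time, which is not how a real linker would do it, but the point is the same: the precomputed copy saves only the computation, not the loading.

```python
import math
import pickle

# Startup cost today: runtime data computed every time the program loads.
def build_table():
    # e.g. a lookup table a program might compute during initialization
    return [math.sin(i / 100.0) for i in range(1000)]

# The "precalculated" alternative: compute once at build time and ship the
# serialized result inside the binary (here, a pickle stands in for a data
# segment baked into the executable).
baked = pickle.dumps(build_table())

# At startup, loading the baked copy replaces the computation...
table = pickle.loads(baked)

# ...and yields identical data, so only the load-vs-compute cost differs.
assert table == build_table()
```

Either way, the program still has to get the bytes from somewhere persistent, which is why the saving is modest.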

doublelayer Silver badge

Re: Its gonna be hard to supplant DRAM

They can do that, but it won't change how mass storage is used. For example, when a file needs to be expanded, it's still going to be fragmented rather than treated as a big string in memory, and to manage that, they'll still need to track which areas of mass storage can be written to across programs. And we've reinvented the filesystem.
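A toy illustration of that point (all names hypothetical): the moment programs share one flat persistent space, something has to track free extents and which extents belong to which "file", and a growing file ends up fragmented across non-contiguous extents. That bookkeeping is exactly a filesystem's job.

```python
class FlatStore:
    """Toy allocator over one flat persistent address space."""

    def __init__(self, size):
        self.free = [(0, size)]   # list of (offset, length) free extents
        self.files = {}           # name -> list of (offset, length) extents

    def append(self, name, length):
        """Grow a file using the first free extent that fits."""
        for i, (off, ln) in enumerate(self.free):
            if ln >= length:
                self.free[i] = (off + length, ln - length)
                self.files.setdefault(name, []).append((off, length))
                return off
        raise MemoryError("no free extent large enough")

store = FlatStore(100)
store.append("a", 30)   # "a" gets the extent at offset 0
store.append("b", 30)   # "b" gets the extent at offset 30
store.append("a", 20)   # "a" grows: new extent at 60, not contiguous with 0

# The grown file is fragmented, just as it would be on disk.
assert store.files["a"] == [(0, 30), (60, 20)]
```

Add naming, permissions, and crash consistency to this and you have rebuilt a filesystem in all but name.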

doublelayer Silver badge

Re: And security?

The problem with that is that there isn't a great place to do it. If this hardware takes the place of your main RAM, it's not like decrypting disk data and caching it in part of RAM. You can decrypt it as it goes from RAM to cache, but you'll find yourself using a lot of processing time just encrypting and decrypting over and over again, since your CPU cache is tiny in comparison. If you had an encryption coprocessor doing that, maybe it would work a bit better, but we're getting to a point where I have to ask what benefit would justify the price of adding those things and taking the performance hit. I'm not sure what you get by going to that effort, and if buyers aren't sure either, it won't sell.

doublelayer Silver badge

Re: Its gonna be hard to supplant DRAM

It would be faster, but the same could be accomplished by building a faster SSD with the same stuff. Basically, it's Optane again. You could save a bit of time by changing the executable format you're using from one that loads itself into memory and does a bunch of initialization to one that has already initialized and just stores and loads that state, but the programs will still have to load any configurable state from somewhere persistent which will likely still involve loading files from some kind of storage and processing their contents. I'm not sure loading speed would be different enough for people to care if the price is much higher.

Duo face years in prison over counterfeit iPhone scam

doublelayer Silver badge

Re: A bit harsh?

They probably won't get that. Maximum sentences are rarely given unless it's an extreme offense or the criminals concerned have done this many times before. People charged with this who stole orders of magnitude more don't get that sentence, so I doubt these guys will either.

Cutting kids off from the dark web – the solution can only ever be social

doublelayer Silver badge

Re: Chilling Effects

"If phones were age restricted parents would not be handing adult phones down to their children."

Yes, they would. They wouldn't be supposed to, but they'd still do it, because their old phone is free, a child phone is not, and a lot of parents today don't see any reason to bother adding restrictions to devices they give their children. Unless there was someone to check what phones a child was carrying, banning the practice would not prevent parents from handing them down. This is a reality that will affect any attempt to restrict what devices are used, should anyone try to enact one.

doublelayer Silver badge

Re: Chilling Effects

During my childhood, many people my age had access to alcohol. This increased exponentially with age (I don't know anyone who had a lot of access at 12, for example). I did not consume it, but I know some did, and there was likely some harm as a result. So yes, it does fit the pattern in that there's some effort to prevent it but it is not that successful and it is not treated the same way as murder or any other violent crime. So if the law was written to do it, expect a lot of leakage. In fact, expect even more access than with alcohol because there's an important factor. When an adult uses some alcohol, it's gone and children can't have it. When a parent uses a phone and replaces it, they may have an old phone that they plan to give to the child. That's going to introduce many adult phones to children.

This is all assuming that you want that. I do not. I don't want there to be a law requiring locked-down devices to exist, nor one that restricts what technology children can have access to. If that happened, something as simple as a Raspberry Pi would probably have to be denied to children since it is capable of accessing Tor, and neither denying it to children nor restricting it are acceptable options in my mind. So even if you can find a way to make this work, I will still oppose it.

Euro shoppers popping more and more premium phones in the basket

doublelayer Silver badge

Re: “it's very difficult to measure the replacement cycle at any given point.”

They don't have to be, but if you want to install apps, then you'll have to sign in at least to the app store. That may change now that the EU's DMA restrictions are coming, but until now, a phone without an Apple account was limited to the built-in apps. Those probably do enough for many users, since they include phone, SMS, email, navigation, calendar, camera, and a notes app, but I expect that most users had some app they wanted and signed in to get it.

doublelayer Silver badge

I would say you're being charitable to the point of incorrectness there. Project Treble didn't do a whole lot. They have modular security updates now, but that doesn't help you if the manufacturer doesn't push them. You can't go and install them on the main image yourself. You are still at the whim of the manufacturer on whether and when you'll get those patches, let alone actual feature updates, and the only change there is that some, and only some, manufacturers have started to realize that people want them. So if you buy a Samsung device, you can get 4-5 years of security updates but that doesn't apply to most others.

As for Lineage OS, it will certainly make the software life of a supported phone nearly infinite, or at least long enough that the phone is likely to have some serious hardware problem before Lineage stops supporting it. The problem is that your S10 is one of the few devices that will get it. You can't install it on the S20 or any later flagship, and if you bought an A-series budget or midrange device, you're almost guaranteed not to have support. You can try to build it yourself or use an unsupported version built by someone on XDA, but that's not guaranteed either and it doesn't necessarily give you what you wanted. I'd actually say that the custom ROM scene for Android today is weaker than it was ten years ago. The ROM you get with the device is a bit better, which softens that blow a little, but someone used to the active community will probably be disappointed with the options they have today: a bunch of great projects that would provide what they're looking for, none of which run on any device they have.

doublelayer Silver badge

It's worth checking, but don't get your hopes up. Unless it's a Fairphone or Pixel, it's likely not supported. They still have builds for many old phones (the Samsung Galaxy Note 3 is still on the supported list even though it's over a decade old), but most things you would have bought in the last four years are not on the list. Even modern Pixels aren't on the list, but there's a reasonable chance they might be added.

doublelayer Silver badge

Re: Unscientific anecdotal sample

From my experience, using iPhones to stretch the time between having to buy new ones, I haven't noticed this too much. The newer operating system versions usually don't task the hardware too much, but one reason they might is Apple's CPU throttling when the battery gets old (yes, they're still doing it, it's just that they admit it now). That can show a noticeable decrease in performance, though for my use cases, it hasn't been a problem, but my uses are relatively light on CPU usage.

As for apps requiring the latest version, I have seen it but quite rarely. It is not like Mac OS, where basically everything does that, especially everything from Apple, but it can happen. This is slightly mitigated by the fact that, if there is a new version out, the iPhone I already have is almost certainly capable of running it because they get the update at the same time as every other supported model. If someone is intentionally trying not to install a new version, though, I could conceive of this being an annoyance though not a widespread one.

Apple makes it official: No Home Screen web apps in European Union

doublelayer Silver badge

Re: Did anyone ....

It depends what the courts think "service" means. If it's PWAs only, then your logic works. If it's the operating system in general, not so much. They can point to the fact that PWAs are still supported outside the EU, not inside the EU, and say that the operating system's compatibility has been degraded. I don't know that they will, but if we're working on literal meanings, that one fits as well.

Dave's not here, man. But this mind-blowingly huge server just, like, arrived

doublelayer Silver badge

Re: What happened to the server?

However, the information from the person who stole the server, indicating that the people were engaging in two illegal activities (selling illegal substances and receiving stolen goods), would probably make it much easier to obtain the warrants necessary to search. Moving the server would help the criminals in this case, but that relies on their having somewhere else for it to go, because I doubt they could have found a buyer quickly enough to be rid of it entirely.