* Posts by doublelayer

9408 publicly visible posts • joined 22 Feb 2018

AI stole my job and my work, and the boss didn't know – or care

doublelayer Silver badge

Re: "Stored in a retrieval system"

The analogy is not exact, but it is closer than you're imagining. Decompilation is not as simple as it's painted. There are many ways to take a machine code file and get some source code that, when you run it through a compiler, gives you the same or similar machine code. Sometimes, even that fails. However, you tend not to get source code that you want to read, let alone modify and put back into production. There are some languages where that's different, and there are binaries that leave in a lot of extra data that makes this easier, but since that data is irrelevant to the functioning of the program, we can't count on it being there.
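A toy demonstration of that information loss, using Python bytecode as a stand-in for machine code (my own example, nothing from the comment): the compiled artifact still runs, but the comment, being irrelevant to the program's behaviour, is simply not in it, so no decompiler could give it back:

```python
source = (
    "def area(w, h):\n"
    "    # this comment is irrelevant to the program's behaviour\n"
    "    return w * h\n"
)

# Compile to a code object, a rough analogue of building a binary.
module = compile(source, "<demo>", "exec")
namespace = {}
exec(module, namespace)
area = namespace["area"]

# The compiled function behaves identically to the source...
assert area(3, 4) == 12

# ...but the comment text survives nowhere in the compiled artifact.
artifact = str(area.__code__.co_code) + str(area.__code__.co_consts)
assert "irrelevant" not in artifact
```

Real machine code is far worse off: Python code objects still keep variable names and line numbers, which is exactly the kind of "extra data" a binary may or may not leave in.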

Similarly, there's no simple way of taking a model and cracking it open to get copies of its training data. Some of the data isn't there, and what is there isn't stored in any convenient way. That isn't a guarantee that it's not present. In many cases, LLMs quote from their training data on request. That's more likely to happen with a large model than a small one, which is not really a surprise. Large models also tend to be the more useful ones, though. Even if the quoting is not exact, that doesn't really matter. A poorly OCRed copy of something I don't have the right to copy is still infringement.

You don't need simple byte-for-byte recovery to violate copyright. In fact, you don't need to reproduce at all, and that's where the analogy makes more sense. Getting code and compiling it is copyright infringement even if I never give it away. Chances are that if I have those things, you're not likely to catch me, but the violation doesn't cease to exist just because I got away with it.

doublelayer Silver badge

"don't pass a law that can't be enforced. The problem with AI training is there's nearly no way to know what material it's been fed from looking at the output."

They're not asking for that. They're asking that the training data be identified from the source, which is much easier. Anyone training an LLM knows exactly what they trained it on, and the only reason they might not know where they got it is if they're being sloppy with the record keeping. Now there are a few situations where they might have an extra layer, i.e., they scraped a book from a pirated copy on a different site rather than from an original source, but they would at least be able to identify the pirated copy and its location. That can be enforced. It won't be, but it can.

Google is a monopoly. The fix isn't obvious

doublelayer Silver badge

I'm not sure I understand your argument. In the situation described, most of the Google employees that are driving Android somewhere are still there, still working on Android, and still being paid to do it. Their employee records just don't say Google on them.

I don't think a consortium of manufacturers alone could manage to run Android successfully. While I think there are enough of them that would understand why they can't go off on their own, a slow process would gradually cause one manufacturer after another to try doing it all themselves in order to have software that their competitors haven't got. However, a consortium of manufacturers funding a separate company might be harder to disassemble because the manufacturer that splits off would no longer get the work funded by the others. I base this assumption on the fact that it's basically what Google already does; while I don't know how profitable the Android unit might be, they do get revenue from OEMs and they have prevented more companies from pulling an Amazon and trying a fork. Most forks of Android are one of the following:

1. Ones for China where most Google services don't work, forks I don't want to be running.

2. Custom, made by the community, not really a competitor to any manufacturer, such as Lineage or Graphene.

3. AOSP builds that only exist for unusual hardware, posing no competitive threat.

4. Something like Amazon Fire OS, which could theoretically be a competitor, but it's not being developed as quickly, so if it were truly forked, it wouldn't go very far, and since it hasn't been, it has to keep coming back to get the changes to AOSP to keep itself from becoming obsolete.

doublelayer Silver badge

In this scenario, Google wouldn't exist anymore. Various pieces of Google's software would now be Android Corp property. For instance, Google Play Services would not be a Google product because it's part of Android. It would now be Android Corp Play Services or something else. Many of the things you're thinking about that are missing in AOSP would still be closed-source, but Google would no longer control them.

That wouldn't necessarily be true of all the preinstalled Google apps. The Gmail app, the search app, the Maps app, those might have gone with other parts of Google. The ones that are foundational to Googly Android, though, would be part of the Android company that was spun off.

doublelayer Silver badge

Android is one of the easiest ones to imagine. They already get funding from device manufacturers. The new Android Corporation would probably still have important things like Google Play Services, whatever they renamed it as, so they could continue to sell licenses to hardware OEMs and use that to fund the OS. I'm sure Google chipped in some extra funding which they won't get anymore, but Android Corp doesn't need to do any of the data collection things that Google asked for, so their workload will decrease as well. If it was failing, some manufacturers might fund the continued development so they still had software for their phones. That depends on manufacturers having gone through enough variants to know that making their own OS with their wonderful new features but without compatibility with phones made by other companies is a bad idea. It's not a guarantee, but compared to many other branches of Google, Android is one that could probably work if it was split off tomorrow.

I would put search, ads, and possibly YouTube and GSuite in that category. I would not put Chrome, Maps, Gmail (the individual one), or many of their more popular products in there. Of course, there are many more services that we could put into one category or another.

doublelayer Silver badge

Re: Perhaps the buying and selling of businesses should be outlawed

It won't work. There are lots of situations where a company can buy another company that don't result in monopolies. The restrictions are usually placed just on buying the competition, and even then there are times where that is necessary. Many of the situations you describe probably wouldn't work out differently if buying the competition was banned. Instead of driving a company into a tricky situation, then buying them, a powerful competitor could just drive them further into the situation until they couldn't operate, then set up shop wherever they pulled out.

Regulation is needed, but it won't be something that can be easily stated in a paragraph. There is no quick fix that will work on Google, let alone all companies everywhere.

doublelayer Silver badge

I don't think that's how it translates. Using your analogy, I'd translate it as this: "as a hostage, I don't want you to annoy the hostage-taker too much because they might split into two smaller hostage-takers each more evil than the original one, both of them holding me hostage". I suppose the analogy doesn't work very well.

The concerns are relevant. If you split Chrome off from the rest of Google, you've now got a lot of people who make a browser and no money. Google only makes a profit from Chrome due to their advertising and data collection, and both of those wouldn't be there anymore. All the Chrome developers want to keep their jobs and get paid. How can this be accomplished? A lot of the possibilities are bad, even worse than what Google does with it. Perhaps the best approach is for Chrome to simply cease to exist, with the wreckage picked up by Microsoft's Edge team, whoever makes Brave, and Mozilla or Apple hiring some people to work on a different browser. There are worse options, though, like everyone's data collected from Chrome being sold to the highest bidder, then again to the next highest bidder, then on down the list. As bad as Google's collection from Chrome is, they're not doing that. Thus, preventing stuff like that should be part of any good breakup process, to prevent the parts from causing even more damage. It's not about being nice to Google; after all, the whole point is that Google will cease to exist. It's about limiting the damage to users when it happens.

Shots fired as AT&T and Verizon ask FCC to block Starlink's direct-to-cell plans

doublelayer Silver badge

Re: Is Starlink disrupting or is AT&T trying to close the market?

The Chinese won't be interfering with US signals because, unless they get an FCC license, they have to turn off their radios when over the US. If they don't, then the FCC will prohibit people from buying their services. So no, the Chinese will not be ignoring FCC rules over the US. Over the open ocean, probably. Over China itself, maybe, but they've got mobile phone companies too and they sometimes fight.

doublelayer Silver badge

The second example is faulty because it expresses a quantity, not a comparison. "A quarter of the people agreed" says that, out of x people, x/4 of them agreed. If x/4 of them agreed with statement A, and x/16 agreed with statement B, then stating that "a quarter agreed with statement B" doesn't convey the information. "A quarter of those who agreed with statement A agreed with statement B" probably misstates the groups involved by suggesting that the B group was a strict subset. Saying that "four times fewer people agreed with statement B compared to statement A" conveys the information. That isn't to say that it is the only or the best way of conveying it, but if it is understood, it meets the first requirement.
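The quantity-versus-comparison distinction is easy to see with made-up numbers (mine, purely for illustration, not from the comment):

```python
# Hypothetical survey figures, chosen so the fractions come out evenly.
surveyed = 1600
agreed_a = surveyed // 4    # "a quarter of the people agreed" with statement A
agreed_b = surveyed // 16   # one sixteenth agreed with statement B

# Saying "a quarter agreed with B" would use the wrong baseline:
# agreed_b is a quarter of agreed_a, not a quarter of everyone surveyed.
assert agreed_b == agreed_a // 4
assert agreed_b != surveyed // 4

# The comparison that actually holds, however you choose to phrase it:
assert agreed_a == 4 * agreed_b    # "four times fewer" agreed with B than with A
```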

doublelayer Silver badge

I have a debt of £20 and no assets. That means my net balance is -£20. Oh no, somehow my debt has gone to £180, or nine times as much. That means my net worth is nine times less than it was.

When you have negative numbers, you can have X times less. If you have positive numbers, chances are you can make negative numbers from them that mean something. You can also understand the intrinsic link between multiplication and division, and thus that y=(1/9)x also means x=9y, and saying "one ninth as stringent" is more likely to confuse people.
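Both steps of that argument are plain arithmetic; here is a quick sanity check using the comment's own figures:

```python
# With negative numbers, "nine times less" is perfectly well defined.
net_worth = -20                 # £20 of debt, no assets
new_net_worth = 9 * net_worth   # the debt grows to £180
assert new_net_worth == -180    # nine times less than -£20

# The multiplication/division link: y = (1/9) * x is the same claim as x = 9 * y.
x = 180.0
y = x / 9
assert x == 9 * y
assert abs(y - (1 / 9) * x) < 1e-9   # tolerance only because of floating point
```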

doublelayer Silver badge

Re: Stupid Question Time !!!

Because, if they were allowed to do it and it did cause problems, the other carriers would probably have to prove that it was specifically the Starlink satellites doing it, rather than anything else, against a bunch of lawyers from SpaceX trying to hobble them. In the meantime, Starlink keeps doing it. Then, if they succeed in proving this interference and the FCC suspends their license, SpaceX sues the FCC for doing something wrong, whatever their lawyers can make up, and demands an injunction because they've paid for a bunch of hardware on thousands of satellites which they can't get back because their equipment can't manage -120 but only -110.6, and some judge will grant them the right to keep using it while that case is decided, which will take years. In the meantime, Starlink keeps doing it.

There is a reason why there are experimental licenses, which allow for something like that, and production licenses, which usually have to be granted before you put your equipment into operation. Otherwise, every company would do whatever they wanted until such time as someone figures out the harms it causes. In at least a few areas, radio spectrum being one of them, this is limited to prevent chaos. The FCC had some reason for thinking that -120 was a good limit, and the other carriers appear to think that raising it will cause problems. By all means have some knowledgeable people review that, but saying "I guess you can do whatever you want" whenever someone asks for changes is not a recipe for success.

Still waiting for a Pi 500 and wondering what to do this summer?

doublelayer Silver badge

Re: Point

"What you are saying is that all computers should be based on Intel X86 CPUs and no other CPU or architecture is worth it because you have to compile an OS to run on non-X86?"

Am I saying that? I don't remember saying that. However, I am pointing out that the original critique: "with the x86 ecosystem where you can just use a generic ISO to install the OS instead of needing a custom built ROM for each SOC" is correct, and not only for Windows.

That is no reason why you can't use something else. It is one point in X86's favor, not the only thing to judge a CPU on, and certainly not a reason to ignore every other architecture whatsoever.

Also, it's not about compiling the OS. Someone has to compile the kernel to run on anything, and that's either me or the distro maker, mostly them in practice. The distinction is just with the ease of portability and modification when standard firmware is involved. The solution to this that I'd like to see is not the destruction of ARM and the use of X64 everywhere, but instead standard firmware used on ARM chips. Neither of those things are going to happen, but if standard firmware for ARM happened, I'd be happy, and if destruction of ARM happened, I'd be unhappy. For one thing, I'd probably have to turn in all my Raspberry Pis for that to happen and I don't want to.

doublelayer Silver badge

Re: Point

You weren't actually that subtle making your suggestions that, instead of using an SSD, I run stuff from RAM:

"Run stuff from memory (up to 8GB)"

or use network storage:

"Extra storage comes over the network."

So I didn't have any problems comprehending your suggestions. I just didn't like them for use as a desktop.

Maybe you're accusing me of misunderstanding you because of this part:

"However, your response that nobody needs it kind of sounds like you also think it doesn't work and are trying to make excuses for that."

If that's it, you might note that I said it "sounds like" that, not that it "definitely is". I'm pointing out how someone could interpret your comment when they know the benefits of SSDs. I doubt you have any experience testing whether or not the SSDs work because, from your comment, you don't need them for your uses. However, as a response to someone claiming that they don't work, this isn't very convincing, and neither is being told that, since you don't need one, I don't need one either. So if that's what you had in mind, it seems the misunderstanding was on the other end.

doublelayer Silver badge

Re: More portable

You could build something out of it, but I'm not sure how useful it will be. Unless you're willing to use a really small screen, you'll probably make the package bulkier to add a screen large enough that you're willing to use it, and a small USB keyboard is easily obtained but will make internal cable management tricky. I've built similar things out of Pis before, and they worked, but they were always more ungainly than a normal laptop. The footprint of my box was smaller, but it had to be taller to get everything in and wider if I didn't want to trail cables.

Incidentally, also from experience, the only reason you can accomplish your goal is that you're willing to mains power it. Pis do terribly with batteries and the Pi 5 is the most power hungry of all of them. You can use a USB battery intended for charging phones, which will be relatively cheap and last a while if you get one of the larger and heavier options, but the Pi can't measure its capacity, so if you forget to charge it, expect power to be lost at an inconvenient time. You could instead get one of the battery systems intended for use with a Pi, which will get around that in exchange for costing a ton more than the other option and generally being limited to a few 18650s, which will give you a few minutes of battery life. I've wanted a Pi-powered laptop since the Pi 2 became available, and they've had them for a while, but battery life is always one of the worst features.

doublelayer Silver badge

Re: Point

Nothing needs an SSD; I could boot my laptop off a USB disk and it would still work. However, one of the selling points of the Pi 5 was that you could use an SSD, which is sometimes relevant. One of the cases where it is especially relevant is in a desktop computer, such as the one the article is about. I don't want to hold every application I might want to run in 8 GB of RAM. That wouldn't leave me a lot of RAM for memory-intensive things. Nor do I want to get another computer to provide network storage and run a Pi as a thick client that still has to execute everything. If I'm doing that, I might as well run the computing on the server providing the storage and use an earlier Pi as an actual thin client.

Fortunately for me, I wouldn't have to, because that's one of the main reasons why NVMe was added. I don't know how well it works because I haven't bought a Pi 5 (there's a cap somewhere for how many Raspberry Pis you can have before you have a problem, and I might be near it). I wouldn't be surprised to hear that elsergiovolador's summary of reliability is simplistic. However, your response that nobody needs it kind of sounds like you also think it doesn't work and are trying to make excuses for that.

doublelayer Silver badge

Re: Point

No, that's not how it translates because of two things I think you probably already know:

1. On an X64 processor, you don't need custom images of any operating system, and you usually don't need custom kernels either. On a Raspberry Pi, you definitely need custom images, and sometimes you also use custom kernels. On other ARM boards, you always need custom images, and you often use custom kernels. The more "custom" things you have in a build, the more annoying the process can be when something doesn't work as expected. For instance, in-place upgrades from one standard distribution to another are a pretty common test target and generally work well, whereas doing that with images sometimes breaks or just isn't available for a while, and if your kernel has patches which haven't been upstreamed, you may not get to update the kernel at all unless you can find the sources and try to patch it yourself.

2. If you really do want to run Windows, the Raspberry Pi can do it, probably better now than when I last tried, which was on a Pi 3B. Windows 10 didn't like the single gigabyte of RAM and, while it ran, it wasn't fun to use.

If you insist on misstating what people say, you paint yourself, not as someone who understands the distinction and has a good argument, but someone who has a pointless adherence to a product without the knowledge to explain why. There are reasons to favor the Raspberry Pi, but you haven't stated any of them.

IRS has loads of legacy IT, still has no firm plans to replace it

doublelayer Silver badge

And for those of us who aren't, it's not exactly impossible to learn it. I have not written in Pascal and have written very little in Fortran, but I have written in a ton of other languages, many of which got some concepts from one of the above. I probably won't learn them because demand is lower for them, but if I were a criminal and learning them would let me take lots of money from the IRS, then learning them wouldn't be a barrier.

LLM-driven C-to-Rust. Not just a good idea, a genie eager to escape

doublelayer Silver badge

Re: Why use an LLM?

Because you can't actually write code to convert C to Rust accurately, and the closest you'll get after significant effort is a program that converts most C to valid Rust without fixing any of the memory issues for which Rust is supposed to be better. Getting to that level is already a substantial effort. So because the problem is hard, someone decided to make it someone else's problem. They won't get any better results out of an LLM, in fact they're likely to get much worse results, but maybe they can get someone to give them money to try doing it.

I'm afraid that the person who came up with this idea probably does not understand LLMs or Rust, especially the reasons why Rust is believed to be better. It's only better because it makes it harder for programmers to do one class of wrong things, but without programmers in the loop, it won't help with that either.

Biden tries to cut through fog of confusion caused by deliberately deceptive customer service tricks

doublelayer Silver badge

Re: I like how Apple makes it simple

There are a lot of services that do that, but they don't advertise it. I've had many experiences where I have a thirty-day trial, cancel it on day 27 to be safe, and get the message that it will continue working through day 30. I know people who do that with Amazon services and streaming in general, but I couldn't make a full list and it could change at any time. However, since so many people are doing it, I wonder if there is a requirement somewhere that is making them. Otherwise, I would have expected them all to cancel as soon as they got the button press.

Apple is coming to take 30% cut of new Patreon subs on iOS

doublelayer Silver badge

Re: Make the app free

The app is already free. The app isn't the product, just the way to organize payments to others who don't have an app of their own. Anyone who wants can pay through this by going to their website instead of their app. However, they will not be able to continue to use the app and direct people to any other payment systems, either in-app or on a website. Outside the EU and South Korea, Apple has forbidden doing so. Inside those areas where it is required, Apple has placed so many restrictions on doing so that they're still effectively forbidding it but pretending that they aren't.

doublelayer Silver badge

I'm guessing by the comparison to Rockbox that they wanted features the iPod's music player software couldn't do, such as playing formats other than the two that Apple supports or transferring music not using iTunes. Of course, then I start to wonder why they bought an iPod instead of something else they didn't have to hack. At the time, I'd have preferred an iPod running Rockbox to something generic, but if I didn't know that Rockbox would be supported, I would have bought a cheap one instead rather than deal with the limitations.

doublelayer Silver badge

I think the restaurant shouldn't get to charge me for going to someone else's restaurant just because I have gone there in the past.

This is not about paying more for iPhones. I've done that and willingly to get the benefits of the phone, such as updates that don't stop coming after two years. When Apple does something, I am willing to pay Apple. When they don't, I am not willing to have to pay them more. Trying to tax any financial operation that involves an iPhone, even when they have done nothing at all during that operation, is not the same as charging higher prices for their own products.

Before we put half a million broadband satellites in orbit, anyone want to consider environmental effects?

doublelayer Silver badge

Most of those areas are less populated, which means two things:

1. Geosynchronous satellites' capacity in those areas isn't divided very much, so they provide perfectly fine service, unlike in some more densely populated areas.

2. Starlink is unlikely to serve those areas because they don't have the regulatory approval. For mountainous areas around the Andes, they have obtained some of the licenses they need, but for your "volcanic island in the middle of the Pacific not near any existing fibre links", they're not likely to get that.

For instance, Tonga's internet was taken out a while ago, so maybe people there might like Starlink. Take a guess whether Starlink has bothered to get approval to sell to that small market. They haven't. In addition to probably being beneath their consideration, the other reason for that is that being far from links also means Starlink doesn't work very well. If Tonga's link goes down, the satellites have to send a lot of data over a mesh network until they find the next ground station, which means a lot of transfers because Tonga isn't very close to other things. This is a problem that geosynchronous satellites can solve more easily. You're overstating Starlink's functionality with your own examples.

Software innovation just isn't what it used to be, and Moxie Marlinspike blames Agile

doublelayer Silver badge

Re: Blame........

I'm inclined to agree with you in many areas, but I think your main point still gets it wrong. There is indeed a lot of complaining, including on things that are not worth complaining about. I've pushed back on several of them already. While I haven't talked much about Agile here, and I tend not to like it, it too seems mostly irrelevant to most of the things that we're discussing.

However, this is not something where we can just articulate what the problem is, because the topic in which we'd try to find the central problem is "what is wrong with software in general, worldwide, and why". There are lots of problems in that area and almost none of them apply to everyone. When you have a question as vague as that, answers are less likely to work because most of them won't apply well to all software writing. Those that do will probably be worse, because the only way you can claim to comment on all programmers is to have ridiculously broad stereotypes (e.g. young people are terrible because not one of them learned anything other than JavaScript, old people are terrible because not one of them can write in anything other than VAX assembly). No simple statement of a problem, no matter how accurate, will make it the one central problem.

doublelayer Silver badge

Re: Agreed

Now tell me: when did I recommend doing so?

You don't have to do that to use a framework. There are lots of ways to manage dependencies that don't clone a repo on invocation and just roll with it.

doublelayer Silver badge

Re: He's right about black boxes and over-abstraction though

"Some OO languages (C#, I'm looking at you) prevent individual data items being passed by reference, thus forcing an entire objectful of data to be passed into a method rather than a pointer to the single item which is to receive a new value."

There are several problems with this description. First, it's not C# or any language doing that. It's the writer of the object labeling a member private or protected or not exposing it directly. If it's your library, you can change it quickly without breaking compatibility. If it's not your library, it leads directly to the second problem, which is that this is done on purpose for a reason. It's done because it lets the writer of the library restrict their code to defined behavior. If they don't want you manipulating an internal value without going through the setter, it probably means their setter is doing something or it might in the future, and if it does your direct manipulation might break stuff.

This is the same reason why, even though I can, I don't write to internal data of other parts of a program or dependency. Maybe it works now, but there's no guarantee that it will continue to work when something changes and if it doesn't, the failure is likely to be annoying to debug, unlikely to show up first in testing, and potentially damaging to the user's data or environment. It also makes debugging harder because, if you put something invalid there, the problem will not be detected when you do it. It will only show up when some later operation tries to work with it. Debugging will require tracing back from where it broke to where you inserted the invalid data.
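The value of going through the setter is easy to demonstrate. C# enforces the restriction at compile time, while Python only marks it by convention, but the failure mode of bypassing a setter is the same. This is a toy sketch with made-up names, not anyone's real library:

```python
class Account:
    """Toy class purely for illustration."""

    def __init__(self):
        self._balance = 0      # "private" by convention, like a C# private field
        self.audit_log = []

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        # The setter is doing something: validation and bookkeeping.
        if value < 0:
            raise ValueError("balance cannot go negative")
        self.audit_log.append(value)
        self._balance = value


acct = Account()
acct.balance = 50        # goes through the setter: checked and logged

try:
    acct.balance = -10   # the setter rejects the invalid value immediately
except ValueError:
    pass

acct._balance = -10      # direct manipulation: no check, no log entry, and the
                         # invalid state only surfaces in some later operation
```

The direct write "works" at the moment you do it, which is exactly why the resulting bug is annoying to debug: nothing complains until something else reads the broken value.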

doublelayer Silver badge

Re: Tradeoffs

This probably depends on your definition of framework, but people use frameworks because they want something that is likely to work and they don't want to spend a long time implementing and maintaining it. We do it all the time with operating systems. I could write something without using Linux's system calls. I know how to do it. They taught me how and made me write several when I studied operating systems. If I'm working on something embedded, I will. If I'm not, I won't. I won't because I want my program to run later on, so if the system call gets improved, I will get the benefit. I also don't want to waste time I could be spending getting new things to work redoing something that's been done before, probably better than I would do it, because they've got years of experience finding and fixing bugs in it. If I'm writing another platform, library, framework, or whatever else this is, then of course I'll reimplement, because I'm trying to make something better than they have. If I'm trying to build something that needs such a platform, library, or framework, then I won't rewrite it unless theirs is not good enough.

There are lots of good reasons not to use a dependency, but ascribing laziness to those who do is a ridiculously broad stereotype that does no good to anyone. Marlinspike's comments were aimed, as he says, at those who cannot implement such a thing, not those who don't. A large part of knowing how to do something properly is knowing if it makes sense to do that thing.

How to ingeniously and wirelessly inject malware onto someone's nearby Windows PC via Google's Quick Share

doublelayer Silver badge

Re: Quick Share?

Google started it, but Android still has a place where apps can store their data and you can see and modify it. A lot of apps choose not to use that, and it's not a recent thing. Google enabled this almost from the start, but app developers deserve at least some of the blame when they have an option to avoid it.

doublelayer Silver badge

Re: Quick Share?

Yes, sometimes, but not always. For one thing, there are a lot of Android apps that have worked to kill the filesystem and don't place their files in a place where you can easily find them, or sometimes, a place where you can't retrieve them at all unless you have root access. Some of those might support other forms of sharing, though.

Then you have the problem that connecting a USB cable and enabling media transfer allows the person you're connecting to to read all the files that aren't locked in the previously described way, not just the one you want to send them. You also have a compatibility issue. If you connect an Android phone to a Windows computer with a cable, things will tend to work. If you connect two Android phones together with one cable, then you have to work out how to tell both devices which one is presenting MTP over this cable and which one is receiving, and you can't go in both directions. That will take some time to figure out. I would also point out that Mac OS and Linux don't mount MTP devices (virtually every Android device uses this) as conveniently as Windows does and can't access them at all without installing other software, but that probably isn't relevant because it sounds like Quick Share doesn't run on those either.

We're thirteen years on and XKCD 949 remains a major gap.

Secure Web Gateways are anything but as infosec hounds spot dozens of bypasses

doublelayer Silver badge

"For some of us (a minority), features like USB access are the whole point of computers. I'm willing to give up internet access to have USB access."

I don't think I understand your desires. If that's what you want, why can't you use a computer now? The browser does the internet stuff and all sorts of other things can do USB. The only point of letting the browser do USB is to let internet-based things access USB devices. If you're giving up internet, then that doesn't help you anyway. Also, in my experience, the main reason why a browser-based thing wants access to USB devices is as a profiling tool (keyboard IDs would be a pretty good fingerprint). Malicious pages have even more nasty things they could do with that access. The one use that hasn't been that evil was a serial console for a development board, implemented in Javascript, but I have plenty of native programs that can send serial over USB.

doublelayer Silver badge

Re: The complexity of the modern internet browser

"a. I thought modern browsers ran in a sandbox?"

They don't, but they create sandboxes for the scripts they run. This means a script has to either find a vulnerability to escape its sandbox or get the user to do the work for it, for instance by assembling a malware file on the user's computer, having them download it, then telling them to go open it.

"b. If the SWG can inspect SSL traffic doesn't that mean the SSL traffic isn't really secure."

Yes, in this particular case, which is what is intended. If you specifically allow the traffic to be intercepted by installing something with the rights and capabilities to supply its own certificates, you know about it. This requires configuration on the endpoint. If you control a network link my computer uses but can't put your certificates on my computer, you won't be able to do the same.
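As a rough sketch of how an endpoint can notice this kind of interception (the function names and certificate bytes below are invented for illustration): a gateway that re-signs traffic has to present its own leaf certificate, so a client that has pinned the real server's SHA-256 fingerprint will see a mismatch.

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def looks_intercepted(presented_der: bytes, pinned: str) -> bool:
    """True if the presented certificate doesn't match the pin
    recorded for the real server, e.g. because a gateway re-signed it."""
    return fingerprint(presented_der) != pinned

# In a real client, presented_der would come from
# ssl.SSLSocket.getpeercert(binary_form=True).
```

This is how certificate pinning detects an intercepting proxy even when that proxy's CA has been pushed into the endpoint's trust store.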

US 'laptop farm' man accused of outsourcing his IT jobs to North Korea to fund weapons programs

doublelayer Silver badge

Re: To Fund Weapons Programs

It's not private enterprise. In order to get a network connection to start doing this work, you need approval from the government to have one, often while not even in North Korea. The scale of the operation and repeated statements from workers, managers, and government leaders have demonstrated that this operation is government-controlled. So yes, the money earned from doing it goes into government coffers. They spend it on whatever they want, including agricultural supplies, fancy stuff for the Kims, and weapons manufacture and research programs. Not in that order.

Faulty instructions in Alibaba's T-Head C910 RISC-V CPUs blow away all security

doublelayer Silver badge

Re: Obit

Would it kill you to actually understand what people say? In this case, the comment you replied to did not say that this instruction was fine. From that comment alone, you can determine that they think RISC-V in general is bad and that this vulnerability, while bad, is better than everything else. If you need more clarification, look at the other comments posted by the same person. I don't agree that it's quite that bad, but you've made a valiant effort to attack a point nobody made, one the specific person you're responding to would probably disagree with more than anyone else.

doublelayer Silver badge

Re: "They've also published a website, ghostwriteattack.com"

You seem to have two separate points. Why have a website for a vulnerability? I don't know, so you can find the information faster than finding the website for their institution, which I certainly couldn't have told you was https://www.helmholtz.de/en/about-us/helmholtz-centers/centers-a-z/centre/cispa-helmholtz-center-for-information-security/, then find the information wherever it's posted. Educational websites are often a maze and sometimes break links because there are multiple teams working on the site and none of them speak to each other, let alone the researchers. While this looks to be a research-focused organization rather than a university, I doubt their site management is very different.

Then you complain about the name. What name is it supposed to be? The vulnerability is called GhostWrite because you can write when you're not supposed to without detection, and it is a possible attack. The only other one that seems likely is ghostwritevulnerability.com. What name would you not object to?

doublelayer Silver badge

Re: i want one

The SBCs it's in right now are not as cheap as you describe, but whether your number is correct is not the major issue. Those are easy to find because they describe the SoC used right on their product pages. What we don't know is what else has one of these chips in it. For instance, manufacturers rarely tell you what CPU powers a printer, a television, a security camera, or lots of similar embedded things. Some of those can get code, either as a firmware update or as a third-party addition (for example, a networked television installing an app to stream something), which could exploit this. Now there is a beachhead on your network, and they have root access on it, so they can do things that a sandbox might have prevented.

That's just considering consumer or office stuff you're likely to have, and given the youth of this chip, that's probably the most likely kind of device to find it in. However, infrastructure is also a favorite target of attackers. I'm sure several RISC-V SoCs have been built into industrial equipment that can affect safety if disabled or messed with. Probably not this one, but I don't know that for sure. The kind of people who like turning off water and electricity would quite like being able to elevate privileges or to trigger something that requires a restart. In fact, they can combine their efforts: elevate to root, install something that will execute the crashing instruction as early in the boot process as possible, then execute it once. Kind of like the CrowdStrike bug, but on embedded infrastructure instead of desktops. Ideally, it would be harder to get this to execute on those machines due to other security measures, but that isn't an ideal I want to rely on too much.

Hello? Are you talking on a Cisco SPA300 or SPA500 IP phone? Now's the time to junk 'em

doublelayer Silver badge

Re: Phones still OK on an inside non-routable network?

It depends on your setup. I'm not sure what these models look like and I'm not going to bother looking it up, but a lot of phones like this that I know have two ethernet ports. You plug in the one cable from the wall, then you plug in a passthrough cable for the computer. Now that phone has its own internal address, which you can firewall, and it has access to the user's computer traffic as well. It can send traffic as the user's computer, allowing it onto that network. So if you do that, firewall rules probably won't be enough.
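A toy illustration of why firewalling the phone's own address doesn't close this hole (the subnets and addresses here are made up): a source-address rule drops the phone's traffic, but traffic the compromised phone sends using the PC's address on its passthrough port sails straight through.

```python
import ipaddress

# Hypothetical addressing: desk phones live in a subnet we firewall off.
PHONE_NET = ipaddress.ip_network("10.20.0.0/24")
SERVER_NET = ipaddress.ip_network("10.0.5.0/24")

def allowed(src: str, dst: str) -> bool:
    """Naive rule: drop traffic from the phone subnet to the servers."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return not (s in PHONE_NET and d in SERVER_NET)

# The phone's own address gets dropped, but a compromised phone on a
# passthrough port can send frames using the PC's source address,
# which this rule happily allows through.
```

Defending against that means per-port authentication (e.g. 802.1X) or physical separation, not just address-based rules.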

doublelayer Silver badge

Re: Phones still OK on an inside non-routable network?

I suppose the theoretical attacker who has lots of time on their hands and good skill could make this really nasty by installing C&C systems on all the phones. Then, after something like a ransomware attack where the computers were completely wiped, you think they're gone because all the computers have been reimaged from a clean state, but they can come back from what looked like a dumb desk phone and do so over any part of the network in which those phones are installed. It would still require the same access you described, but that's probably the worst thing they could do with it. The risk may be low, but unfortunately the potential harm in the worst case is substantial.

Study backer: Catastrophic takes on Agile overemphasize new features

doublelayer Silver badge

Well said. Your comment reminds me of a blog post from several years ago that gives some good examples of ways people misunderstand simplicity and its value in product design.

doublelayer Silver badge

Re: Agile would be great if not for all the people

I have to agree here. If there's one thing that the manifesto actually does state clearly, it's that the type of mismanagement described is harmful. It doesn't say how to avoid it, but given how a lot of implementations work, I'm not sure it would matter much if it did since management would ignore it anyway. I've worked in places which were better at this, often by having a more informal team structure as Agile recommends. On that point, Agile has recommended the right thing. It should be obvious but sadly is not.

doublelayer Silver badge

Re: You Keep Using That Word

"And on the subject of 'comprehensive documentation'. That is in the manifesto because software code IS documentation."

No, it isn't. It never has been. It never will be. Code is a set of instructions for a computer, not a set of instructions for a human, and definitely not a set of instructions for a user. A user does not care how the program does something, and they do not want to and should not be expected to have to read through in order to figure out how to use the thing.

I've worked at places where that was tried. One team went as far as to put RTFC in their readme file (all ten lines of it). The result: weeks of wasted time trying to figure out which parts of their library worked and how. Did I mention that one of the people wasting their time figuring this out was on the team that wrote it? Because they did not document, and incidentally did not check, anything properly, their code was impenetrable. To make it worse, their library called out to another library written the same way. This is why you have reference documentation: what this function does, what parameters it can take. Otherwise, you too will find yourself trying things over and over because all the invalid calls return empty sets, but some valid ones have zero results, so they also return empty sets. Now was this one a failure or a success? If only someone had written that down.
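A toy illustration of that ambiguity (the functions and records here are hypothetical, not the library in question): when an invalid call and a genuinely empty result both come back as an empty list, the caller can't tell failure from success, which is exactly what a documented error convention avoids.

```python
def search_undocumented(records, key):
    # Invalid key and "no matches" both come back as [] --
    # the caller cannot tell which one happened.
    if not isinstance(key, str):
        return []
    return [r for r in records if r.get("name") == key]

def search_documented(records, key):
    """Return all records whose 'name' equals key.

    Raises:
        TypeError: if key is not a string (invalid call).
    """
    if not isinstance(key, str):
        raise TypeError("key must be a string")
    return [r for r in records if r.get("name") == key]
```

With the second version, an empty list always means "valid query, zero matches", and the docstring says so without anyone having to read the body.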

And that's in a situation where everyone involved could do that. What Agile proponents often fail to recognize is that most of the people for whom software is being written are not programmers. They cannot read the code. They don't want to either, but even if you made them, they would not be able to do it. Some of them don't have the code in the first place, since not everything is open source.

doublelayer Silver badge

Re: Air traffic control

Any comparison of that nature is so vague that it can't really be used. For instance, ATC that is available for five minutes per ten hours is not going to be very useful even if it is right. What are the chances that it happens to be available when you need it rather than during five minutes of routine flight? It also leaves undefined what "not right" means. In reality, an ATC that is inaccurate probably isn't completely random or guaranteed to give wrong answers but has some chance of error. If that error means a flight makes two turns instead of one because the wrong number was read out, the result is very different than if it is told to taxi into a building the pilot cannot see.

In general, a system needs to be available and right quite frequently or it won't be useful. We then have to consider what harms are caused by unavailability and incorrectness of the type that actually happens to decide how bad those things are. Our goal is of course to have 100% correctness and 100% availability, but our tradeoffs have to be calibrated for the actual outcomes. Often, the incorrectness isn't something we can predict accurately because it is due to irregular events, especially human error, which is why we try to design around frequent causes of error rather than calculating probabilities on something unpredictable and planning with those unreliable results.
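To make that vagueness concrete (all the numbers here are invented): even treating availability and correctness as independent probabilities, the fraction of requests that are both answered and right is just their product, and the product alone still says nothing about how bad each kind of failure is.

```python
def useful_fraction(availability: float, correctness: float) -> float:
    """Fraction of requests both answered and answered correctly,
    assuming (unrealistically) the two are independent."""
    return availability * correctness

# A system up 99.9% of the time but right only 90% of the time,
# versus one up 95% of the time and right 99.9% of the time:
a = useful_fraction(0.999, 0.90)   # 0.8991
b = useful_fraction(0.95, 0.999)   # 0.94905
# The raw product favors the second, but it weighs a wrong ATC
# instruction the same as a dropped one -- which is the real flaw
# in any single-number comparison.
```

The point is that the arithmetic is trivial; deciding what each failure mode actually costs is the hard part that the comparison skips.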

doublelayer Silver badge

Except when you find that users actually do want new features. Some features get developed for a user base so small that you might as well ignore them. However, many customers do use features to decide what to buy and complain if something doesn't have them.

Everyone has a level at which they don't really want many new features. If they were very thorough, they'd name four they would like added (one that could work for other people, three so specific they're only useful to them). Everything else that's added just annoys them. However, it takes a product a while to get to that level, and if it hasn't gotten there already, customers want the new features and will choose what to use based on them. As with everything else, you have to actually determine where your product is in this cycle, or it will not work.

doublelayer Silver badge

Re: Nope.

In most systems, including basically everywhere I've worked, devs don't get to put stuff on the backlog; the backlog is for work that hasn't been assigned. If I took something I had been assigned, didn't finish it, and put the remaining tasks on the backlog to disguise this, not only would I be doing something I'm really not supposed to do, but they'd likely find out quickly and be very unhappy with me. Nobody does that.

Backlogs can be a problem. They fill up with stuff that's never going to get done, and if the project manager or enough devs don't want to do something, it can get shunted into the backlog and nobody will ever see it again. However, it is not the tool for devs who don't intend to do what has already been assigned.

doublelayer Silver badge

There are at least two phases in an engineering project, and the first few steps can be done using something similar to Agile. When researching what works and making initial designs, you can afford to try something that might work and see whether it does or not. Eventually, you have to stop doing that and decide on a good design for the product that's actually going to be sold. Continuing to use the informal methods when making the product will lead to lower quality. Not using a more experimental approach in R&D is likely to prevent the invention of new capabilities which would make the product better.

As with software, you have to know when it is safe or desirable to use a method and when it is not, then run those things separately. This doesn't work well with people who demand a single solution for everything. They demand that either Agile is a great system and can work with anything or Agile is bad at everything and there is a different one that isn't. Neither is true.

Report: Tech misconceptions plague the IT world

doublelayer Silver badge

The reason is that, while you were working with this stuff in the 1970s, a lot of the people your age weren't working with it at all. They don't understand the hardware like you do any more than a younger person who hasn't studied it does. The stereotype that young people should know how computers work isn't new, and I can't say the original reason for it is wrong. A lot of people who grew up without computers use them now, but not very well. As time goes on, this is weakening, but I've helped quite a few older users set up a computer who do not understand how to use it, let alone how it works. They can do a small number of things by rote memorization: to read the email, you click on this mail icon. If the icon moves, they get confused. If they're using someone else's computer without the mail client, they won't know what to do.

They see a young person who knows how to access their webmail on any computer, and they can see that this person can make the computer do something that they, the older user, can't. In fact, the younger person may not understand how to set up a mail client, but their tool lets them access their mail anywhere while the other approach doesn't, so theirs seems better at the time.

They then jump to the incorrect assumption that, because the young person can make the computer do what they wanted, that they must understand how it did so. To bolster this mistaken impression, the younger person has learned some tech-related terms that the older person doesn't understand. In fact, the younger person may be using them wrong, as I'm sure we've all seen from users from time to time ("The server isn't working because the WiFi driver isn't plugged in"), but the less knowledgeable user doesn't know that. I'm sure that these younger people will get old, some new technology that they've lived without for decades will come along, and they'll be looking at their grandchildren who use it naturally and make incorrect assumptions about what they're doing. The grandchildren will in turn make incorrect assumptions about what skills their grandparents must have had just because they don't instinctively know the hand gestures needed to make the AR environment launch their game.

Verisign under fire for increasing .com prices each and every year it can

doublelayer Silver badge

Re: Who operates .US?

History killed it. If people had been able to register .us names the way they could .com when the TLD was created, it might have taken off; .uk worked out that way. But instead, for the first seventeen years, you could only register weird geographic domains that were ridiculously long. Did Apple want to be apple.com or apple.cupertino.ca.us? It took until 2002 to remove that limitation, and there are still no organized third-level domains like .co.uk because co.us is already reserved for Colorado.

doublelayer Silver badge

Re: Verisign believes...

It's also very unlikely. I don't think the people who hold domains intending to flip them are going to mind a $2 increase. There is a level of increase that would actually get rid of them, and getting rid of them might be nice, but the increase required would have so many other negative effects, for instance making buying a domain for a small site prohibitively expensive, that I'd rather have the bots parking what they think are nice domains. Increasing by small percentages does nothing except mean that more money from the domain resellers goes to Verisign, and it also comes from everyone else. Verisign does not deserve that power, and their abuse of it suggests that we might be better with someone else running the registry altogether.

Incidentally, I wonder whether anything has happened with Nominet recently. It has been a while since they had to be restricted from trying a similar thing, but since their CEO was removed and some board members changed, there has been nothing from them, which I hope falls into the "no news is good news" category. If they eventually succeed in getting reasonable leadership back, maybe they could take a turn running .com. It probably wouldn't go that way, because someone in the US would freak out if someone not in the US was named to run it, but I'd be willing to try it. Only if the relative silence indicates they've improved, though.

Michigan probes Musk-backed PAC website that weirdly tried and failed to help register people to vote

doublelayer Silver badge

Re: Not just Michigan

Actually, I'm guessing there was a combination of your not understanding what I was replying to and my not phrasing my comment clearly enough to point that out. Obviously, this particular site was not redirecting people, although it sounds like it was in some other states. However, the comment to which I replied was asking why, in general, there are sites from people other than the government that help people register to vote. There are many such sites. It is not uncommon for American political groups to have a voter registration page. For instance, from a quick search, I can find several organizations external to the federal or state governments whose purpose is either specifically to help people register to vote (example 1, example 2), or which are political in nature and have a voter registration page on their sites (example), or which are political parties (Democratic; the Republican state party website doesn't seem to have one, but maybe it's on another site). The existence of these pages is relevant because it is why the misleading page from this PAC was so dangerous: people may expect, based on the many similar pages, that this one might be trustworthy.

My comment was intended to answer the question of why such pages exist at all. My statement that most of them direct to governmental pages refers to the legitimate ones; the fact that I was speaking in the plural rather than the singular was intended to reflect that. I suppose it was not clear enough. I suggest, however, that you may want to consider the context of a thread before drawing incorrect conclusions about the meaning of a comment.

doublelayer Silver badge

Re: Not just Michigan

I'm not sure it's needed, just that a lot of places that get visitors who might want to vote tend to put up instruction pages. From the description in the article, it sounds like most of those just redirect to the right government site anyway, so they're only driving traffic rather than submitting applications themselves.

Punkt MC02: As private, and pricey, as a Swiss bank account

doublelayer Silver badge

Re: Pricey?

Not exactly, just willing to pay a lot more for a phone than I am. Maybe you actually use some of the things that make that more expensive, but I won't, so I am not willing to spend that much. There are also people who would find that price difficult to fit into their budget, but that's not the only thing the line referred to. After all, you could probably pay twice as much for that phone as you did, but you might not have been willing to.