* Posts by doublelayer

10519 publicly visible posts • joined 22 Feb 2018

Video game actors strike because they fear an attack of the AI clones

doublelayer Silver badge

Re: Do the work, get paid, move on

They don't have to do it that way. They could easily pay actors the same way they pay many other people who do jobs, creative or not. Mine, for instance. I get paid to write some software. They don't pay me each time they run it. We negotiated how much they had to pay to get the software when I was hired. If they do that with an actor, no problem. The reason they don't is usually that they don't have enough money to do that, so they offer to continue paying the actor from money that comes in later instead of paying up front. You could also do software that way: I'll get paid less while writing it, then you give me a portion of the license fees.

Neither has any connection to the AI work. If they buy the software writing from me, they get that piece of software. They don't get other pieces of software I write. They don't even get fixes to that one unless I am still employed. When hiring an actor, you don't get rights to everything about them forever, you get the rights to the performance they just did for you. You want to keep showing that video of them following the script? Great. You want to use their picture in a big machine learning training thing? Get permission to do that or you don't get to.

FYI: Data from deleted GitHub repos may not actually be deleted

doublelayer Silver badge

Re: "this is expected and documented behavior inherent to how fork networks work"

I don't know how many people this applies to, but there are many who don't know much about Git and use it anyway. When someone first picks up the tool, it looks pretty easy. You add a file, you commit the changes, you push. The code goes up. When someone else has made changes, you pull. Great, I understand Git.

They think that right up until their first merge conflict. Oh, I can't just push while someone else might be doing the same thing. So they learn branches. I push to my branch, you push to your branch, then we merge them. Great, I understand Git.

They think that until they need to get code back. How do I find the code after someone's merged over it? None of my commands do that. So they learn some other ones that work with the history, and they learn some blunt tools for returning to the head. Great, I understand Git.

They think that until their first merge conflict between branches. Okay, it's time to learn rebase, and rebase isn't a simple command. So they read about it and do some experiments, and now they know they don't understand Git, at least not fully.

I can't say I do either. I have a relatively good understanding of some of the internals. I know enough to know that you can't simply delete a commit and expect it to become invisible. I can describe some of the internal structures accurately, and I can sound confident when I do it as long as people only ask about the ones I've actually looked at. But since I have not written code inside Git, nor have I memorized every manual page in it, I do not know everything there is to know about Git. Nor am I the least knowledgeable person on my team. We know enough about Git that we get what we need and don't break things. That doesn't make us experts. And we're professional users. There are lots of beginners who know less because they've used it less.
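One internal detail that makes the "deleted commits aren't invisible" point concrete is that Git's storage is content-addressed: an object's name is a hash of its content, so any clone or fork that holds the same bytes holds them under the same ID. A minimal sketch of how Git names a blob (this mirrors what `git hash-object` does, SHA-1 over a `blob <size>\0` header plus the content):

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the object ID Git assigns to a blob: SHA-1 over a
    'blob <size>\\0' header followed by the raw content."""
    header = b"blob " + str(len(content)).encode() + b"\0"
    return hashlib.sha1(header + content).hexdigest()

# The same bytes always produce the same ID in every clone and fork,
# which is why an object can outlive the repository it was pushed to.
print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

Deleting a repository doesn't change the bytes, so it doesn't change the IDs under which other members of a fork network can still reach them.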

doublelayer Silver badge

Re: Yes and no

Yes, rotating the key is equivalent to changing the password. You can even leave the old one public. Anyone on the planet can know that "a83dc027b9a62170" used to unlock something, but it doesn't now. The leaked data is worthless as long as you make sure the credential is no longer usable.

If you can't or don't choose to do that, then deleting the repo looks like a second-best solution. The problem is that it's far too weak, and people who think it's good enough are skipping a necessary security step. Once the credential has been pushed publicly even once, there is a chance that someone has seen it, and you can neither reliably detect whether they have nor prevent them from having done so. At that point, you have a risk. Deleting a repo does not eliminate that risk. That credential is compromised and absolutely must be revoked as soon as possible. One of the examples from the article was from a user who wanted to keep using the credential after it had been posted, and that is not a good idea.

Oracle's Java pricing brews bitter taste, subscribers spill over to OpenJDK

doublelayer Silver badge

Re: Java Business

"SUN could easily have transformed herself into a Software Business IF they had demanded, from 2003 onwards, moderate license fees for the Java Compiler, the VM, the JRE and so on. Think of 30 Euros per core per year."

Or they could have found that people didn't want to pay and were getting rather tired of being charged. So developers stop writing their software in Java, and their customers stop buying JRE licenses. Now Sun has lost some JDK and JRE licenses. This, in turn, means that the businesses get less value from the licenses they retain, and they start migrating away from Java-based tools. That could happen as well.

It should be rather obvious that it's a possibility if we consider other prices. Had it been €30k per core per year, that would have happened almost immediately. Lower the price and the number of customers you lose decreases, while the time between the price change and the loss lengthens. You assume that €30/core/year would be low enough that people wouldn't switch, but Sun's assumptions could easily have been different, based on better information about who they would be asking to pay and how much benefit those customers were already receiving. Pricing is hard, and it can be far too easy to assume that what you would be willing to pay will work for the rest of the market. Oracle is trying this today, and there's no guarantee it will work for them either.

Sun also had to work around licensing issues, meaning they couldn't just apply a unilateral change in terms to every customer overnight. They were stuck with the terms they had offered earlier, and they had offered those terms because that was what got people to use the language. Otherwise, a SaaS business that runs lots of servers, and would therefore itself bear the license cost you describe, would have seen Java as too risky a solution and wouldn't have used it in the first place. That means fewer developers being paid to learn Sun's language and fewer libraries being made available, which means fewer people choosing to start by learning Java, which means fewer people choosing it in new builds and more people choosing competitors, and all of that means fewer customers for any commercial products Sun did want people to buy. They chose to provide free access, not because they were convinced by an ideological point, but to quickly build up a customer base.

doublelayer Silver badge

Re: FALSE

While I can't claim to have counts, I know several pieces of Java code that have been maintenance problems. I'm thinking of a set of three programs from the same source, written in Java without any other languages or dependencies, which ran on Windows, but not Mac OS or Linux. I can blame the developers for that oversight. What I find it harder to blame them for is that you still need Windows 7 or earlier to run them. For some reason, Windows 10 and 11 won't allow them to run. Newer JREs refuse to run them at all. JREs from the same era try to run them, but then crash without usable error messages. These aren't programs from 1998 that happen to still run under Windows 7. They were originally released in 2012 and had updates until 2017.

I don't have the source for these, so patching them is not an option for me. The original authors might have the source and are still in business, but I've given up on them doing anything. Occasionally the idea has come along to reimplement them. If that idea is executed, Java isn't the tool that comes to mind for doing it. I don't think the language is entirely to blame for this course of events, but neither can I pretend that Java is great from a backwards-compatibility perspective.

doublelayer Silver badge

Re: Not Really

They had the choice to demand payment for commercial use. If they had done that, they would have gotten some short-term cash, but they might have lost users who didn't want to deal with annoying license audits so they used something else. That limits the ability of Sun to sell other services to those people. As with every other company that has ever existed, they have to decide between these alternatives.

Since this seems to be a big issue in your comments, there's a word for this: capitalism. The company chooses its prices in order to compete with alternatives. Oracle is being no less capitalistic than Sun was. Sun made a decision to get lots of people using Java. Oracle's making a decision to get more money out of those people using Java, taking the risk that people will either use the free alternatives or a different language.

doublelayer Silver badge

Re: Guess What ?

You might also want to stop misstating your facts. Sun had a serious decline. That's a point in your favor. Why make your argument weaker by stating incorrectly that they went bankrupt?

Of course, your claim also suggests that Sun's problems were due primarily to Java, rather than to any of their other products that didn't go well, and those are something you haven't bothered to address in your comments.

Musk deflects sluggish Tesla car sales with Optimus optimism

doublelayer Silver badge

Re: Optimus optimism

I could think of things that I would do with one if I had it, but those things are not going to be important enough to justify anything near what the most basic of robots might cost. If I had one, I would likely consider automating some tasks that don't interest me, but it would take a lot of hours freed from such chores to equal the price of the machine and the, admittedly more enjoyable, hours programming it to do those things.

There's also a market problem. If I earn a normal amount of money, which I do, then a robot is likely to be out of my price range in any case. If I'm a very wealthy person, I can try hiring someone to do things that I want done and they're likely to make fewer and less damaging mistakes when starting. I'm not sure the first generations of humanoid robots will have enough demand to sell except to people who like having the latest expensive gadgets, and I don't think that's a big enough market to fund development of the second generation.

doublelayer Silver badge

I can answer that one. You absolutely need an LLM running in a car when the company that makes the LLM will supply it for $5 billion in up-front "investment", will charge per use, and you own that company. Free money is wonderful. You just add it onto the price of each car, and when people don't buy as many of them, you make some other promise of what wonderful feature they'll have next and wait for people to bid up the stock again. This won't work in the long term, but you'll have extracted plenty of actual money in the meantime.

CrowdStrike blames a test software bug for that giant global mess it made

doublelayer Silver badge

Re: Secure boot?

That doesn't necessarily preclude signing. Step 1 makes a file. Somehow, the output was corrupted during or after step 1. The file is then passed to step 2, which signs it. The result is a file that is properly signed, verifies just fine, and once the signed content is extracted, it's still invalid. Steps 1 and 2 don't even have to be separate processes, though they probably are.

doublelayer Silver badge

Re: It worked on my machine!

"If this were agile, it would be easy to rollback to the previous version."

And from their side, it was. It didn't take them very long to release the version that patched this. The only trouble was that the buggy version damaged things so badly that the users couldn't revert as quickly as the release process could.

doublelayer Silver badge

Re: It worked on my machine!

It really doesn't matter. They should test their releases, no matter what those releases contain, before going public with them. Whether that is code, or configuration, or some other category of data doesn't really matter. If the behavior of the system has changed slightly, testing is required.

Yes, depending on our definitions, we can argue that it's not code. After all, if someone's counting lines of code from me, they usually wouldn't count the lines of the JSON file I've just written. But when my program will do something different when the JSON file is different, it can have the same damaging effects as if I changed what we typically call code, and it therefore needs the same kind of testing.
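A tiny illustration, with a made-up config file: the code is untouched, but changing one JSON value changes what the program does, which is why the JSON deserves the same testing as the code.

```python
import json

def process(values, config_text):
    """Behavior is driven entirely by the config, not by code changes."""
    config = json.loads(config_text)
    threshold = config["threshold"]
    return [v for v in values if v >= threshold]

data = [1, 5, 10, 50]
print(process(data, '{"threshold": 0}'))   # [1, 5, 10, 50]: everything passes
print(process(data, '{"threshold": 20}'))  # [50]: same code, very different result
```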

doublelayer Silver badge

I'm guessing there was a difference between the version that was tested and the version that got released. That could happen in a lot of ways. Maybe two changes were merged into this file and building them together makes the bad file. Maybe it had to do with some additional content in the production build which isn't present in the debug build. There are plenty more.

I've seen the latter example from time to time. For instance, a task where two people wrote code. First, my coworker wrote one unit, then sent it to me. I wrote the second unit. In my testing, these units worked just fine together. Correct results, no crashes, positive and negative results handled as expected. Build it for production and the automated tests freak out instantly. The reason: my debug build was writing more to the log file in case anything went wrong. That slowed things down slightly, which was enough to prevent the race condition in the two processes from going wrong. Take out the logging and the processes might have a concurrency problem and fail. But it worked just fine on my machine. Probably it would have failed eventually if I ran it with the extra logging enough times, but it didn't in the maybe thirty runs I actually did.
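The classic lost-update shape of that kind of race can be shown deterministically if you force the unlucky interleaving, which timing differences like extra logging usually hide. This is a contrived sketch, not our actual code: events stand in for the scheduler pausing one thread mid-update.

```python
import threading

counter = 0
a_has_read = threading.Event()
b_has_written = threading.Event()

def writer_a():
    global counter
    local = counter          # read the current value (0)
    a_has_read.set()         # let the other thread read the same value
    b_has_written.wait()     # simulate being descheduled mid-update
    counter = local + 1      # write back a stale value, losing b's update

def writer_b():
    global counter
    a_has_read.wait()
    local = counter          # also reads 0
    counter = local + 1
    b_has_written.set()

a = threading.Thread(target=writer_a)
b = threading.Thread(target=writer_b)
a.start(); b.start(); a.join(); b.join()
print(counter)  # 1, not 2: one increment was lost
```

Slow one side down, even accidentally with a log write, and the window closes almost every run, which is exactly why a debug build can pass thirty times while the production build fails immediately.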

Sam Altman's basic income experiment finds that money can indeed buy happiness

doublelayer Silver badge

That is calculable. Of course, depending on how you implement it, different numbers and methods would have to be used, but it isn't difficult to get approximate numbers.

For instance, this experiment took place in the US. The current US population is 335 million. We'll also use the same payment amount as this study, $1k per month or $12k per year. This makes a total annual payment of $4.02 trillion. You've asked to have this paid by the top 0.1%, which would be 335,000 people. This makes for an annual payment of $12 million from each of them if it were divided equally, which it probably wouldn't be. The wealth required to get you into the top 0.1% is $62 million (source). You would need a wealth tax on the order of 20% annually to get that much, and if you did it, it might work for a decade, depending on the return on investment we assume for the funds they retain.
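The arithmetic, for anyone who wants to check it:

```python
population = 335_000_000          # approximate US population
annual_payment = 12_000           # $1k per month
total = population * annual_payment
print(total)                      # 4,020,000,000,000: about $4.02 trillion

top_payers = 335_000              # 0.1% of the population
per_person = total / top_payers
print(per_person)                 # 12,000,000.0: $12M from each, if split evenly

entry_wealth = 62_000_000         # rough wealth floor for the top 0.1%
print(per_person / entry_wealth)  # ~0.194: an annual wealth tax near 20%
```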

If you try to implement that, expect several consequences, including lots of wealthy people trying to prevent you from doing it. You should also expect that people will search for or create loopholes to get out of it, because you can hire a lot of lawyers and accountants for less than 20% of a top-0.1% fortune. But also remember that it is not going to work forever even if you get exactly what you asked for. You will need a plan for what you will do after that. Lowering the wealth threshold is the most likely solution, but it too won't last very long.

doublelayer Silver badge

Re: Nice to see these tech types...

This is definitely true, although some of those jobs didn't disappear but went to other people. Many definitely did disappear, however. But we didn't decrease the amount of labor needed altogether; if the people who lost their jobs were doing no job at all, there would be a shortage of labor. This means that we have not reduced the need for labor, but we have changed what labor we need, and depending on what jobs are available in your area, we may have made the available types of work much worse for the people doing them.

This is why I think focusing on the futuristic theory of labor elimination is the wrong approach. Right now, we have people who have or will lose the jobs they have done for a long time. We need to figure out what is right for them right now, not what would be right for them in a theorized world of complete automation, because neither of us lives in that world. If we promise them things that would make sense in that scenario, we will give them false hope. If we require things of them that would make sense in that scenario, we may unfairly burden them for not living in an unattainable future. By considering a speculative future rather than the reality we're in now, we're making things worse for the people who have lost their jobs today.

doublelayer Silver badge

Re: Nice to see these tech types...

The reason they're expensive is the resources it takes to make them: not just the metal, motors, and chips, but the labor to manufacture them, the work to design and program them, and the expense of maintaining the stocks, workers, and expertise needed to repair them. Because they often need to be customized for each task, a lot of that isn't done at scale, making it more expensive (yes, in money, but also in the sense of how many engineers you hire to do the work or how many different sets of plans you maintain).

doublelayer Silver badge

Re: we have mass underemployment now

In what sense? There are lots of ways to quantify what ideal employment numbers might be, and there are lots of ways to calculate unemployment. I assume that the "flaw" you're referring to is that the denominator is often the number of people in work or actively looking for work, so it doesn't count those who are not trying to get jobs. Maybe you're instead thinking that the flaw is in data collection which can fail to identify some types of people because they don't appear in employment records. However, neither of those flaws is very relevant to the discussion of whether technology has reached a point where we could continue our current lifestyle with significantly lower amounts of labor or when and what level of technology would be needed to get us there. Even that would probably need a better definition of "significant", which I defined at random as 5% of humans of working age needing to work, but you could easily make an argument that this is too strict a threshold.
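To make the denominator point concrete, a sketch with invented numbers, purely to show how the choice of denominator moves the rate:

```python
# Hypothetical figures; only the structure of the calculation matters.
employed = 160_000_000
unemployed_seeking = 7_000_000
not_seeking = 40_000_000      # working-age people not looking for work

# The usual headline rate counts only the labor force in the denominator.
labour_force = employed + unemployed_seeking
headline_rate = unemployed_seeking / labour_force
print(round(headline_rate, 3))   # 0.042

# Count everyone of working age instead and the picture shifts sharply.
broader_rate = (unemployed_seeking + not_seeking) / (labour_force + not_seeking)
print(round(broader_rate, 3))    # 0.227: same people, different denominator
```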

doublelayer Silver badge

When you say "I 'invented' UBI back in the 90s", what do you mean by that? Because you definitely didn't invent the concept. People have been discussing, recommending, and in some cases implementing something like that for centuries. While I'm not sure about the specific term because it's hard to get a search engine to find the first use of it, I also have references to "basic income" and "universal income" from the 1960s, so I don't think you invented the term either.

doublelayer Silver badge

Re: How would this affect the wider economy?

That's an optimistic picture you have painted, but you're leaving out a lot of things which will probably block you from getting there.

For example, what are the jobs that so many people want to do that they will pay nothing and people will be happy? Remember that many of the jobs that lots of people want now are jobs that pay lots of money; many people don't want to do what a CFO does but plenty of them would be willing to try for the CFO's paycheck. There are some other jobs which are popular enough that you could pay little and still fill them, but those are very limited in supply (often meaning that the people selected end up being paid well anyway).

Now let's consider the jobs that nobody wants. Take, for instance, agricultural work. There are lots of unpleasant jobs there, so the wages for those jobs would rise significantly. As you said yourself, that means the prices for the products would rise, and in this case the product is food, which is one of the things the income is supposed to cover. That means the UBI level will have to go up, not down.

The combination means that staples are the most likely to increase in price, and I'm not sure whether anything would decrease, but if something did, it would probably be a luxury good or service. If you don't plan for handling this, the program might fail quickly, which would probably be worse for any future implementation than if it had never been tried.

doublelayer Silver badge

Re: Nice to see these tech types...

I don't think mass unemployment is going to happen as quickly as the predictions have made out. If we get tech developed even more, it could eventually happen, but we have many tasks that have proven difficult to automate. General-purpose robots that can serve as drop-in replacements for humans basically aren't available, and more customized ones that do one job are quite expensive, so while they're common in things like manufacturing, they're less common when tasks vary too much or are intermittent.

Theorizing about what we should do in that circumstance isn't bad, but it may be premature in the same way that theorizing about how to run a society across solar systems is. It may not happen for many generations, meaning that when our descendants need to answer the questions, our answers won't be very useful to them. Meanwhile, there are lots of intermediate stages which are going to happen during our lifetimes, and if we've spent our effort thinking about the far-out future, we may have planned insufficiently for those.

Security biz KnowBe4 hired fake North Korean techie, who got straight to work ... on evil

doublelayer Silver badge

Re: Where can I get more of that scam?

They could modify the picture on the identification documents instead, but if they did that, the picture would no longer match any other pictures of the victim that might be found online. I wouldn't search out pictures of people to check them against the documents, but if someone did, that might make it difficult to get away with an ID that's just had someone else's picture swapped in. Meanwhile, if they have good enough software that they can appear in a video as the person whose real face is on the ID, then that might be less likely to be caught by the employer before someone is hired. This is especially true for differences in age, because if I change out the picture on an identification document for someone aged 23 and they claim to have ten years professional experience and a birth date in line with that, it might be more obvious that they're not who they say they are.

doublelayer Silver badge

Re: Where can I get more of that scam?

Maybe the identity they used didn't look much like them, but they didn't have an unlimited supply of fake identities that all look like Koreans of the right age (I think this is mostly young males as North Korea hasn't prioritized computer skills until the last fifteen years and at least one of the technology-focused schools is male only). If they're using stolen US identity documents, they may have to take steps to appear to be the person pictured in them.

doublelayer Silver badge

Re: Real location

There are two really easy ways to get around that.

1. The laptop was delivered to an address in the US. The person at that address has been told to get the package and send it to Hong Kong. The person in Hong Kong has been told to send it to Shenyang. Someone in Shenyang gets it and brings it to wherever they want it.

2. The laptop was sent to an address in the US. There, it was set up with a local internet connection and the IP sent to China or North Korea, where someone set up a remote connection to it.

Philadelphia tree trimmers fail to nip FTC noncompete ban in the bud

doublelayer Silver badge

Re: Josh Robbins of libertarian law group

I've found that "libertarian" usually translates to "those regulations I like and no others". So they'll probably argue here that they don't want people to lose the right to choose whether to accept a non-compete contract, a choice that should come with a compensatory increase in wages. The argument doesn't make a lot of sense in context, but if you find someone and get them to answer, I can pretty much guarantee that's what you'll hear. If you find someone else who says they're a libertarian, however, there's a good chance they'll completely disagree with these guys on what is fair and what should be done about it. That's why I don't call myself a libertarian; there are too many people using the term with whom I disagree, and using it would only confuse everything.

Forget security – Google's reCAPTCHA v2 is exploiting users for profit

doublelayer Silver badge

Re: tracking cookies?

I think some version of it, which might not be the same one they're using now, would use cookies set by other Google products as a way to bypass the check. If you identified yourself to Google and they could track you onto the page, then you're allowed through. Otherwise, do a test.

CrowdStrike fiasco highlights growing Sino-Russian tech independence

doublelayer Silver badge

Re: @Doctor Syntax - If Russia gets away with destroying Ukraine :o

Hmm. What countries might be on the borders of Russia and recently joined NATO? Starts with F. What countries aren't on the border but would be if Russia takes Ukraine? Starts with R. What country has moved closer to NATO membership and is in the latter position? Starts with M.

Maybe, if you went with countries that joined in 2023 rather than ones that joined in 2009, you might have already figured this out.

How did a CrowdStrike file crash millions of Windows computers? We take a closer look at the code

doublelayer Silver badge

Yes, they could have implemented a two-stage process where they still have a kernel-level program and it provides data out to something else. There might have been an efficiency drop in doing that, but it would probably be acceptable. The critical point, however, is that this change, while it might have prevented this problem, still involves their code running at kernel level which, if it broke, would break the kernel. The attempts to blame Microsoft often take the form of explaining that CrowdStrike shouldn't have run anything at kernel level at all, which would not work, and then finding a reason why it's Microsoft's fault that they could, which it isn't.

doublelayer Silver badge

Re: RE: examples that have done better

You mean that one mentioned in the article. It might have read something like this:

"The way that it works is that drivers can set a flag called boot-start," he said.

"So normally, if you've got a driver that's acting kind of buggy and causes a failure like this, Windows can auto resume by simply not loading the driver the next time. But if it is set as boot-start, which is supposed to be reserved for critical drivers, like one for your hard drive, Windows will not eliminate that from the startup sequence and will continue to fail over and over and over and over again, which is what we saw with the CrowdStrike failure."

So they have that by default, and it would have done exactly what you describe except that a flag was set specifically to bypass that safety feature. As it says, there's a good reason to allow something to set itself that way, in case this is required for the system to boot correctly anyway.
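A sketch of the mechanism as the quote describes it, with invented names (the `boot_start` flag mirrors the quote; none of this is Windows' actual loader logic): a driver that crashes gets dropped from the next boot attempt unless it is marked boot-start, in which case the loader keeps trying it.

```python
class Driver:
    def __init__(self, name, boot_start, crashes=False):
        self.name = name
        self.boot_start = boot_start
        self.crashes = crashes

def attempt_boot(drivers):
    """Try to load every driver; return the first one that crashes, if any."""
    for d in drivers:
        if d.crashes:
            return d
    return None

def boot_with_recovery(drivers, max_attempts=3):
    for _ in range(max_attempts):
        failed = attempt_boot(drivers)
        if failed is None:
            return "booted"
        if not failed.boot_start:
            # Ordinary drivers are skipped on the next attempt.
            drivers = [d for d in drivers if d is not failed]
        # Boot-start drivers are retried as-is, so the loop repeats.
    return "boot loop"

ordinary = [Driver("disk", True), Driver("addon", False, crashes=True)]
print(boot_with_recovery(ordinary))   # "booted": the buggy driver is dropped

critical = [Driver("disk", True), Driver("sensor", True, crashes=True)]
print(boot_with_recovery(critical))   # "boot loop": flagged boot-start, never dropped
```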

doublelayer Silver badge

Several things in your comment are wrong or misleading:

"The issue here is why a vulnerability tool has to go in the kernel. Something like that should only be running in user space: no ifs or buts."

It goes in the kernel so that it has more visibility and control over what happens. There are some things that can't be done from user space at all, for perfectly good security reasons, and others which can't be done efficiently from there.

Next, the claim that Microsoft is to blame for putting it in. They didn't. CrowdStrike is not a Microsoft product or dependency; people install it. Just as, if I write a kernel module, I don't ask for or get Linus's sign-off before running it. People are able to install things at kernel level, and they make the choice whether to do so or not. It is not Microsoft's decision to permit it, and if it were, we would be rightly complaining about the level of authority they claim to have to make that choice for us. They should not and do not deny people the right to do something potentially damaging with their own computers.

doublelayer Silver badge

This article explains, if you didn't already know, why Windows has to go down when code which is running as part of the kernel breaks this badly. Guess what would happen if a kernel module I loaded into Linux, Mac OS, or any other operating system had a memory violation. That's right, it would panic. It is required to panic. If it did not panic, that kernel has a serious reliability problem.

Until people understand that, the attempts to find a reason why Microsoft is to blame here will not work. Maybe you or someone else can actually find a thing that Microsoft should be doing differently related to this, but while people continue to post comments trying to blame it for doing something both standard and necessary, you will fail to make any case because it appears that you have a gap in important systems knowledge.

Cellebrite got into Trump shooter's Samsung device in just 40 minutes

doublelayer Silver badge

Re: You know...

I don't agree. Whenever I've seen someone try that, they take a relatively basic approach, one which I don't think gets anywhere. Basically, they follow this plan:

1. Read something the killer wrote. If it's a manifesto, that. If it's not a manifesto, something they posted to social media. If they didn't post on social media, a message sent to someone picked at random.

2. Decide on some opinion that they seem to hold strongly. If this is an opinion you dislike, go to 3. If not, go to 4.

3. Breathless announcement: people who think [opinion] are killers. We should do something about that kind of person.

4. Is there another opinion, one you dislike this time? If so, go to 3. Otherwise, go to 5.

5. Wait for next killer.

Opinions that you can actually make that case about are pretty obvious, because such things often take the form of "I dislike [x] and would like to kill people who, in my mind at least, represent [x]". You don't need much to figure out that a person who says that is potentially murderous. Even then, you have a lot of people who may say that and never actually do anything. If you get any broader, your correlations will be worthless and lead to harmful stereotypes, for instance "The guy who killed people was a soldier, it is not the first time a soldier was responsible for a mass killing of innocent people, that means soldiers are killers". Simplistic to the point of inaccuracy and not something you can do anything about.

doublelayer Silver badge

Re: You know...

I'm sometimes curious to understand the motivations of crazy people to commit murders, but let's call that what it is, curiosity. If we knew that, what would be different at all? Nothing. The victims would be no less dead. Future victims of other people would be no more safe. Maybe, if this was an organized event with other participants, some of them might have been tracked down, but that's not really what you were talking about. Whatever the logic was, we already know that it wouldn't actually make sense to anybody, and unless someone has a stereotype they want to uphold, it would not apply in the same way to anybody else.

That makes it hard to argue for the release of manifestos or the like from murderers when we have them, and it makes it really hard to justify going to the effort to try to break in in the thought that one might exist or be reconstituted from other data. Basically, calls to do so sound to me like "Let's go to significant effort and expense to guess the content of something that might not exist and wouldn't be useful even if we got it".

HCL's back-to-office plan: Come in three days a week, or forget about holidays

doublelayer Silver badge

Why this method

This seems like a really bad method to get people into the office, even if we assume that we want the goal. For a moment, skip whether getting people into the office is a good thing and just assume that you're in management and for some good reason, you want the people there and you're going to get it. The workers probably don't have a contract that states they can work remotely for as long as they want, meaning that management can, by fiat, just tell them that they have to be in the office. This might not apply to certain groups, for instance if they hired remotely during the pandemic and hired some people in a place where they don't have offices, but anyone who was in an office before the pandemic or could get to one now probably would be subject to such an order. Anyone who refuses can be penalized for not doing what they were told to, up to and including firings. If the company doesn't want to fire them, they can also use a number of smaller sticks against them. Why, when all those levers are available, would they pursue something complicated and potentially illegal with the leave policy?

CrowdStrike's Falcon Sensor also linked to Linux kernel panics and crashes

doublelayer Silver badge

Re: The problem is operational

Often, it is considered the OS's job to execute the software provided, and if you've chosen to let that software run at kernel level because you want it to have access to everything, that means it can mess up the kernel. An operating system that lets you install software at that level cannot also guarantee that errors committed at that level have no deleterious effects.

So we move on to your next suggestion, which is more plausible, of automatic recovery. That one can work. Have a versioned filesystem, and whenever you have a kernel panic, rewind to an older version and boot that. Of course, if the panic happened because some hardware failure triggered a kernel bug, then you'll end up rewinding yourself to the earliest version available as it panics every time, and it might provide a method for an attacker to remove recent updates in order to reactivate a vulnerability, but in principle the idea would work and those additional dangers could be mitigated by other protections. We would have to figure out what those protections should be and design them, but your second suggestion is possible.
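To make the rewind idea concrete, here's a minimal Python sketch of the kind of policy such a recovery mechanism could follow. The snapshot names, the failure budget, and the function itself are entirely hypothetical, illustrating the logic rather than any real bootloader's behaviour:

```python
# Hypothetical sketch of a boot-time rollback policy: keep filesystem
# snapshots, and after repeated kernel panics fall back to an older one.
# All names here are illustrative, not a real bootloader or filesystem API.

MAX_FAILURES = 3  # panics tolerated on a snapshot before rewinding past it

def choose_snapshot(snapshots, failure_counts):
    """Pick the newest snapshot that has not exhausted its failure budget.

    snapshots: list of snapshot IDs, newest first.
    failure_counts: dict mapping snapshot ID -> panics seen booting it.
    Returns the snapshot to boot, or None if every one has failed.
    """
    for snap in snapshots:
        if failure_counts.get(snap, 0) < MAX_FAILURES:
            return snap
    return None

snaps = ["2024-07-20", "2024-07-13", "2024-07-06"]
fails = {"2024-07-20": 3, "2024-07-13": 1}
print(choose_snapshot(snaps, fails))  # skips the newest, boots 2024-07-13
```

Note how a hardware fault that panics every kernel version eventually exhausts the whole list and returns None, which is the dead end described above and why additional protections would still need to be designed around it.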

doublelayer Silver badge

Re: And this, ladies & gentlemen, is how you DDoS the entire world.

And all you have to do is get it to run at kernel-level permissions. If you already have the kind of access needed to install a file like this, you don't need it to break the computer. With that access, you could achieve a similar, if not more severe, effect just by deleting files at random until you are no longer able to delete files. That computer is not booting without a reinstall, and no amount of booting to recovery and deleting a file will fix it. The benefit to the hacking community, any section of it, is zero.

CrowdStrike shares sink as global IT outage savages systems worldwide

doublelayer Silver badge

Re: The fault's with Microsoft

Basically, no. If I put in a program which works at kernel level, configure that program to start early in the boot process, and then do something in that process which takes down the kernel, having a Linux kernel instead of an NT kernel won't prevent that from crashing the system nor from making the recovery process annoying. There are some differences meaning that I might not have to run at kernel level for the same purposes, and then maybe my mistake will happen at a higher level and the boot will complete, but there is no guarantee that this will happen. Linux gives the user the ability to run software with very elevated permissions, enough to cause serious faults if that software is badly written.

doublelayer Silver badge

Re: The fault's with Microsoft

Yes, like that. If my laptop was like a non-smart phone, as in it can run the three programs that the manufacturer came up with, with the small subset of supported protocols that they chose to put in, and if I needed anything else at all I had to buy new hardware to get it, it would be a pretty bad laptop.

doublelayer Silver badge

That if there are repeated instances of this assumption that turn out to be wrong, then the assumption is probably bad and people are sticking with it out of habit and getting themselves into error? Isn't that what you do with assumptions which are repeatedly wrong?

doublelayer Silver badge

"If it were an attack, it would be CrowdStrike's data that leaked, they would be the ones suffering the continued problems."

Supply chain attacks don't work like that. If it had been one, and it wasn't, then customer data would be at risk.

"Was this due to an attack on CrowdStrike or "merely" their incompetence? Who gives a damn?!"

Me. If the data I'm responsible for has been copied to an attacker's systems, I need to start dealing with it, and I need to start doing that right now. If it hasn't, then someone else needs to clean up the systems, and I would likely pitch in to help. Depending on whether it's an attack or a malfunction, my next steps are different, the situation for the users and customers is different, the likelihood of substantial damage to my employer is different, so I care. If you work in any area related to this, you should care too.

doublelayer Silver badge

No, you would not be correct. Read again. It's not Defender. It wasn't pulled through Microsoft. The central fact, and one that's usually in the second paragraph of most stories, is that if CrowdStrike wasn't installed, you didn't have a problem.

I'm not sure if this is another attempt to find a reason why this is actually Microsoft's fault or not, but you have critical facts missing from your model.

doublelayer Silver badge

Sure, apart from the active attacker having copies of the data and continuing to do even more damage. Not all bad incidents are the same, and this is different from a cyberattack in several ways. That doesn't make it good, but it's akin to saying that a car crash is exactly the same thing as falling down the stairs, because the injuries you received are basically the same.

FTC grabs controller as Microsoft jacks up Game Pass price by 81%

doublelayer Silver badge

Re: I’m shocked.

The US regulators didn't "cave in", they tried to block it repeatedly and a judge wouldn't let them block it indefinitely. That's why they're still appealing it. The EU agreed to the merger after getting some promises from Microsoft, promises that haven't yet been broken, but the US and UK regulators hung on for longer. While the UK eventually agreed to the merger, the US's regulator has never approved it and is still trying to retroactively disassemble the two.

Big Music reprises classic hit 'ISPs need to stop their customers torrenting or we'll sue'

doublelayer Silver badge

Re: Did they actually look at what was being torrented?

Yes, the detection based on hashes alone would fail. They would have to download the file to check its content. There are several problems with the suggestion from the perspective of someone wanting to pirate and allow others to pirate without getting caught:

1. You can't do that with a torrent. Torrents only work when they can deliver identical, byte-for-byte copies. Deliver ones with additional noise that's different per user and your seeds will no longer be able to deliver the content. You can do that if you're operating a central server that hosts the pirated content, but now you're incurring a lot more bandwidth usage to deliver the same number of copies.

2. The copyright owners can still download the file and identify that it's their music in there. Just having someone listen would be enough, and there are also pieces of software intended to detect similarity between audio files of different encodings or qualities which would instantly figure it out from a downloaded file.

At the end of the day, it wouldn't be effective enough to produce any notable change.
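To illustrate point 1, here is a small Python sketch of the per-piece hashing BitTorrent v1 uses for its "pieces" field. The piece size and the data are made up for the example, but the principle holds: identical bytes hash identically, and a per-user watermark breaks the match for any piece it touches:

```python
import hashlib

PIECE_SIZE = 2 * 1024 * 1024  # illustrative piece size of the order torrents use

def piece_hashes(data, piece_size=PIECE_SIZE):
    """SHA-1 of each fixed-size piece, as BitTorrent v1 computes for its
    metainfo. Identical bytes always produce identical piece hashes."""
    return [hashlib.sha1(data[i:i + piece_size]).hexdigest()
            for i in range(0, len(data), piece_size)]

original = b"some encoded audio " * 500_000     # stand-in for a music file
watermarked = original + b"per-user noise"      # a uniquely tagged copy

# The untouched copy matches the torrent's piece list exactly; the tagged
# copy differs in at least its final piece, so seeds holding the two
# versions can no longer serve each other's peers.
print(piece_hashes(original) == piece_hashes(original))     # True
print(piece_hashes(original) == piece_hashes(watermarked))  # False
```

The same property is what lets a rights holder match a downloaded chunk against a known encoding: a matching piece hash is very strong evidence of identical content.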

doublelayer Silver badge

Re: Did they actually look at what was being torrented?

You do realize that they can download the file, which they have reason to believe is their copyrighted content, because they own the copyright? It is not infringement to download an illegally distributed copy of something you have the right to. From a technical perspective, they don't have to seed the file, because just downloading proves both what is in it and who sent the data. They have no need and no reason to upload. They might not even have to download to figure out what is in it because, with a torrent, they have both the file names and, crucially, the hashes of the chunks of the file. If those hashes match an illegal encoding they already have, that will be clear enough to stand as evidence, at least enough that the person charged will have to show their file that just happens to have a hash collision for every 2 MB chunk if they want to disprove it.

I get it. You're looking for some reason why their legal actions should be invalid. I think you'll find one for the ones mentioned in the article where they try to have automatic rights over everyone's network connection. There's no law giving them that power. However, when it comes to torrents, your excuses for why their legal arguments won't work are getting both the technology and the law wrong. No matter how annoying I find their actions, I can't just decide that it isn't legal. Courts do not work that literally and if they did, the law is specific enough that it would still work.

Google to kill off URL shortener once and for all

doublelayer Silver badge

Re: Good riddance

I think there are several more problems with bidirectional links, and they basically only worked in TBL's internal data system, which had, compared to the internet, a very small scope of data to be catalogued. With anything too large, links tend to make sense only in one direction. If my project links to a library I used, that makes sense, because someone modifying my project might want to find the canonical source of the component. If the library links back to my project, it makes much less sense, because that library does not use my project, so at best it can be an example of something you can do with their library and it might not be a good one.

"I think there is a potential role for something between a link shortener and a URN: a service owned by an identifiable authority, with established criteria for cataloguing resources that could issue permanent "handles" for resources whose actual target could be transparently changed to match their present physical location."

I'm not sure when that would be more useful than a more efficient alternative. For instance, we could do that for scientific journal articles, which are relatively easy to name uniquely, and the trusted authority could index them and keep a database of the URLs where you can find the paper. Fine, but nothing prevents someone who operates the server it's pointing to from accidentally shutting it down and disconnecting access. Presumably, the cataloguing authority has to detect that and get the server to come back or find another source. In comparison, if they just copied the thing, then they just have to keep some disk space around and stay online themselves. Less administrative effort and therefore expense means they're more likely to do something like that. That applies as well, if not better, to something that's less organized than scientific papers, because unless the files are very big, the administrative effort of keeping track of their locations is likely higher than the disk space needed to store them.

doublelayer Silver badge

Re: Good riddance

There are times when a shortened link is either necessary altogether or necessary given the constraints of the control over the system. For instance, when someone has decreed the use of a certain CMS on a website which generates long links and might be changed in the future, but the link has to be read out and typed in manually because someone's going to mention it in a speech, video, or advertisement. However, my solution when this has proven necessary is to build my own link shortener. At least then, the domain name is the same and the user can know which organization created the link. It also makes them easier to maintain in the future because no external organization can shut them down and, if the destination moves, the shortened link can be updated.
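As a sketch of why the in-house approach stays maintainable, here is a toy Python version of such a shortener's lookup table. The code, the short code, and the URLs are illustrative only, not any production design:

```python
# A toy version of the in-house shortener described above: the short
# codes live under your own domain, and a destination can be re-pointed
# later without breaking the printed or spoken short link.

SHORT_LINKS = {}  # short code -> current destination URL

def shorten(code, destination):
    """Register (or re-point) a short code. Because we own the table,
    a page that moves just needs this called again with the new URL."""
    SHORT_LINKS[code] = destination

def resolve(code):
    """The URL the web server would issue a redirect to, or None."""
    return SHORT_LINKS.get(code)

shorten("talk24", "https://example.org/cms/very/long/generated/path?id=9817")
print(resolve("talk24"))

# A CMS migration changes the real URL; the spoken link keeps working:
shorten("talk24", "https://example.org/new-cms/talks/24")
print(resolve("talk24"))
```

In a real deployment the table would sit behind a small redirect handler on the organization's own domain, which is exactly what keeps the user able to see who created the link.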

Dangerous sandwiches delayed hardware installation

doublelayer Silver badge

I used to work in a corner of an office where the sensors weren't very good. They did not detect my normal movements and would switch off if I was the only one there. If they did this and I simply raised my arms and waved them, that wasn't enough movement to register from the corner, so they'd stay off. If I wanted them to go back on, I had to stand up and walk away from my desk, then walk back. While it was a good reminder when alone to stand up sometimes, there were other times when the thing I was debugging had gotten enough of my attention that I just lived with the darkness.

UK comms watchdog banning inflation-linked mid-contract price rises

doublelayer Silver badge

Re: Prospects

Which they can do just fine, but it also means that, if they choose to put the prices up, customers can leave them almost immediately. That is why a lot of places that actually do month-to-month contracts don't mess with the prices too often. They know that doing that will cause people to leave and that they often attract their customers with simple and stable prices because their customers are those who shopped around to find them and can shop around again if they don't like them.

Firms skip security reviews of major app updates about half the time

doublelayer Silver badge

Re: Why security reviews are so time and money-consuming :o

No, that's not it, or at least that's not the major reason. It's because security and vulnerabilities are such large sets that there's no simple formal method of defining something as secure. Take the operation of opening a file and writing something to it. The OS doesn't make that insecure. While you might find a filesystem bug that makes that operation vulnerable, or a bug in a kernel or process that can be invoked by doing so, those aren't that common. Yet there are still lots of possible vulnerabilities any time that is done, most of them intra-program. The file could be subject to a deserialization attack when it's read back in later. It could be used to consume resources, providing a denial-of-service vector. It could be used to degrade performance. If the program mishandles paths, it could be used in a directory traversal attack. There are some inter-program methods as well, or at least inter-process. None of these things are due to the platform and tend to be just as available on any operating system; they come down to practices during development of that application. Many of them won't apply just because of the way the program is designed. If you don't let the user name the created file, that excludes some classes of possible vulnerabilities right there. That's not a universal rule that the user must never supply file names, but one consideration among others when making implementation decisions.

A security review is supposed to identify risks like this, but only some of those are easily detected by an automated tool. Tools are improving, but there are still many that will be difficult or impossible to detect that way. Often, vulnerabilities in a piece of software are not carried over from its platform, but come from that software itself. Blaming the platform when bugs are found elsewhere is just going to let writers of insecure code off the hook.

Agile Manifesto co-author blasts failure rates report, talks up 'reimagining' project

doublelayer Silver badge

Re: The more process you have the less agile you are.

And that's great right up until the point where that team doesn't want to do something, so they just don't. The typical example is documentation. I know a lot of developers who don't want to write it. I know a lot of companies that don't want to employ someone else to write it, and if they did, the developers don't want to tell those people the kind of stuff necessary to write it. I've seen both those groups use the line about valuing working software over documentation in the Agile Manifesto as an excuse for why their stump of a readme and error messages is enough documentation. It isn't.

There are a lot of good processes that come about organically from a team just trying to get something done, but sometimes, that team needs to get a very specific thing done, the kind of thing that no team just decides they want to do. Few or no people have gotten together with the dream that, if they put in some time, they could build a really nice web interface for forms and processes of a local bureaucracy, but someone eventually has to write the software that does that. The processes that work for one do not necessarily work well for the other, because the bureaucracy in question doesn't understand the technical reality of what they need, the devs don't understand the processes the code is supposed to deal with, the customer does not have the time or inclination to test a gazillion intermediate versions that don't do anything of use because not everything is connected up yet, and the local government has fixed budgets and timelines because they are required to do so. That can be resolved in a variety of ways, and in some of them, the more Agile approach is the better one. However, the completely Agile approach, where the customer's whim is sufficient to change things at the last minute and there will be lots of those because nobody planned out all the needed functionality at the start, is bound to create chaos when the scope has changed but the timelines have not.