The Register Home Page

* Posts by doublelayer

10859 publicly visible posts • joined 22 Feb 2018

CrowdStrike blames a test software bug for that giant global mess it made

doublelayer Silver badge

Re: It worked on my machine!

It really doesn't matter. They should test their releases, no matter what those releases contain, before going public with them. Whether that is code, or configuration, or some other category of data doesn't really matter. If the behavior of the system has changed slightly, testing is required.

Yes, depending on our definitions, we can argue that it's not code. After all, if someone's counting my lines of code, they usually wouldn't count the lines of the JSON file I've just written. But when my program behaves differently because the JSON file is different, the change can have the same damaging effects as a change to what we typically call code, and it therefore needs the same kind of testing.
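
To make that concrete, here's a hypothetical sketch (the function and config key are invented for illustration) of how a one-character change in a JSON file alters behavior as much as a code change would:

```python
import json

def apply_discount(price, config):
    """Apply a discount according to a JSON-loaded config.

    The code never changes, but its behavior is entirely driven by the
    config values, so a config edit needs the same testing a code edit would.
    """
    rate = config.get("discount_rate", 0.0)
    return round(price * (1.0 - rate), 2)

# The "old" and "new" releases differ only in a JSON string,
# yet produce different behavior from identical code.
old_config = json.loads('{"discount_rate": 0.10}')
new_config = json.loads('{"discount_rate": 1.10}')  # a typo: 110% discount

print(apply_discount(100.0, old_config))  # 90.0
print(apply_discount(100.0, new_config))  # -10.0: the "data" change broke the system
```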

doublelayer Silver badge

I'm guessing there was a difference between the version that was tested and the version that got released. That could happen in a lot of ways. Maybe two changes were merged into this file and building them together makes the bad file. Maybe it had to do with some additional content in the production build which isn't present in the debug build. There are plenty more.

I've seen the latter example from time to time. For instance, a task where two people wrote code. First, my coworker wrote one unit, then sent it to me. I wrote the second unit. In my testing, these units worked just fine together. Correct results, no crashes, positive and negative results handled as expected. Build it for production and the automated tests freak out instantly. The reason: my debug build was writing more to the log file in case anything went wrong. That slowed things down slightly, which was enough to prevent the race condition in the two processes from going wrong. Take out the logging and the processes might have a concurrency problem and fail. But it worked just fine on my machine. Probably it would have failed eventually if I ran it with the extra logging enough times, but it didn't in the maybe thirty runs I actually did.
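
The lost-update failure mode described above can be sketched deterministically, with the interleaving forced by hand rather than left to thread timing (this is a simplified, hypothetical model, not the original code):

```python
# A deterministic sketch of a lost-update race. Instead of relying on
# real thread timing, we force the interleaving by hand: each "worker"
# reads the shared counter, then writes back its increment.

def run(interleaved):
    counter = 0
    if interleaved:
        # Both workers read before either writes: the classic race.
        read_a = counter
        read_b = counter
        counter = read_a + 1   # worker A writes
        counter = read_b + 1   # worker B overwrites A's update
    else:
        # Sequential, which is what the extra debug logging
        # effectively caused by slowing one worker down.
        read_a = counter
        counter = read_a + 1
        read_b = counter
        counter = read_b + 1
    return counter

print(run(interleaved=False))  # 2: correct, as in the debug build
print(run(interleaved=True))   # 1: one update lost, as in production
```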

Sam Altman's basic income experiment finds that money can indeed buy happiness

doublelayer Silver badge

That is calculable. Of course, depending on how you implement it, different numbers and methods would have to be used, but it isn't difficult to get approximate numbers.

For instance, this experiment took place in the US. The current US population is about 335 million. We'll also use the same payment amount as this study, $1k per month or $12k per year. That makes a total annual cost of $4.02 trillion. You've asked to have this paid by the top 0.1%, which is 335,000 people, or an annual payment of $12 million each if it were divided equally, which it probably wouldn't be. The wealth required to get you into the top 0.1% is $62 million (source). You would need a wealth tax on the order of 20% annually to raise that much, and if you did it, it might work for a decade, depending on the return on investment we assume for the funds they retain.
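
Those figures can be reproduced with a quick back-of-envelope script; all inputs are the rough assumptions stated above, not official statistics:

```python
# Back-of-envelope version of the figures above.

population = 335_000_000          # approximate US population
annual_payment = 12_000           # $1k/month, as in the study
top_fraction = 0.001              # "top 0.1%"
entry_wealth = 62_000_000         # rough wealth floor for the top 0.1%

total_cost = population * annual_payment          # ~$4.02 trillion/year
payers = int(population * top_fraction)           # 335,000 people
per_payer = total_cost / payers                   # $12 million each
implied_tax_rate = per_payer / entry_wealth       # ~19% of wealth, annually

print(f"total: ${total_cost/1e12:.2f}T, per payer: ${per_payer/1e6:.0f}M, "
      f"implied wealth tax: {implied_tax_rate:.0%}")
```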

If you try to implement that, expect several consequences, including lots of wealthy people trying to prevent you from doing that. You should also expect that people will search for or create loopholes to get out of it, because you can hire a lot of lawyers and accountants for less than 20% of a 0.1% level wealth. But also remember that it is not going to work forever even if you can get exactly what you asked for. You will need a plan for what you will do after that. Lowering the wealth cap is the most likely solution, but it too won't last very long.

doublelayer Silver badge

Re: Nice to see these tech types...

This is definitely true, although some of those jobs didn't disappear but went to other people. Many did disappear, though. However, we didn't decrease the number of jobs needed altogether; if the people who lost their jobs were not doing any job at all, there would be a shortage of labor. This means that we have not reduced the need for labor, but we have changed what labor we need, and depending on what jobs are available in your area, we may have made the available types of labor much worse.

This is why I think focusing on the futuristic theory of labor elimination is the wrong approach. Right now, we have people who have or will lose the jobs they have done for a long time. We need to figure out what is right for them right now, not what would be right for them in a theorized world of complete automation, because neither of us lives in that world. If we promise them things that would make sense in that scenario, we will give them false hope. If we require things of them that would make sense in that scenario, we may unfairly burden them for not living in an unattainable future. By considering a speculative future rather than the reality we're in now, we're making things worse for the people who have lost their jobs today.

doublelayer Silver badge

Re: Nice to see these tech types...

The reason they're expensive is because of the resources it takes to make them. Not just the metal, motors, and chips, but the labor to manufacture them, the work to design and program them, the expense of maintaining the stocks, workers, and expertise needed to repair them. Because they often need to be customized for each task, a lot of that isn't done at scale, making it more expensive (yes, in money, but also in the sense of how many engineers you hire to do the work or how many different sets of plans you have).

doublelayer Silver badge

Re: we have mass underemployment now

In what sense? There are lots of ways to quantify what ideal employment numbers might be, and there are lots of ways to calculate unemployment. I assume that the "flaw" you're referring to is that the denominator is often the number of people in work or actively looking for work, so it doesn't count those who are not trying to get jobs. Maybe you're instead thinking that the flaw is in data collection, which can fail to identify some types of people because they don't appear in employment records. However, neither of those flaws is very relevant to the discussion of whether technology has reached a point where we could continue our current lifestyle with significantly less labor, or of when and what level of technology would be needed to get us there. Even that would probably need a better definition of "significant", which I arbitrarily defined as 5% of humans of working age needing to work, but you could easily argue that this is too strict a threshold.

doublelayer Silver badge

When you say "I 'invented' UBI back in the 90s", what do you mean by that? Because you definitely didn't invent the concept. People have been discussing, recommending, and in some cases implementing something like that for centuries. While I'm not sure about the specific term because it's hard to get a search engine to find the first use of it, I also have references to "basic income" and "universal income" from the 1960s, so I don't think you invented the term either.

doublelayer Silver badge

Re: How would this affect the wider economy?

That's an optimistic picture you have painted, but you're leaving out a lot of things which will probably block you from getting there.

For example, what are the jobs that so many people want to do that they would accept no pay and still be happy? Remember that many of the jobs lots of people want now are jobs that pay lots of money; many people don't want to do what a CFO does, but plenty of them would be willing to try for the CFO's paycheck. There are some other jobs which are popular enough that you could pay little and still fill them, but those are very limited in supply (often meaning that the people selected end up being paid well anyway).

Now let's consider the jobs that nobody wants, agricultural work for instance. There are lots of unpleasant jobs there, so the wages for those jobs would rise significantly. As you said yourself, that means the prices for the products would rise, and in this case the product is food, one of the things the income is supposed to cover. That means the UBI level will have to go up, not down.

The combination of these means that staples are the most likely to increase in price, and I'm not sure whether anything would decrease, but if something did, it would probably be a luxury good or service. If you don't plan for handling this, the program might fail quickly, which would probably do more harm to future attempts than never trying it at all.

doublelayer Silver badge

Re: Nice to see these tech types...

I don't think mass unemployment is going to happen as quickly as the predictions have made out. If we get tech developed even more, it could eventually happen, but we have many tasks that have proven difficult to automate. General-purpose robots that can serve as drop-in replacements for humans basically aren't available, and more customized ones that do one job are quite expensive, so while they're common in things like manufacturing, they're less common when tasks vary too much or are intermittent.

Theorizing about what we should do in that circumstance isn't bad, but it may be premature in the same way that theorizing about how to run a society across solar systems is. It may not happen for many generations, meaning that when our descendants need to answer the questions, our answers won't be very useful to them. Meanwhile, there are lots of intermediate stages which are going to happen during our lifetimes, and if we've spent our effort thinking about the far-out future, we may have planned insufficiently for those.

Security biz KnowBe4 hired fake North Korean techie, who got straight to work ... on evil

doublelayer Silver badge

Re: Where can I get more of that scam?

They could modify the picture on the identification documents instead, but if they did that, the picture would no longer match any other pictures of the victim that might be found online. I wouldn't search out pictures of people to check them against the documents, but if someone did, that might make it difficult to get away with an ID that's just had someone else's picture swapped in. Meanwhile, if they have good enough software that they can appear in a video as the person whose real face is on the ID, then that is less likely to be caught by the employer before someone is hired. This is especially true for differences in age: if I swap my picture onto an identification document for someone aged 23, but I claim ten years' professional experience and a birth date in line with that, it might be more obvious that I'm not who I say I am.

doublelayer Silver badge

Re: Where can I get more of that scam?

Maybe the identity they used didn't look much like them, but they didn't have an unlimited supply of fake identities that all look like Koreans of the right age (I think this is mostly young males, as North Korea didn't prioritize computer skills until the last fifteen years, and at least one of the technology-focused schools is male only). If they're using stolen US identity documents, they may have to take steps to appear to be the person pictured in them.

doublelayer Silver badge

Re: Real location

There are two really easy ways to get around that.

1. The laptop was delivered to an address in the US. The person at that address has been told to get the package and send it to Hong Kong. The person in Hong Kong has been told to send it to Shenyang. Someone in Shenyang gets it and brings it to wherever they want it.

2. The laptop was sent to an address in the US. There, it was set up with a local internet connection and the IP sent to China or North Korea, where someone set up a remote connection to it.

Philadelphia tree trimmers fail to nip FTC noncompete ban in the bud

doublelayer Silver badge

Re: Josh Robbins of libertarian law group

I've found that "libertarian" usually translates to "those regulations I like and no others". So they'll probably argue here that they don't want people to lose the right to choose whether to accept a non-compete contract, a choice that should come with a compensatory increase in wages. The argument doesn't make a lot of sense in context, but if you find someone and get them to answer, I can pretty much guarantee that's what you'll hear. If you find someone else who calls themselves a libertarian, however, there's a good chance they'll completely disagree with these guys on what is fair and what should be done to achieve it. That's why I don't call myself a libertarian; too many people using the term hold views I disagree with, and using it would only confuse everything.

Musk deflects sluggish Tesla car sales with Optimus optimism

doublelayer Silver badge

I can answer that one. You absolutely need an LLM running in a car when the company that makes the LLM will supply it for $5 billion in up-front "investment", will charge per use, and you own that company. Free money is wonderful. You just add it onto the price of each car, and when people don't buy as many of them, you make some other promise of what other wonderful feature they'll have and wait for people to bid up the stock again. This won't work in the long term, but you'll have extracted plenty of actual money in the meantime.

Forget security – Google's reCAPTCHA v2 is exploiting users for profit

doublelayer Silver badge

Re: tracking cookies?

I think some version of it, which might not be the same one they're using now, would use cookies set by other Google products as a way to bypass the check. If you identified yourself to Google and they could track you onto the page, then you're allowed through. Otherwise, you get a test.

CrowdStrike fiasco highlights growing Sino-Russian tech independence

doublelayer Silver badge

Re: @Doctor Syntax - If Russia gets away with destroying Ukraine :o

Hmm. What countries might be on the borders of Russia and recently joined NATO? Starts with F. What countries aren't on the border but would be if Russia takes Ukraine? Starts with R. What country has moved closer to NATO membership and is in the latter position? Starts with M.

If you went with countries that joined in 2023 rather than ones that joined in 2009, you might have already figured this out.

How did a CrowdStrike file crash millions of Windows computers? We take a closer look at the code

doublelayer Silver badge

Yes, they could have implemented a two-stage process where they still have a kernel-level program and it provides data out to something else. There might have been an efficiency drop from doing that, but it would probably have been acceptable. The critical point, however, is that this change, while it might have prevented this problem, still involves there being code running at kernel level which, if it broke, would break the kernel. The attempts to blame Microsoft often take the form of explaining that CrowdStrike shouldn't have run anything at kernel level at all, which would not work, and then finding a reason why it's Microsoft's fault that they could, which it isn't.

doublelayer Silver badge

Re: RE: examples that have done better

You mean that one mentioned in the article. It might have read something like this:

"The way that it works is that drivers can set a flag called boot-start," he said.

"So normally, if you've got a driver that's acting kind of buggy and causes a failure like this, Windows can auto resume by simply not loading the driver the next time. But if it is set as boot-start, which is supposed to be reserved for critical drivers, like one for your hard drive, Windows will not eliminate that from the startup sequence and will continue to fail over and over and over and over again, which is what we saw with the CrowdStrike failure."

So they have that by default, and it would have done exactly what you describe except that a flag was set specifically to bypass that safety feature. As it says, there's a good reason to allow something to set itself that way, in case this is required for the system to boot correctly anyway.

doublelayer Silver badge

Several things in your comment are wrong or misleading:

"The issue here is why a vulnerability tool has to go in the kernel. Something like that should only be running in user space: no ifs or buts."

It goes in the kernel so that it has more visibility and control over what happens. There are some things that can't be done from user space at all, for perfectly good security reasons, and others which can't be done efficiently from there.

Next, the claim that Microsoft is to blame for putting it in. They didn't. CrowdStrike is not a Microsoft product or dependency. People install it. Just as if I write a kernel module, I didn't ask for or get Linus's sign-off before running it. People are able to install things at kernel level, and they make the choice whether to do so or not. It is not Microsoft's decision to permit it, and if it were, we would be rightly complaining about the level of authority they claim in making that choice for us. They should not and do not deny people the right to do something potentially damaging with their own computers.

doublelayer Silver badge

This article explains, if you didn't already know, why Windows has to go down when code which is running as part of the kernel breaks this badly. Guess what would happen if a kernel module I loaded into Linux, Mac OS, or any other operating system had a memory violation. That's right, it would panic. It is required to panic. If it did not panic, that kernel has a serious reliability problem.

Until people understand that, the attempts to find a reason why Microsoft is to blame here will not work. Maybe you or someone else can actually find a thing that Microsoft should be doing differently related to this, but while people continue to post comments trying to blame it for doing something both standard and necessary, you will fail to make any case because it appears that you have a gap in important systems knowledge.

Cellebrite got into Trump shooter's Samsung device in just 40 minutes

doublelayer Silver badge

Re: You know...

I don't agree. Whenever I've seen someone try that, they take a relatively basic approach, one which I don't think gets anywhere. Basically, they follow this plan:

1. Read something the killer wrote. If it's a manifesto, that. If it's not a manifesto, something they posted to social media. If they didn't post on social media, a message sent to someone picked at random.

2. Decide on some opinion that they seem to hold strongly. If this is an opinion you dislike, go to 3. If not, go to 4.

3. Breathless announcement: people who think [opinion] are killers. We should do something about that kind of person.

4. Is there another opinion, one you dislike this time? If so, go to 3. Otherwise, go to 5.

5. Wait for next killer.

Opinions that you can actually make that case about are pretty obvious, because such things often take the form of "I dislike [x] and would like to kill people who, in my mind at least, represent [x]". You don't need much to figure out that a person who says that is potentially murderous. Even then, you have a lot of people who may say that and never actually do anything. If you get any broader, your correlations will be worthless and lead to harmful stereotypes, for instance "The guy who killed people was a soldier, it is not the first time a soldier was responsible for a mass killing of innocent people, that means soldiers are killers". Simplistic to the point of inaccuracy and not something you can do anything about.

doublelayer Silver badge

Re: You know...

I'm sometimes curious to understand the motivations of crazy people to commit murders, but let's call that what it is, curiosity. If we knew that, what would be different at all? Nothing. The victims would be no less dead. Future victims of other people would be no more safe. Maybe, if this was an organized event with other participants, some of them might have been tracked down, but that's not really what you were talking about. Whatever the logic was, we already know that it wouldn't actually make sense to anybody, and unless someone has a stereotype they want to uphold, it would not apply in the same way to anybody else.

That makes it hard to argue for the release of manifestos or the like from murderers when we have them, and it makes it really hard to justify going to the effort to try to break in in the thought that one might exist or be reconstituted from other data. Basically, calls to do so sound to me like "Let's go to significant effort and expense to guess the content of something that might not exist and wouldn't be useful even if we got it".

HCL's back-to-office plan: Come in three days a week, or forget about holidays

doublelayer Silver badge

Why this method

This seems like a really bad method to get people into the office, even if we assume that we want the goal. For a moment, skip whether getting people into the office is a good thing and just assume that you're in management, you want the people there for some good reason, and you're going to get it.

The workers probably don't have a contract that states they can work remotely for as long as they want, meaning that management can, by fiat, just tell them that they have to be in the office. This might not apply to certain groups, for instance people hired remotely during the pandemic in a place where the company has no offices, but anyone who was in an office before the pandemic or could get to one now would probably be subject to such an order. Anyone who refuses can be penalized for not doing what they were told, up to and including firing. If the company doesn't want to fire them, it can also use a number of smaller sticks against them.

Why, when all those levers are available, would they pursue something complicated and potentially illegal with the leave policy?

CrowdStrike's Falcon Sensor also linked to Linux kernel panics and crashes

doublelayer Silver badge

Re: The problem is operational

Often, it is considered the OS's job to execute the software provided, and if you've chosen to let that software run at kernel level because you want it to have access to everything, that means it can mess up the kernel. An operating system that allows you to install software at that level cannot also prevent errors in that software from having deleterious effects.

So we move on to your next suggestion, which is more plausible, of automatic recovery. That one can work. Have a versioned filesystem, and whenever you have a kernel panic, rewind to an older version and boot that. Of course, if the panic happened because some hardware failure triggered a kernel bug, then you'll end up rewinding yourself to the earliest version available as it panics every time, and it might provide a method for an attacker to remove recent updates in order to reactivate a vulnerability, but in principle the idea would work and those additional dangers could be mitigated by other protections. We would have to figure out what those protections should be and design them, but your second suggestion is possible.
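
The rewind-on-panic idea could be sketched roughly like this; the snapshot format and boot function are entirely hypothetical stand-ins for a real versioned filesystem:

```python
# A toy sketch of automatic recovery via rollback: keep versioned
# snapshots, and if booting a version panics, rewind to the previous one.

def try_boot(snapshot):
    """Simulated boot: panics if the snapshot contains the bad update."""
    if snapshot.get("bad_update"):
        raise RuntimeError("kernel panic")
    return f"booted version {snapshot['version']}"

def boot_with_rollback(snapshots):
    # Snapshots are ordered newest-first; walk back until one boots.
    for snap in snapshots:
        try:
            return try_boot(snap)
        except RuntimeError:
            continue  # panic: rewind to the next-oldest snapshot
    # If every version panics (e.g. a hardware-triggered bug), we have
    # rewound to the earliest snapshot and still failed.
    raise RuntimeError("no bootable snapshot")

history = [
    {"version": 3, "bad_update": True},   # the broken release
    {"version": 2, "bad_update": False},  # last known good
    {"version": 1, "bad_update": False},
]
print(boot_with_rollback(history))  # booted version 2
```

This also shows the danger mentioned above: if the panic is caused by hardware rather than a snapshot, every rollback fails and the loop exhausts the history.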

doublelayer Silver badge

Re: And this, ladies & gentlemen, is how you DDoS the entire world.

And all you have to do is get it to run at kernel-level permissions. If you have the kind of access needed to install this file to break a computer, you don't need it. With that access, you could achieve a similar, if not more severe, effect just by deleting files at random until you are no longer able to delete files. That computer is not booting without a reinstall; no booting to recovery and deleting a file will fix it. The benefit to the hacking community, any section of it, is zero.

CrowdStrike shares sink as global IT outage savages systems worldwide

doublelayer Silver badge

Re: The fault's with Microsoft

Basically, no. If I put in a program which works at kernel level, configure that program to start early in the boot process, and then do something in that process which takes down the kernel, having a Linux kernel instead of an NT kernel won't prevent that from crashing the system nor from making the recovery process annoying. There are some differences meaning that I might not have to run at kernel level for the same purposes, and then maybe my mistake will happen at a higher level and the boot will complete, but there is no guarantee that this will happen. Linux gives the user the ability to run software with very elevated permissions, enough to cause serious faults if that software is badly written.

doublelayer Silver badge

Re: The fault's with Microsoft

Yes, like that. If my laptop was like a non-smart phone, as in it can run the three programs that the manufacturer came up with, with the small subset of supported protocols that they chose to put in, and if I needed anything else at all I had to buy new hardware to get it, it would be a pretty bad laptop.

doublelayer Silver badge

That if there are repeated instances of this assumption that turn out to be wrong, then the assumption is probably bad and people are sticking with it out of habit and getting themselves into error? Isn't that what you do with assumptions which are repeatedly wrong?

doublelayer Silver badge

"If it were an attack, it would be CrowdStrike's data that leaked, they would be the ones suffering the continued problems."

Supply chain attacks don't work like that. If it had been one, and it wasn't, then customer data would be at risk.

"Was this due to an attack on CrowdStrike or "merely" their incompetence? Who gives a damn?!"

Me. If the data I'm responsible for has been copied to an attacker's systems, I need to start dealing with it, and I need to start doing that right now. If it hasn't, then someone else needs to clean up the systems, and I would likely pitch in to help. Depending on whether it's an attack or a malfunction, my next steps are different, the situation for the users and customers is different, the likelihood of substantial damage to my employer is different, so I care. If you work in any area related to this, you should care too.

doublelayer Silver badge

No, you would not be correct. Read again. It's not Defender. It wasn't pulled through Microsoft. The central fact, and one that's usually in the second paragraph of most stories, is that if CrowdStrike was not installed, you didn't have a problem.

I'm not sure if this is another attempt to find a reason why this is actually Microsoft's fault or not, but you have critical facts missing from your model.

doublelayer Silver badge

Sure, apart from the active attacker having copies of the data and continuing to do even more damage. Not all bad incidents are the same, and this is different from a cyberattack in several ways. That doesn't make it good, but it's akin to saying that a car crash is exactly the same thing as falling down the stairs, because the injuries you received are basically the same.

FTC grabs controller as Microsoft jacks up Game Pass price by 81%

doublelayer Silver badge

Re: I’m shocked.

The US regulators didn't "cave in", they tried to block it repeatedly and a judge wouldn't let them block it indefinitely. That's why they're still appealing it. The EU agreed to the merger after getting some promises from Microsoft, promises that haven't yet been broken, but the US and UK regulators hung on for longer. While the UK eventually agreed to the merger, the US's regulator has never approved it and is still trying to retroactively disassemble the two.

Big Music reprises classic hit 'ISPs need to stop their customers torrenting or we'll sue'

doublelayer Silver badge

Re: Did they actually look at what was being torrented?

Yes, the detection based on hashes alone would fail. They would have to download the file to check its content. There are several problems with the suggestion from the perspective of someone wanting to pirate and allow others to pirate without getting caught:

1. You can't do that with a torrent. Torrents only work when they can deliver identical, byte-for-byte copies. Deliver copies with additional noise that's different per user, and all your seeds will no longer be able to deliver the content. You can do that if you're operating a central server that hosts the pirated content, but then you're incurring a lot more bandwidth usage to deliver the same number of copies.

2. The copyright owners can still download the file and identify that it's their music in there. Just having someone listen would be enough, and there are also pieces of software intended to detect similarity between audio files of different encodings or qualities which would instantly figure it out from a downloaded file.

At the end of the day, it wouldn't be effective enough to produce any notable change.
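
Point 1 can be illustrated in a few lines: a torrent's metadata fixes a hash per piece (SHA-1 in the original BitTorrent spec), and a copy with even one bit of per-user noise fails verification for the affected piece. The tiny piece size here is purely for illustration:

```python
import hashlib

# A torrent's metadata advertises one SHA-1 hash per fixed-size piece,
# and every peer's copy must match those hashes byte for byte.
PIECE_SIZE = 16  # real torrents use pieces in the hundreds of KB to MB range

def piece_hashes(data):
    return [hashlib.sha1(data[i:i + PIECE_SIZE]).hexdigest()
            for i in range(0, len(data), PIECE_SIZE)]

original = b"some copyrighted audio data, byte for byte"
per_user = bytearray(original)
per_user[3] ^= 0x01  # add one bit of per-user "noise"

expected = piece_hashes(original)         # what the .torrent advertises
received = piece_hashes(bytes(per_user))  # what the modified seed offers

# The first piece no longer verifies, so other clients discard it:
print(expected[0] == received[0])  # False
print(expected[1] == received[1])  # True: untouched pieces still match
```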

doublelayer Silver badge

Re: Did they actually look at what was being torrented?

You do realize that they can download the file, which they have reason to believe is their copyrighted content, because they own the copyright? It is not infringement to download an illegally distributed copy of something you have the right to. From a technical perspective, they don't have to seed the file, because just downloading proves both what is in it and who sent the data. They have no need and no reason to upload. They might not even have to download to figure out what is in it because, with a torrent, they have both the file names and, crucially, the hashes of the chunks of the file. If those hashes match an illegal encoding they already have, that will be clear enough to stand as evidence, at least enough that the person charged will have to show their file that just happens to have a hash collision for every 2 MB chunk if they want to disprove it.

I get it. You're looking for some reason why their legal actions should be invalid. I think you'll find one for the actions mentioned in the article, where they try to claim automatic rights over everyone's network connection; there's no law giving them that power. However, when it comes to torrents, your excuses for why their legal arguments won't work are getting both the technology and the law wrong. No matter how annoying I find their actions, I can't just decide that they aren't legal. Courts do not work that literally, and even if they did, the law is specific enough that it would still apply.

doublelayer Silver badge

Re: Whack-A-File

They wouldn't, so if you're going to send some copyrighted content, that will probably work. Not many torrents are done that way, though, because it makes it really hard for anyone else to find the stuff they want. You can easily hide what you're transmitting by doing that, but only if you've somehow told anyone else who might want the content you're hiding that it can be found there. Meanwhile, if you have a pirate site that just calls every torrent "LibreOffice_24.2.5_MacOS_x86-64.dmg.torrent", it won't have any protective effect at all because those trying to find torrents will start on that site that has the real names, and if they own the copyright to the content, they have committed no crime by downloading it to verify what is there.

doublelayer Silver badge

Re: Did they actually look at what was being torrented?

I think what they meant to say but didn't is that you just have to find an existing torrent and, without needing to actually download any chunks, log the address of anyone who offers a chunk. You don't need to send the file, or even have the file, in order to do it.

Google to kill off URL shortener once and for all

doublelayer Silver badge

Re: Good riddance

I think there are several more problems with bidirectional links, and they basically only worked in TBL's internal data system, which had, compared to the internet, a very small scope of data to catalogue. With anything too large, links tend to make sense only in one direction. If my project links to a library I used, that makes sense, because someone modifying my project might want to find the canonical source of the component. If the library links back to my project, it makes much less sense, because that library does not use my project, so at best my project can serve as an example of something you can do with the library, and it might not be a good one.

"I think there is a potential role for something between a link shortener and a URN: a service owned by an identifiable authority, with established criteria for cataloguing resources that could issue permanent "handles" for resources whose actual target could be transparently changed to match their present physical location."

I'm not sure when that would be more useful than a more efficient alternative. For instance, we could do that for scientific journal articles, which are relatively easy to name uniquely, and the trusted authority could index them and keep a database of the URLs where you can find each paper. Fine, but nothing prevents whoever operates the server it points to from accidentally shutting it down and cutting off access. Presumably, the cataloguing authority has to detect that and get the server to come back, or find another source. In comparison, if they just copied the thing, they only have to keep some disk space around and stay online themselves. Less administrative effort, and therefore less expense, means they're more likely to do something like that. The same applies, if not better, to material less organized than scientific papers: unless the files are very big, the administrative effort of tracking their locations is likely higher than the cost of the disk space needed to store them.
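For what it's worth, the quoted proposal is essentially a DOI-style indirection table. A toy sketch (all names hypothetical) shows both the appeal and the maintenance burden being argued about here: the handle stays stable, but someone has to notice every relocation and call move():

```python
class HandleRegistry:
    """Toy model of a handle service: a stable identifier maps to a
    mutable location, so the handle survives the resource moving."""

    def __init__(self):
        self._targets = {}

    def register(self, handle: str, url: str):
        if handle in self._targets:
            raise KeyError(f"handle already issued: {handle}")
        self._targets[handle] = url

    def move(self, handle: str, new_url: str):
        # The handle itself never changes; only where it points.
        self._targets[handle] = new_url

    def resolve(self, handle: str) -> str:
        return self._targets[handle]
```

The alternative in the comment above amounts to replacing move() with a copy of the resource held by the authority itself, trading tracking effort for disk space.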

doublelayer Silver badge

Re: Good riddance

There are times when a shortened link is either necessary altogether or necessary given the constraints of the control over the system. For instance, when someone has decreed the use of a certain CMS on a website which generates long links and might be changed in the future, but the link has to be read out and typed in manually because someone's going to mention it in a speech, video, or advertisement. However, my solution when this has proven necessary is to build my own link shortener. At least then, the domain name is the same and the user can know which organization created the link. It also makes them easier to maintain in the future because no external organization can shut them down and, if the destination moves, the shortened link can be updated.
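A self-hosted shortener along those lines can be very small. This is a minimal stdlib sketch (the slug table and destination URL are made up), where the slug-to-URL table is the only thing that needs maintaining when the CMS reshuffles its paths:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical slug table; in a real deployment this would live in a
# small database so destinations can be updated while the short link
# read out in a speech or advert stays valid.
LINKS = {
    "/talk": "https://example.org/cms/sites/default/pages?node=29481",
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = LINKS.get(self.path)
        if target is None:
            self.send_error(404, "unknown short link")
            return
        # 302 rather than 301 so browsers re-check the destination
        # instead of permanently caching a redirect that may change.
        self.send_response(302)
        self.send_header("Location", target)
        self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

Serving it is one line, e.g. `HTTPServer(("", 8080), RedirectHandler).serve_forever()`, behind whatever short domain the organization controls.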

Dangerous sandwiches delayed hardware installation

doublelayer Silver badge

I used to work in a corner of an office where the sensors weren't very good. They did not detect my normal movements and would switch off if I was the only one there. If they did this and I simply raised my arms and waved them, that wasn't enough movement to register from the corner, so they'd stay off. If I wanted them to go back on, I had to stand up and walk away from my desk, then walk back. While it was a good reminder when alone to stand up sometimes, there were other times when the thing I was debugging had gotten enough of my attention that I just lived with the darkness.

UK comms watchdog banning inflation-linked mid-contract price rises

doublelayer Silver badge

Re: Prospects

Which they can do just fine, but it also means that, if they choose to put the prices up, customers can leave them almost immediately. That is why a lot of places that actually do month-to-month contracts don't mess with the prices too often. They know that doing that will cause people to leave and that they often attract their customers with simple and stable prices because their customers are those who shopped around to find them and can shop around again if they don't like them.

Firms skip security reviews of major app updates about half the time

doublelayer Silver badge

Re: Why security reviews are so time and money-consuming :o

No, that's not it, or at least that's not the major reason. It's because security and vulnerabilities are such large sets that there's no simple formal method of defining something as secure. Take the operation of opening a file and writing something to it. The OS doesn't make that insecure. While you might find a filesystem bug that makes the operation vulnerable, or a kernel or process bug that can be invoked by doing it, those aren't that common. Yet there are still lots of possible vulnerabilities any time that is done, most of them intra-program. The file could be subject to a deserialization attack when it's read back in later. It could be used to exhaust resources, providing a denial-of-service method, or to degrade performance. If the program mishandles paths, it could enable a directory traversal attack. There are some inter-program methods as well, or at least inter-process. None of these things are due to the platform, and they tend to be just as available on any operating system; they come down to practices during the development of that application. Many of them won't apply simply because of the way the program is designed: if you don't let the user name the created file, that excludes some classes of possible vulnerabilities right there. That's not a universal rule that the user must never supply file names, but it is one consideration among others when making implementation decisions.
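As an example of that kind of design-level exclusion: the directory traversal class can be shut out by resolving any user-supplied name against a fixed base directory and refusing escapes. A minimal sketch (the function name and error handling are my own, not from any particular codebase):

```python
from pathlib import Path

def safe_join(base_dir: str, user_name: str) -> Path:
    """Resolve a user-supplied file name under base_dir, refusing
    anything that would escape it (e.g. '../../etc/passwd')."""
    base = Path(base_dir).resolve()
    candidate = (base / user_name).resolve()
    # Accept only base_dir itself or paths strictly beneath it.
    if base != candidate and base not in candidate.parents:
        raise ValueError(f"refusing path outside {base}: {user_name}")
    return candidate
```

Resolving before comparing is the important part; a naive string-prefix check on the unresolved path is exactly the mishandling that makes traversal possible.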

A security review is supposed to identify risks like this, but only some of those are easily detected by an automated tool. Tools are improving, but there are still many that will be difficult or impossible to detect that way. Often, vulnerabilities in a piece of software are not carried over from its platform, but come from that software itself. Blaming the platform when bugs are found elsewhere is just going to let writers of insecure code off the hook.

Agile Manifesto co-author blasts failure rates report, talks up 'reimagining' project

doublelayer Silver badge

Re: The more process you have the less agile you are.

And that's great right up until the point where that team doesn't want to do something, so they just don't. The typical example is documentation. I know a lot of developers who don't want to write it. I know a lot of companies that don't want to employ someone else to write it, and if they did, the developers don't want to tell those people the kind of stuff necessary to write it. I've seen both those groups use the line about valuing working software over documentation in the Agile Manifesto as an excuse for why their stump of a readme and error messages is enough documentation. It isn't.

There are a lot of good processes that come about organically from a team just trying to get something done, but sometimes a team needs to get a very specific thing done, the kind of thing that no team just decides it wants to do. Few people, if any, have gotten together with the dream that, if they put in some time, they could build a really nice web interface for the forms and processes of a local bureaucracy, but someone eventually has to write the software that does that. The processes that work for one do not necessarily work well for the other: the bureaucracy in question doesn't understand the technical reality of what it needs, the devs don't understand the processes the code is supposed to handle, the customer does not have the time or inclination to test a gazillion intermediate versions that don't do anything useful because not everything is connected up yet, and the local government has fixed budgets and timelines because it is required to. That can be resolved in a variety of ways, and in some of them, the more Agile approach is the better one. However, the completely Agile approach, where the customer's whim is sufficient to change things at the last minute, and there will be lots of those because nobody planned out all the needed functionality at the start, is bound to create chaos when the scope has changed but the timelines have not.

doublelayer Silver badge

Because, when originally published, it started conversations here about whether Agile is a good thing. Not because the report was any good. If you review the comments when it was originally talked about here, you'll see many people making the same points about the uselessness of the report and you'll notice that few if any of the criticisms of Agile are related to the content of that report, but are instead about our experience of Agile, its theory, its execution, and its results.

In short, nobody is talking about the report except for the Agile promoters who, wanting to argue against those of us who have problems with the manifesto, have started with the obvious. They have correctly pointed out problems with a report that none of us care about, but they have not responded to any of the criticisms raised in the comments. Only one of the creators has actually joined a conversation here, and only to say that he didn't bother reading most of the discussion but was sure that whatever we were complaining about wasn't Agile anyway. Not that they need to, but if you want to complain about the people still referencing the report, those are the people you should look at.

doublelayer Silver badge

Re: I love it !!!

In fairness to Agile (I'm not an adherent as you can see from my comment above), communication with the customer is one of the things it calls for. I think that would have been better if they made it explicit that the customer is the user, not the person paying, but still, they agree with you there. It is also one of the reasons I say that Agile only works in some cases. If you can frequently bring things to the user and get their reaction, then it works as an approach and is, I think, similar to what you're suggesting. If the users are giving you good feedback during development, you can head off usability problems and stop working on things that nobody wants.

The problem with this is that there are times where the work that is necessary is not something users can comment on throughout development and that sometimes, they won't even when they can. If this applies to a project, then something needs to be done to accommodate for that lack. You should communicate often except when you can't, but those two alternatives need to be handled quite differently.

doublelayer Silver badge

Even simpler, the answer to any critique of Agile is "if it didn't work, it wasn't Agile". Handily unprovable and tautological. It means that no criticism is considered valid; if they don't like the thing that you don't like, then it was never a part of Agile, even if it's written right there. If they do like the thing you don't like, then clearly you weren't doing it right, so that's why you are wrong.

There are times when an Agile-like approach is the best one. Knowing when to apply that is important. It is not all the time, and it would do its adherents well to at least understand why we say that rather than try to dismiss immediately every time something is questioned.

The graying open source community needs fresh blood

doublelayer Silver badge

Re: Realization

"Nothing, but arguably as someone who believes in open source you wouldn't do that anyway. Neither would anyone else."

Yes, as someone who believes in actual open source, I wouldn't. The people who mandate payment don't believe in it, which is why they violate every definition and tradition of open source, and they could easily go further. Why wouldn't they, when they could theoretically get more money by doing so? The companies that switched from open source licenses to faux-open ones didn't universally limit the "who has to pay" set to big cloud providers; some of them changed it to all commercial use whatsoever. They did this after taking the work of all the independent contributors for free, and they make a profit from it. The thing you're accusing the big companies of doing actually applies better to the people who switch the licenses.

"In this case corporations making billions in profit from open source projects and generating more work for the project but not giving them any money causes harm."

There's a great thing to do about this: don't do the extra work they generate. They want a feature added and have requested it but nobody else needs it? Hey guys, how about you find a programmer to write it or you pay us to do it. And if you do get a programmer to write it and it requires a lot of reviewing, we might not do that either unless you pay us, so choose between having your own version with your feature or donating some cash so it can be upstreamed and you have less maintenance work in your future. There, you have a method of getting resources from any person or company that is actually increasing the workload, but you're not doing that by abandoning the freedoms.

People use all sorts of licenses with random or counterproductive terms. Someone writes some software, but they have a bone to pick with the UK, so they state that UK-based individuals or companies are forbidden from using it. Yes, the UK is not the most common country subjected to this, but the point remains. There is a reason why we have made clear that that is not open source. Similar modifications aren't either. You can do anything you want, but I prefer to use, write, and contribute with money, code, or other support to projects where those freedoms remain, and I am well aware that if you start taking them away from someone, you will eventually take them away from someone I care about.

doublelayer Silver badge

Re: Realization

Yes, it is taking it to extremes to point out why open source and free software forbid it. There is a reason why the definitions do not allow for discrimination against fields of endeavor or permit mandatory payment no matter where the software came from. They forbid this to protect important freedoms.

If you agree that a mandatory payment from anyone full stop is not open source, then even when that payment is mandatory on a smaller set of users that doesn't include you, it is doing the same thing. It should also be clear that, if we let any author set their own terms for who has to pay, that group can grow to include you at the whim of the author. What stops me, as an author of open source software, from deciding that you should be paying as well? After all, you're not the proverbial resident of the third world earning $2 per day, so you could afford to toss some money my way, and I'm going to make sure you do. The truth is that you probably could give me some money, which is why there is a donation button, but if I wanted to be able to require you to pay me for my work, I would have sold this software as a product. There is a fundamental disconnect between an open source project, which anyone can develop and distribute, and something that a single person or organization can own and sell. Open source software has been dealing with this problem for decades, and it has been important to distinguish it from big companies who release some source but, if you so much as look at it without permission, will try to charge you license payments. That distinction is just as important, if not more so, when authors of formerly open source software switch to that model.

doublelayer Silver badge

This implies that you've had success explaining the benefits to older people. Have you really?

In my experience, explaining the benefits of open source tends to fail with people of all ages if they don't write code or do something very similar themselves. Some people grasp the idea of "you could theoretically fix it yourself", more people grasp the idea of "you don't have to pay for it", but I have had little success explaining why the freedom to modify and distribute at will is important to me. Unfortunately, given some conversations elsewhere in this thread, I think I'm failing to explain that to someone who presumably has technical knowledge already. I have not noticed this being any harder to explain to young people. I know many young programmers who understand and agree with the goals of open source and many old non-developers who think it's really cute how I'm into this open free thing I just made up, but it surely could never go anywhere because all the software running the internet is owned by big businesses, right.

doublelayer Silver badge

Re: Realization

Would it be open source if I wrote a license that says the following:

In order to use this at all, no matter where you got it, you must pay me all the money in your bank account.

In order to distribute modifications, you must charge everyone who receives it, whether from you directly or not, and send the money to me?

Hey, the source is still there for you to read. You can still modify it. Isn't that open source by your definition?

By the definition we have used, there are important freedoms that are lost. The freedom to modify and distribute without seeking permission from the original author being an important one. The problem of companies not donating to projects they use does not change the fact that, if you fix it by removing the freedoms, you have done a lot of harm to those who benefited from those freedoms and made yourself not unlike those companies, because you have taken the contributions of others in order to make a profit from the users without giving anything to them.

If you don't want users to have those freedoms, you have the choice not to give them those freedoms. Proprietary software is not evil. However, don't take them away and try to pretend that you have not. There is a reason that open source software is often preferable to proprietary, but proprietary software masquerading as open source is not.

doublelayer Silver badge

Re: Realization

Open source has come to mean something, and it is not that the author simply decides on a license cost and gets to impose it. That model is fine as well, but it is different. All I am asking here is for open source to continue to mean what it has meant before, which does not allow for mandatory payment from everyone who uses the software. People whose software doesn't meet that meaning should not claim that it does.

Otherwise, I must make the following decision. If open source can be redefined to allow anyone to impose restrictive license terms and mandatory payment, then Windows is the most successful open source operating system in existence. By the definitions we have used for decades, this is not true. By the one that lets "open source" mandate payments from users, it is.