* Posts by doublelayer

9408 publicly visible posts • joined 22 Feb 2018

That hardware will be more reliable if you stop stabbing it all day

doublelayer Silver badge

Re: The boxes are labelled on the outside

That works equally well no matter how you note the contents. Under the theory in the comment, when they were using paper, they only cared about the outside of the box, but once they had scanners, they suddenly cared about scanning each item inside. That theory doesn't make a lot of sense, and I think it is likely wrong.

Instead, I assume that the people cut open the box, put the cutter down, and wrote down what they saw inside. Then they got scanners that looked like they'd work just fine as handles and decided they could speed this up a bit. Requiring them to put the cutters down wouldn't have decreased their speed relative to the paper method, but making them more robust would have helped with speed at least somewhat. Of course, my theory is only another one and I can't prove it either.

doublelayer Silver badge

Re: Common-Sense Failure

And there are a number of possible solutions to that. For example, you could attach the scanner to the user so that they could drop it without it landing on the floor. Or you could give them places to put it down which are convenient for picking it up. In fact, they probably used the taping method only because this scanner was shaped conveniently for use as a handle. I've seen such devices in a variety of shapes, not all of which would work very well taped to a box cutter. There were a number of good options, and although making them more robust worked in this case, there are times where it might not be a feasible option and a different workaround is preferable.

doublelayer Silver badge

Sometimes, especially with internal software written specifically for one set of users, that is the best approach. You can get away with the opposite under either of two conditions:

1. The people specifying what they want are very knowledgeable and have already designed something perfect. You just have to make sure your code does exactly what they said. If you're working in an environment like this, I have one piece of advice for you: don't wake up.

2. You don't care whether it works. You get paid if you build what they asked for, so you build that. If it's broken, that's their problem.

If you're not in either of those, you will need to work with users to figure out what they need, and if you can, presenting them with a partial solution and getting them to fill in some blanks is more efficient than talking to everyone and trying to distill what you need from the conversations. It doesn't always work, and trying to be Agile when you can't do that is a recipe for disaster. However, sometimes the other options also don't work, and doing Agile right can be easier to force than getting perfect requirements.

If you're building something else, this may not apply as strongly. Mass-market software still needs some user testing, but of a different kind than internal software does. Since you'll be selling it to a larger number of users, it needs to be more generally useful rather than narrowly targeted to your testers, even if that means their tasks aren't as simple as they could be.

Lenovo China clones the ThinkPad X1 Carbon with an old, slow, local x86

doublelayer Silver badge

Re: Desktops and oranges

There isn't a lot of information about the chip, but a report suggests that it is less powerful than The Register claims but has a 15 W TDP. A few samples have been benchmarked, though only four, so it is possible these numbers aren't representative. If they are correct, the closest comparable 15 W chip, trying to match both multi-thread and single-thread figures, would be the laptop-class AMD PRO A8-8600B, although the AMD is still 20% higher on single-thread.

As with most processors, there are people, including many of us, who can work with rather little performance and could make this work. Others are going to find it less speedy than they're used to and won't be able to get around that.

doublelayer Silver badge

Re: inferior to Intel and AMD desktop chips released three to five years ago.

Of course it runs Linux. They've intentionally matched as much of the X64 interface as possible, including typical firmware. Compatibility is the point of Zhaoxin, which is partially why their efficiency is lower. Drivers are always a possible problem, but most of the components are probably the same as in the Intel and AMD-equipped X1s, and I'm guessing there are Linux drivers for anything else in there, because at least some of the people willing to buy a Zhaoxin-equipped laptop are going to run Kylin on it, and those drivers would work on other distros.

doublelayer Silver badge

If they were going to do that, it would probably be a little more hidden than that. A firmware component that connects to the internet is possible, but shouting out that it exists would defeat the purpose. Also, if they were going to do that, they wouldn't only do it for the Zhaoxin model unless they're trying to spy on Chinese government users, who are the only people who will buy it. You are welcome to try. In fact, I'd happily do the analysis if someone asked, although I'm not going to buy one. But I doubt you'll find it as interesting as you expect.

doublelayer Silver badge

Headscratcher?

It doesn't seem that confusing to me. $1500 per government employee is a lot of money and, for anyone told to buy local*, it's not like this has a lot of competition. Maybe the Huawei Qingyun line, but that isn't really designed to run the same OS that they're already running, and it doesn't matter whether they were running Windows or Kylin or Ubuntu. The Zhaoxin model would run any of those and the Qingyun would require some hacking. Meanwhile, Lenovo doesn't have to spend much money if they can use as much of the X1 production line as possible with just a different board. Producing one such model seems a reasonable commercial choice to me.

* Buy local, or at least just don't buy from a few easily identified massive corporations. If there are a bunch of Korean components in there, it's fine as long as nobody really talks about it.

WP Engine revs Automattic lawsuit with antitrust claim

doublelayer Silver badge

Re: Did you sign a contract?

Why do you see their actions as defending the community? Defending them from what? So far, the things they've done that affect the community, as opposed to just WPE and themselves, are the following:

1. Forked a WPE-developed plugin without clarifying why or whether they'll keep up with fixes and improvements.

2. Broke access to updates for any customer of WPE.

3. Cut off WPE funding to community events.

4. Demanded extra control over any community event or organization to prevent them from saying nice things about WPE.

5. Released information about current and former WPE users and sites to the public to brag, information which probably, but not certainly, won't cause those users problems.

What has WPE done to the community that makes them worse? I can think of nothing so far. Their biggest "crime" is that they haven't given Automattic, not even the WordPress Foundation but the for-profit company, lots of money. A lot of other WordPress hosts haven't done that either, but somehow it's WPE that is solely at fault.

That's why there is negativity. If this was yet another argument about money, we might decide who seems sympathetic based on opinions about private equity. Automattic have made it easier by breaking a lot of things for a lot of people, not just the company they dislike.

Qualcomm's Windows on Arm push would be great – if only it ran all your software

doublelayer Silver badge

Re: Microsoft remains its own worst enemy

I also have to wonder a few things about that observation. The author was unaware that Adobe software had native builds, so I have to question what they actually did. I'm not sure whether they ran AutoCAD through emulation at all, let alone enough to estimate its performance. I would also note that many of the Windows on ARM machines are laptops with mid-range CPUs. The Microsoft SQ3 and Snapdragon 8cx Gen 3, for instance, benchmark about the same as the Intel Core i7-1250U or the AMD Ryzen 3 5425U. I wonder how well either of those would do against an AutoCAD system requirements page which suggests a "3+ GHz processor (base), 4+ GHz (turbo)" and, for graphics, a basic "2 GB GPU with 29 GB/s Bandwidth and DirectX 11 compliant" or a recommended "8 GB GPU with 106 GB/s Bandwidth and DirectX 12 compliant". Maybe this kind of software wasn't really intended for the market segment where these laptops are being sold.

doublelayer Silver badge

Re: Microsoft remains its own worst enemy

You can buy a retail version easily. It isn't distinct from X86 versions, and the license key process is basically the same. What isn't as easy is making it actually install where you want it, and that's because ARM isn't standardized in the same way X86 is. So for example, you can buy a license key and run it in a virtual machine host on an ARM Mac, and you've been able to do that for quite a long time. If you want to install it directly on the Mac concerned, it's not going to work, but you are welcome to try without any guarantee that anything will stay the same.

doublelayer Silver badge

Re: Value proposition

I think the main value proposition for users is better battery life, which is something people comment on quite frequently with ARM Macs. Most of the other advantages of current ARM-powered laptops are things you could get elsewhere. For instance, although they can often come with 5G support or an NPU, if you need one or both of those, you could get them on an X64 machine or add them externally. Still, some users may want an integrated unit and choose this anyway, so access to that market could be a good reason to do some level of development.

For a lot of software, it can be as simple as changing a compiler target and building two binaries. There are always exceptions and things for which ARM won't work without a lot of effort, but a lot of basic software isn't using anything complex enough that a direct cross-compile wouldn't be good enough.
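
To illustrate, here's a minimal sketch of the "change a compiler target" idea, assuming a C source file (hello.c is a placeholder) and that both gcc and the aarch64-linux-gnu-gcc cross-compiler are installed; the same source and flags go in, and two binaries for two architectures come out:

    # Hypothetical build script: same C source, two target architectures.
    # Assumes gcc and aarch64-linux-gnu-gcc are installed; hello.c is a placeholder.
    import subprocess

    SOURCE = "hello.c"
    TARGETS = {
        "hello-x86_64": "gcc",                     # native x86-64 build
        "hello-aarch64": "aarch64-linux-gnu-gcc",  # ARM64 cross-build
    }

    for output, compiler in TARGETS.items():
        # Only the compiler changes; the source and flags stay the same.
        subprocess.run([compiler, "-O2", "-o", output, SOURCE], check=True)
        print("built", output, "with", compiler)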

doublelayer Silver badge

They've said quite clearly that they think it doesn't exist, because they say "Adobe Photoshop can technically run on Arm through emulation" even though it can also technically run on ARM using the native binary. I think this author is behind the times.

The problem with that is it makes it hard to know whether the author understands what they're talking about. I don't have a clue whether Photoshop's native ARM build runs properly, and since I neither have a Windows on ARM machine nor use Photoshop, I'm not going to find out. With an author who isn't aware of an option they should have included in their analysis, I also have to question whether their judgements on other aspects are correct. So far, I have not considered a Windows on ARM machine because I don't know how good the emulation is. I have software which doesn't and isn't going to have an ARM build, so that could be a limitation if the emulation isn't good enough. I'm also hoping that Linux support will get more thorough; even if a user runs Windows most of the time, I appreciate having options.

Australia tells tots: No TikTok till you're 16... or X, Instagram and Facebook

doublelayer Silver badge

Re: who's responsibility?

I think this demonstrates what I expected, but you did allege that I'm assuming something that doesn't represent you, so let's give it a try. We can limit ourselves to the talking on the phone bit, because it perfectly demonstrates what I was talking about. I see your post as assuming why people don't talk on the phone: "fear of talking on the phone rather than some form of texting (thus taking away their "shield from emotional vulnerability")". That would appear to me to be a rather clear generalization, even though there are lots of reasons why people avoid talking on the phone.

Here's one: a change in attitudes, not because of any "emotional vulnerability". One change in attitude has come along because the phone is no longer the only way of contacting someone. A while ago, if you wanted to talk with someone, the phone was the best way of doing it, but now an email or chat message will arrive just as quickly. Some people have grown less eager to use the phone because it means they have to have the conversation when the call happens rather than an asynchronous one at a time of their choosing. And that's just the recipients of calls. I have certainly noticed this. There are people I want to talk to who just don't answer the phone and don't listen to or respond to voicemails. If you need their attention, you have to send them an email and schedule a call if needed. That applies to many older people as well, and there's at least some logic in it, because it is less disruptive to whatever else they were doing that day. That, in turn, means I'm less likely to call people, because there's at least some chance I'm wasting my time on someone who won't answer or respond. So although I quite like calling people to communicate with them, I still generally start with an email.

Those people are not avoiding calls for anything related to "emotional vulnerability". Incidentally, what is your theory for why there is more emotional vulnerability on the phone? It's pretty easy to insult someone or to be insulted over text chat. There's more direct vulnerability on a video call, but those are rather popular.

However, just because that or other non-emotional reasons exist doesn't mean everyone is as I describe. There are indeed some people who dislike talking on the phone for some different reason. However, by making a statement like you have, you've ignored many alternatives, applied your own assumption for why people do things differently to how others did them decades ago, and then used your assumptions as evidence of a different problem with a tenuous connection. Even if we were limiting this to those who feel anxiety about making a phone call, we'd still have to figure out why before we could blame it on any particular change in activity, but we didn't get that far because I saw you lumping all sorts of things in.

doublelayer Silver badge

Re: who's responsibility?

Some of that is true of a significant subset, and a lot of that is just the typical intergenerational stereotypes that always crop up. For example, teachers annoyed that their adolescent students aren't paying attention and don't put the dedication into their homework that they should... nobody's ever said that before. It simply didn't happen before 2010.

Another one I can explain is the office environment. Yes, people wear headphones. They wear those because there are no walls and they want to reduce noise pollution so they can work with less distraction. Some of them might be playing music which they think helps them work. That's not antisocial, it's attempting to improve productivity. People of all ages can get annoyed at interruptions. Unless they called you there, there's a chance that they were working on something and don't particularly want to talk about your thing, hence the negative reaction, and one I think you're likely overestimating because most people have learned not to show their annoyance at interruptions that they can't prevent.

Other ones are less clear. I could try to explain the changes in etiquette that have reduced the acceptance of just calling someone when you want to speak with them, though there are some people who dislike calls even outside of that. However, I'm not sure you would care about those different subsets, as you've already decided what the explanation for all these things must be.

doublelayer Silver badge

Re: You'd think...

They generally can. Android works similarly to what you've described, while IOS uses a slightly different system where there is only one user account with one unlock pin, but you can put a different pin on features you want restricted. The child would only be given the former pin and could only use the allowed functions. Not perfect, but it can do a lot of what people are asking for. I know of no people who actually use that feature, but it is there.

doublelayer Silver badge

No, I'm sure they are well aware that people will lie. That will eventually require them to pass another law that makes it difficult to lie, which will probably involve identity verification. I'm guessing they are doing it in two stages because they think identity verification for all users is difficult to get support for, but a measure to enforce an existing law may be easier.

Why they want it is a different issue. Maybe they only want it because they really think this is important enough and that the loss of privacy for other social media users is a cost worth paying. However, I expect that, even if this is their thought process, they will extend identity verification to many other sites. I don't use the social media sites they mention, but I oppose this because I think the effects on privacy will begin to affect me and others on sites where there is no valid excuse for it.

Apple hit with £3 billion claim of ripping off 40 million UK iCloud users

doublelayer Silver badge

Re: Apple? Locking customers in?

An IOS app can request access to photos and then back up those photos. There are probably quite a few that do. However, they are not really able to back up anything else. Android is not a lot better. While there is a storage location for general files which could conceivably contain app data you care about and be easy to back up, Android has been designed to lock up most app data in unreadable directories. Of course, system apps can read those at will, so Google's backup system can fully back up your app data, but using another app to do it won't work. You used to be able to back up some of that over ADB, but that has been deprecated. So now, if you have root, you can back up all of that, and if you don't, you can only back up what your app developers have bothered to make readable to you. It's just your phone and your data, why should you have access to it?
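
As a minimal sketch of what an unrooted backup looks like in practice, assuming adb is on the PATH and USB debugging is enabled on the phone, all you can really copy is the shared storage area; the app-private directories under /data/data stay out of reach:

    # Unrooted Android backup sketch: only shared storage is readable.
    # Assumes adb is installed and USB debugging is enabled on the phone.
    import subprocess

    # /sdcard is shared storage; app data under /data/data is unreadable
    # over adb without root, so it simply can't be copied this way.
    subprocess.run(["adb", "pull", "/sdcard", "./phone-backup"], check=True)

    # The deprecated full-backup command still exists on older devices,
    # but apps can opt out and newer Android versions ignore most of it.
    subprocess.run(["adb", "backup", "-f", "backup.ab", "-all"], check=True)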

Academic papers yanked after authors found to have used unlicensed software

doublelayer Silver badge

Re: The connection is copyright

The point was not that they were compensating peer review in a deserved way or that they weren't making massive profits off others' work. The point was that, in comparison to one that doesn't do those things at all, they can still be more respected. Reputation can be an important thing to researchers and those looking for other research to build from. Although a lot of journals have a long history of allowing bad papers in and taking too long to remove them, the reputation of those who filter a lot of them out at source and do retract them when they're found faulty is higher than that of ones that don't filter much at all and retract only after something egregious.

EU irate about geo-locked Apple IDs

doublelayer Silver badge

Or it was literal: pasting the fifteen-step process would be kind of ridiculous because anyone who cares would click the link. And yes, it's fifteen. The six points are just what you do before switching your country, and the process on an IOS device involves nine more steps, although all of those are pretty simple. So yes, I'd rather not have fifteen steps copied and pasted in here when it's irrelevant to everything else.

doublelayer Silver badge

I'm not sure why you chose that comparison. The complexity of a process is not the number of steps but the sum of the complexities of those steps. A two-step process of 1) deactivate the safety systems on the nuclear reactor, manually preventing anything from going wrong, and 2) rearrange the parts so it works on a different type of fuel is a lot more complex than a forty-step set of cooking instructions where a single step might read "remove bag of flour from cupboard" and the next one "place it onto a work surface". We're not going to get anywhere by counting list items.

Nor are we going to get anywhere by debating the complexity of pushing the "change country" button. The EU's complaint is not about the user interface and how hard or easy it is to find that option. It is about other problems, for example what happens to the stuff you had in your previous country, which seems to divide into two big categories: 1) you have to buy it again and 2) you can't even buy it again. That, of course, is merely one list item. You've repeatedly suggested that you disagree with Apple's choices, so the entire debate in this thread seems a little weird. Having discussed things with you before, I know you tend to defend Apple on most things, but since you disagree with them on this one, what is the objection you are raising?

Apple drops soldered storage for 2024 Mac Mini

doublelayer Silver badge

In my experience, storage is the most commonly damaged component in a desktop (only a laptop battery fails more often, which doesn't apply here), and it is the spec users most often want to increase before there is a problem with the rest of the computer. I might agree a little more if we were talking about RAM upgrades, because a lot of people don't know when they're doing something RAM-intensive and therefore don't bother to upgrade. Running out of storage space and not wanting to delete the files there is something that a lot of users can understand, and some of them want to do something about it.

The NPU: Neural processing unit or needless pricey upsell?

doublelayer Silver badge

Re: What does "upsell" (noun) mean?

If you know the verb upsell, then the noun is the same. To upsell something, you try to add on extra things that cost more money or to replace the cheap thing with an expensive alternative after the customer has decided what they actually want. An upsell is the thing you add on to do that or, alternatively, the act of upselling itself. In this case, it's not really an upsell because it's just put there and you don't really get a choice about it; if you want a certain type of CPU and it only comes with an NPU, then you'll have to buy an NPU to get the rest of it. To be a more traditional upsell, you would have to persuade the customer to add it on or to choose a more expensive board by marketing the included NPU.

I've heard people in the UK use both words. I don't know if it was an American term when it started, and it could easily have come from any country to begin with, but it's pretty global now.

doublelayer Silver badge

Re: Strikes me...

Because then we'd have an article asking whether the FPGA is a field-programmable gate array or a futile profit-gaining addition. An NPU can be used to efficiently compute some classes of software, and if you don't run those or you do but you can't or don't optimize to use it, it's useless. An FPGA on a general-purpose computer is quite similar, just with a different set of things it's useful for. Since most things where you would use an FPGA are also done in software with an efficiency cost, you have to ask how many things an average computer does where access to an FPGA would provide an efficiency boost to justify the cost of the silicon and how likely each is to be written in a way that can use the FPGA.

doublelayer Silver badge

If the AI is running:

Power draw: Better than if it's on a CPU and/or GPU.

Heat: See above.

Memory usage: Basically unchanged. Large models use lots of RAM and it doesn't much matter where they're processed.

Performance: By running as much of the AI on the NPU as possible, load on the CPU is reduced.

Reliability: I don't have any reason to expect the NPU to have any reliability differences compared to the other parts of the chip. If your CPU cores don't start failing randomly, the NPU probably won't either.

If the AI is not running:

Power draw: Mostly negligible. There will be some current going to the NPU, but like most other components, it can be put into a deep sleep mode when nothing is using it.

Heat: See power.

Others: Since nothing is using the NPU, your other resources won't be affected much.

Cost: You've paid for silicon you're not using.

doublelayer Silver badge

Re: Poorly written set of poll options

I agree. My view would go along the lines of "You probably won't miss an NPU if you don't have one, but there are some possible benefits, and if enough people have one then some software will use it and it will become standard like all the other things that every desktop has now but was an optional extra before". It's not the most concise option, but none of the provided options comes close.

doublelayer Silver badge

Re: You forgot the evil empire...

To the extent that Apple is stealing your data, they're not using the NPU to do it. They use the NPU for a few functions that run on the device, including speech recognition and a bunch of things the camera can do, both while taking photos and sorting them. Any data stealing is done on the classic CPU. You don't need special hardware to manage it. The degree to which your data is being stolen and by whom is left as an exercise for the reader or the reader's unjustified assumption if they have one.

Tech support world record? 8.5 seconds from seeing to fixing

doublelayer Silver badge

Re: A bit more recent

It is not in a private range, and the 192.178.0.0/15 block is registered to Google. So if they really were using that, they shouldn't be.
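
Anyone who wants to check this sort of thing can do it with Python's standard ipaddress module, which knows the RFC 1918 private ranges; the addresses here are just examples:

    # Check whether addresses fall inside the RFC 1918 private ranges.
    import ipaddress

    for addr in ["192.168.1.1", "192.178.0.1"]:  # example addresses
        ip = ipaddress.ip_address(addr)
        print(addr, "is private" if ip.is_private else "is public")

    # 192.168.1.1 is private; 192.178.0.1 is public, sitting in the
    # 192.178.0.0/15 block registered to Google.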

doublelayer Silver badge

Re: WiFi hardware switches...

And now that's the more common thing on laptops. At some point, they all decided not to have separate volume keys, but since those were commonly used, they made the additional functions, rather than the standard ones, the default on the function keys. The main difference between models is how difficult it is to get back to normal. On some machines, it's just a key command (FN+ESC or FN+Shift are common). On others, it's a BIOS setting. On others still, there is some hidden application which sends a command to the keyboard firmware. Sometimes, there's nothing at all.

doublelayer Silver badge

I'll grant that for the UPS bit, but "IT equipment only" is not so jargony, and even if you think it is, it makes it pretty clear that you should assume that your heater is not IT equipment unless an IT person says otherwise.

doublelayer Silver badge

Re: 8.5 seconds...

That's how the switch is designed. It doesn't disable the WiFi interface, because disappearing interfaces could cause problems, now mostly fixed, with a variety of operating systems. Yes, that includes Linux. Notifying the software was considered unimportant because the operating system couldn't turn it on; the user had to do that by physically toggling the switch again. Windows didn't have access to anything that would indicate that it had been switched off.

For the same reason, old laptops with a physical brightness control for the screen didn't report that number to the operating system, nor could the operating system override it.

Intel: Our finances are in the toilet, we're laying off 15K, but the free coffee is back!

doublelayer Silver badge

Re: Is it decent?

That's probably true. I've seen from experience that taking breaks can help find good answers to lots of questions. However, from a company's perspective, there are three categories of time savings that, even if they understand this effect, still exist:

1. A lot of people will get some caffeine close to the start of the day. Most of them are not yet trying to find a solution to something; it's just their starting routine. Any time saved by not having them leave for somewhere else results in an earlier start to work.

2. Some people don't frequently have to find solutions like this, and their time working may correlate more closely with productivity.

3. It means you don't have to locate your office near a good source of whatever caffeinated drinks the employees prefer, which is not generally a problem in a dense urban center but can be harder if there isn't a decent one within convenient walking distance.

Companies who are thinking about this can easily put in something else that allows someone to take a short break while thinking of solutions, and if I (as an employee again) want to do so, I don't have to lump it into getting coffee. I can just take a walk and, if I want a coffee, get one on my way back with the answer.

Watchdog finds AI tools can be used unlawfully to filter candidates by race, gender

doublelayer Silver badge

"What are these "real benefits" which AI can bring to recruitment and hiring?"

Speed. You get way more resumes than you want to read, so you just throw them all into the AI and read the three it puts highest on the viability score list. Throwing all the resumes down a staircase, reading the first three you pick up, and recycling the rest might only be slightly worse for finding a good candidate, but doing that makes it really obvious that you're too lazy to do the job right. Having a big AI company with a complex statistical report of usefulness which they refer to but you can't read makes it feel like you're getting useful help with your task.

doublelayer Silver badge

Re: Think different

I think that was intended as an age-based question, though I don't know what kind of age discrimination Elsergiovolador was referring to and/or advocating for. Yes, there is more than one kind.

Judge tosses publishers' copyright suit against OpenAI

doublelayer Silver badge

Re: What is the point...

Doing this again, are we? It's been explained lots and lots and lots of times, so I'll do the short version.

"When a work is used to train an AI model, is a copy created?": Yes, at least two. One in the training set, where the company can retrieve it for any future trainings or anything else they may choose to do, stored on their hard drives in the same form they got it in the first place. Another in the model, assuming the model has bothered to retain it, in a chopped up form where exact extraction is more difficult. And if it did get retained, chances are that there is a third copy, the copy that is emitted by the model, either verbatim or with inaccuracies.

"Copyright holders can't make it illegal for you to read and digest a book, or even to memorise it down to the last comma."

They can't make it illegal for me to buy a copy and read it, but it is illegal if I don't have a legal copy. If I buy it, no problem. If I get it from a library, no problem. If I download an unauthorized copy someone pirated, I am not allowed to read it. True, they're not going to stop me because they don't really care, but it isn't legal for me to have it. This might be a non-issue if the creators of the models had bought copies. Then this would just be a discussion about what you're allowed to do with works you've legally accessed, but they skipped that part and got illegal copies.

Robots crush career opportunities for low-skilled workers

doublelayer Silver badge

Re: Stop Worrying and Love the Bomb

I can easily imagine a post-scarcity world. I want something, so I merely step to my omnimaker and press one of the preset buttons. Minutes later, what I wanted has come into existence.

What's much harder for me to imagine is how we're going to get there. I mean I do have this omnimaker in my closet; it's been a hobby of mine for a while, but I don't think anyone else has one yet and when I set up the preset button that causes my omnimaker to make more omnimakers, time travelers came by and told me I must never push that button again. In the real world, there is a finite limit of everything. That limit has increased dramatically over recent history, but nothing is unlimited. Acting like it is tends to break things quickly.

Another thing that's hard to imagine is how we would prevent things from breaking if we got there. We already have costs from being able to consume a lot more than we once did. Because it's cheap to buy things, we have e-waste problems, regular waste problems, toxic waste problems, pollution problems, climate problems, health problems, and several more problems. A few of those can be blamed, convincingly or just because it's convenient, on some of the "egregiously wealthy and powerful" you are unhappy with. Others are much more down to us. For example, some e-waste is generated because people can't repair it, and I'm willing to accept that it was probably a wealthy person who made the decision to make it hard to repair. However, quite a bit more of the e-waste is because it wasn't hard to repair but the user couldn't be bothered or even the thing wasn't broken but the user decided to get another one anyway. That's on the user, and sometimes, that user is me. Even if we could get to a post-scarcity world (if you have an idea of how, I'm interested), we would still have lots of problems to solve.

doublelayer Silver badge

Re: No shit, Sherlock?

"There's a lot of interesting history here, especially around the Luddites - a group that is very widely misunderstood, who as I understand it were protesting against exactly this. It wasn't about the introduction of new, more efficient technology, as such, but more about that introduction being badly thought out, giving no opportunity for daily-paid workers to go along with the changes and stay in employment."

Meanwhile, I see the Luddites being widely misunderstood in a different direction. They weren't arguing in generalities, for a new policy towards the working class. They were arguing very personally and against much of the working class. New machines would increase employment, allowing lots of people much poorer than they were to get a job where they would be paid better (in the 1800s, so not well, but it was still an increase for those new workers). The only problem was that the people who had learned a specialized skill that was no longer necessary were going to lose their comparatively well-paid position. The Luddites were, in fact, relatively rich laborers who really hated that some stupid poor people were going to make the money instead of them.

Unfortunately, this is a tendency that still happens today. I can understand someone who dislikes automation altogether, even though I disagree with them. We can discuss our differences and attempt to convince one another of its benefits and drawbacks. Someone who has an idea for how we can have automation but protect those whose labor is no longer needed is very welcome, though many of the ideas I've seen aren't very thorough and are difficult to implement. However, there are many who cheerfully accept automation whenever it lets them have something cheaper, but when technology improves the thing they do, they begin to protest that the technology should be banned so nothing related to their job has to change. This attitude is unhelpful to laborers because only small subsets get automated at once, and it doesn't produce any general policy because the people espousing it don't care to help others going through the same situation. The Luddites didn't do what we need in an era of automation and they're the wrong people to emulate now.

Microsoft rolls out AI-enabled Notepad to Windows Insiders

doublelayer Silver badge

In Notepad? Really?

So they've decided that this is a feature people want. I don't, but surely someone will, so why not throw it in. As demonstrated, my opinions are not important. However, why do they think the people who are likely to use this are using Notepad? It's not a very popular application. I know very few people who use it often. The nontechnical mostly use actual word processors that had such features as spell check*. Technical people often use a text editor that displays line numbers, uses the chosen characters for indentation, doesn't automatically stick .txt on the end of file names which you have to remove later, or can handle multiple kinds of line endings*. They and I sometimes use Notepad if it's someone else's computer and a small edit is needed, but that's still not very often. So who is going to use Notepad enough that this feature is going to sell many AI credits?

* Notepad actually can do the things with asterisks, but as of about three years ago. Most users decided what to run more than three years ago when it couldn't do them.

Another official four-day week pilot kicks off in the UK

doublelayer Silver badge

I think pretty much everyone would like a four-day week. The problem is that the studies of four-day weeks do things which won't happen in practice if it became the norm. Nearly every article I've read, including this one, mentions that one of the main things they do when running the study is cancel some meetings. Unsurprisingly, this improves productivity. However, they could probably have increased productivity even more by canceling pointless meetings but still working five days a week, which leads us to conclude that they could have had good productivity by not starting the pointless meetings in the first place. Yet, they did, and I don't think they would stop just because the weeks are shorter. They can manage it for the study, but if they kept the four-day week going, someone would start to recreate the pointless meetings and productivity would drop again.

There are probably a lot of businesses that could achieve what they want with a four-day week, but I don't expect that many will when the idea of a fifth day, which you can either use for additional work or for not having to cancel meetings, is available and widely accepted. There are a lot of businesses who care more about having some meetings than about the clear cost they have, as most or all of us will know from experience of an hour or two spent doing nothing because someone wants you to be there. The ones who care more about performance and avoided these would probably see no or low benefits from trying a four-day week, even though the studies of less dedicated companies show improvements.

Python dethrones JavaScript as the most-used language on GitHub

doublelayer Silver badge

Re: AI

Which is one reason I don't believe it. If that happened, I'd be quite happy. I don't like writing that kind of stuff and everyone needs it. People who have made their living on writing that kind of stuff, rather than my approach of running as far away as I could (not very far), may be less pleased if that comes to pass. However, I don't think the AI is going to successfully solve the most difficult parts of the process, such as actually understanding what the program needs to do, where the data is and exactly what form it takes, and where the final software is going to run. A lot of software like this is relatively simple to make, but companies end up buying third-party software instead because they aren't good at the maintenance it takes to keep a self-built version correct and operational. At best, AI (a fictional AI that actually works reliably) might make the process of writing the software faster, but it's not going to take vague requirements and make correct technical requirements out of them, process jumbled data into something that fits the selected format, or keep the systems running. You will need at least four different AIs for these tasks, and the LLM approach won't work for any of them.

doublelayer Silver badge

Re: Apples and oranges

That's why Python has managed to be popular with a GIL for decades. Quite often, the presence of a GIL isn't too bad and computers are fast enough that you can survive the performance hit. However, the reason for wanting to dispense with it is when you want to do a lot of stuff and you'd like to use all the hardware you have to do so, and most languages would let you do it. I've frequently sped up software merely by having multiple cores working on something. If you don't have programs that run for a long time, or if they run for that time but aren't using all the CPU they have access to, then that might not be something you need. If you frequently write or run programs that are using 100% of one core, then you might want to see how much faster they can go if they can use all the cores.
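
As a minimal sketch of the usual workaround, CPU-bound work can be spread across processes, each with its own interpreter and its own GIL, using the standard library; the busy function is just a stand-in for real work:

    # Sidestep the GIL for CPU-bound work with processes instead of threads.
    from concurrent.futures import ProcessPoolExecutor

    def busy(n):
        # Stand-in for real CPU-bound work.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        # Threads would serialize on the GIL here; processes use all cores.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(busy, [10_000_000] * 8))
        print(results)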

doublelayer Silver badge

Re: Bollox

If compilers changed from run to run between the options of "gets the answer mostly correct, but there is a bug you will find two months later" and "segfaults immediately", then someone would be justified in disliking compilers until that changed. Actual compilers were both deterministic and auditable, meaning that although they had and still have bugs, those bugs can be specifically identified, fixed, proven fixed, and tracked. If those compilers changed what they produced on things that weren't supposed to be syntactically significant, like comments, then that would also be a bug.

There is a difference between a tool which works the same way and can be improved and a tool that is intentionally random. At some point, we will have tools that automate even more of the process of writing software, but categorizing LLMs in the same category as compilers is too broad. Maybe we will eventually have a piece of software which works similarly to an LLM, where you can provide a natural language summary and a program pops out, and if it's wrong you describe how and a new program with the bug fixed pops out. LLMs as we have them don't and won't do that.
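
A crude way to demonstrate the determinism half of that: hash the output of two identical compiler runs and compare. A sketch, assuming gcc is installed and test.c is any valid C file (debug flags that embed paths or timestamps can break byte-identical output, so they're left out):

    # Check compiler determinism: identical input should give identical bytes.
    # Assumes gcc is installed and test.c is any valid C source file.
    import hashlib
    import subprocess

    def build_hash(output):
        subprocess.run(["gcc", "-O2", "-c", "test.c", "-o", output], check=True)
        with open(output, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Two runs, same input, same flags: the hashes should match every time.
    # Ask an LLM the same question twice and there is no such guarantee.
    print(build_hash("a.o") == build_hash("b.o"))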

Officials warn of Russia's tech-for-troops deal with North Korea amid Ukraine conflict

doublelayer Silver badge

Re: "DPRK soldiers should ... surrender to Ukrainian forces"

You are right if it became known. North Korea wouldn't say so, but they could easily arrest some people and not say why; they already do that every couple minutes. However, if I was a North Korean wanting to defect, I might try it now and see if I can leave scraps of my uniform near some other destruction, and if there isn't enough destruction, make some extra for scenery. My family is least likely to suffer consequences if my Russian or North Korean commanders think I was just struck down in the pursuit of my gallant duty to kill people for no reason. The Ukrainians could help arrange some of this.

Why we're still waiting for Canonical's immutable Ubuntu Core Desktop

doublelayer Silver badge

Re: I’m not a fan of ‘snap’

If you don't know that Snap is causing the error, it seems a little premature to be blaming it for your ills. You might investigate whether Snap is really behind each of the problems you have. For instance, I don't use Foxit, but I don't see why Snap would make it open things in new instances, so you might want to check what the method you're using to open the file is doing and whether you could change it.

Mozilla Foundation crumbles as third of staff cast off

doublelayer Silver badge

Re: Wait ...

Mozilla the foundation had 120 employees. Mozilla the corporation, owned and paid for by the foundation, had an additional 750 or so. Do the calculations again and the numbers start to look a little more normal, especially as they're not using all of that revenue to pay workers; some gets saved so they can continue to operate if that revenue drops again.

Why the long name? Okta discloses auth bypass bug affecting 52-character usernames

doublelayer Silver badge

Re: I wonder...

Neither of which is very important because:

1. The email address is easily guessed: for most users, the username they pick is not going to be any harder to guess, and it's likely to be the same one they use on any other site that demands one. Anyone who wants a unique username can stick something on the end, fred+randomstring@example.com, and then it's unique again and still sends emails to the right place (see the sketch after this list).

2. Almost every service is going to have an email anyway; they're just going to have a separate username as well. You still need to update that email any time you decide to stop having the previous one, because it's going to be used for notifications, auth checks, and various other important things. That is a good reason to switch all accounts whenever you know you're losing an email, and not to rely on an email you could easily lose by changing ISP.
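
For the trick in point 1, a few lines of Python show how cheap it is to mint a fresh plus-address per site; fred@example.com is a placeholder, and most large mail providers deliver the result to the base mailbox:

    # Mint a unique plus-addressed variant of one real mailbox.
    import secrets

    def plus_address(mailbox="fred@example.com"):  # placeholder mailbox
        local, domain = mailbox.split("@")
        # Most providers deliver local+anything@domain to local@domain.
        return local + "+" + secrets.token_hex(4) + "@" + domain

    print(plus_address())  # e.g. fred+9f2c11ab@example.com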

That position you just applied for might be a 'ghost job' that'll never be filled

doublelayer Silver badge

Re: While I agree this sucks and legislation would be helpful

In spirit, they're very similar because both involve wasting someone else's time. In practice, there is at least some difference: if they don't get what they asked for, they might actually take the job they applied for, so although there's no certainty, it remains a possibility. In scale, it's very different, because they are only wasting the time of one possible employer rather than large groups of applicants. You can decide how you feel about the action, but as with ghost jobs, you have no realistic chance of doing anything about it.

doublelayer Silver badge

Re: Solution 1

I'm not asking for something by post. Your acknowledgement is all I'm asking for, and if you didn't even get as far as a phone call, I'd be satisfied with an email, which is effectively free to send and can be a pre-written statement, so it doesn't take any time to write. What I object to is a company that rejects a candidate but can't be bothered to send any indication of that. It might have been excusable in the days of paper applications and notifications, but if they get all the details digitally, can notify me digitally, and can probably do it by pressing one button, there is no excuse anymore. The applicant eventually has to give up. There are a few problems with that:

1. It is common decency to send an email and it's quite rude not to.

2. If a candidate hasn't been through many of these, they may be concerned that they're missing communications from the company. They may waste their time and that of someone at the company by contacting them and asking to confirm.

3. If a candidate has been through many of these, they will not contact the company for clarification; they'll just assume they didn't get through. If there was actually a problem with communication, the candidate and company may both lose out.

4. It makes people hate the company for doing that when it would be really easy not to.

doublelayer Silver badge

Re: Solution 1

I can't agree with the last part. I am sure you sometimes get way too many responses, but it shouldn't be hard to send applicants a message when you've tossed their resume into the bin. If the process is relatively modern, you already have their email address on file, so all you have to do is write one common message informing them that they are not in consideration, giving no details, and send it out. Doing that doesn't leave the applicant waiting and hoping, and it is less likely to make them hate you.
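
That one common message really is close to free to send. A minimal sketch with Python's standard smtplib, where the server, credentials, and addresses are all placeholders:

    # Send one pre-written rejection message to a list of applicants.
    # The server, credentials, and addresses are placeholders.
    import smtplib
    from email.message import EmailMessage

    TEMPLATE = ("Thank you for applying. We will not be taking your "
                "application further. We wish you well in your search.")

    def send_rejections(addresses, server="smtp.example.com"):
        with smtplib.SMTP(server, 587) as smtp:
            smtp.starttls()
            smtp.login("hr@example.com", "app-password")  # placeholder
            for addr in addresses:
                msg = EmailMessage()
                msg["From"] = "hr@example.com"
                msg["To"] = addr
                msg["Subject"] = "Your application"
                msg.set_content(TEMPLATE)
                smtp.send_message(msg)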

doublelayer Silver badge

Re: USA

Like the fraud, that's theoretically true, but it is vague enough that it probably won't work. They're not making any real statements, so it is hard to prove that they're being intentionally dishonest. It's like a con artist driving an expensive car, or one designed to look more expensive than it is, so you think they're successful when they're not. As long as that's all they're doing, they haven't done enough for it to constitute fraud. Now if they take their fake job postings and use them to make another kind of statement, for example claiming that they're building a team when they haven't actually hired anyone and don't plan to, then that might be sufficient. If it's just scenery, they're likely to get away with it.

Financial institutions told to get their house in order before the next CrowdStrike strikes

doublelayer Silver badge

Re: What server?

If you think a person who doesn't document, manage, and know the state of their server is getting things right, I'm worried about whatever servers you manage. Yes, it means the software running on that server was correct, but that's no miracle. That's the case for lots of software, but not crashing is the bare minimum, not the entire goal.

I've seen many disasters with that going on. Yes, the computer concerned has three different strains of malware on it. Yes, the customer data is left unencrypted and could have been copied by any or all of those strains. No, there are no logs of what was done. No, there are no backups if the hard drive in that box fails. No, the software has no version control for the data it's working with. No, we have no idea who to talk to if we need anything different than what we have needed so far. But the Python 2.4 script that processed the input is still getting input and producing output in the right format (Excel 2003 files), so it's perfect. Why do you want to do anything to that box?