* Posts by doublelayer

9378 publicly visible posts • joined 22 Feb 2018

China bans compulsory facial recognition and its use in private spaces like hotel rooms

doublelayer Silver badge

Re: In other news, today's Times >

We're not decrying the Chinese government for bringing in this law. I, and at least some others here, are reacting to the fact that they violate the law they passed themselves all the time and will continue to do so. I, unlike some of those posters, also don't think they have any intention of enforcing this, even on private companies that don't work for their government.

Now take a guess whether I'm pleased when countries I support more often do the same thing. Do you think I'd praise the action? In case you're unaware, no, I don't and I want them to stop. I will do what little I can to get them to stop. Most of the time, they do not bother listening to me.

Asahi Linux loses another prominent dev as GPU guru calls it quits

doublelayer Silver badge

Re: Don't buy closed hardware in the first place

You have several justified criticisms of Framework. You then harm your point by faulting them for not doing something that no manufacturer could do, e.g. allowing you to keep the same RAM when switching to a CPU that uses a different type. Now you are quibbling with definitions of standard, when the original issue is that you faulted them for doing things with a battery which they have not done and are deliberately avoiding. You stated the situation incorrectly, I don't know why, but as I think you entirely understand how their batteries work, I'm not sure elucidating further is helping either of us.

doublelayer Silver badge

Re: Don't buy closed hardware in the first place

"Yes, I could start out on a lower end CPU in a given family and then upgrade to a better one in the future, in the same family...e.g. start out on something i3 class (because for some reason my budget might be tight) then move to an i7 in 6 months to a years time...rather than being stuck on the i3 until I can afford a whole new laptop...that's the difference socketing makes."

And that is exactly what Framework provides. From the context of the previous comments, the discussion was whether you have to replace the RAM if you do that. For your use case, no, you don't, because the same generation uses the same type. If, however, you replace an old i5 using DDR4 with this year's Ryzen, which uses DDR5, then you do have to change the RAM, but you would have to with a socketed CPU as well.

"They aren't all standard. Whilst the variance isn't massive, it is there...and connectors are even worse. There are laptops that use standard size batteries, but they don't use standardised connectors. Connectors are where laptops tend to differ the most."

And Framework's laptop uses one battery with one connector, hence standard. They've made a few generations of boards, including Intel and AMD boards, and they use the same battery and the same battery connector. How did we get derailed from this? This thread is entirely about what Framework does to be upgradeable or not. Using the same battery connector is one of the advantages I see in it because it means I don't have to search to identify what connector I have and hope I can find a battery using it eight years from now; if Framework's still around, then I can get one from them, and if they're not, the connector design is at least available to others.

doublelayer Silver badge

Re: Don't buy closed hardware in the first place

Your critiques are falling into either obvious or obviously impossible categories. For example:

"Some of those items aren't a given either...a new board might require different RAM...": Yes, it might, based on what RAM type the CPU uses. Would socketed CPUs do anything different? No, they would not. Thus, that does nothing to argue that the mainboard approach is any less upgradable than the unavailable alternative with which you're comparing it.

"a battery with a different voltage": What part of standard design makes you think that's an option? The board is responsible for converting the voltage from the battery to whatever the CPU needs, which is likely the same voltage the other CPU needed, but just in case it isn't, that's what the other components are for. The battery will stay the same.

Also, the main reason I bought one isn't to upgrade the board inside it. The main reason I chose it is that I am often content to run the same old processor for a while, but some other parts are more likely to fail while I'm doing it. I've had the experience of looking for parts for an old machine and finding that they're not easily located or the replacement process involves dismantling everything. I wouldn't be surprised if, when the one I'm using eventually gets discarded, the mainboard is among the original parts whereas other consumables aren't.

I agree with you that these aren't the most cost-efficient models out there. I could have gotten the same performance for less money. I could have even gotten more of some specs for less money, although most of that "more" consists of things like touchscreens which I don't care about. I've dealt with computers, mostly others' but sometimes mine, that lost things like charging ports or keyboards that were too tightly integrated, and the computer this one replaced was replaced because things like that had happened to it, not because the CPU was too old for me. Since I expect to be replacing some parts and keeping this one for a while, I am gambling that the parts availability will allow this one to last longer than the cheaper alternative. We'll see if I am right. I am not a Framework apologist; they've done some things I'm not a fan of, but my experiences with the competition haven't been very good.

doublelayer Silver badge

Re: Don't buy closed hardware in the first place

Most of that is correct, but some of it is exaggerated. For example, when upgrading motherboards, the chassis is not all you keep, unless by chassis you mean literally everything except the processor and the board it's attached to. The battery, the screen, the keyboard, the storage, the wireless card, all the other little bits in there stay there. The processor isn't on a socket, but it's not like the board it's on has a lot of other replaceable components that you'd normally find. You do have to buy new USB-C ports and M.2 connector with the new CPU, but those things are not very expensive compared to the parts that stay. That is what other laptops cannot do. One of the main reasons that is an advantage though is, by having standard form factors, it becomes possible for others to make a board. Some company could produce a board that, for example, could take a Raspberry Pi compute module. Expose some USB-C ports and connect to documented and open sockets* and their board can be dropped into laptops with all the rest of the components built in. That is what isn't easy to do with any other laptop because you'd have to reverse-engineer the internal connections and because that model might be designed completely differently next year and all the effort is wasted.

The ports are dongles, true, and they're not anything special. The important part is that they're dongles designed into the computer, so they fit as if they were integral. I chose only USB connectors, meaning that I have what looks like a normal computer and I will use dongles when I need other ports, but if I wanted an HDMI connector, I would have one that fit just as nicely. The problem with dongles is that they are separate, meaning that people leave them behind, they stick out and have to be disconnected when traveling, and all the small inconveniences that come along with that. The "dongle" approach also helps in that they're replaceable. I've known people with computers with broken USB and HDMI ports, but if that happened to a Framework, the ports would simply be removed and replaced. I would have liked it if they could be built differently, for example if I could fit more than four of them into my computer, because more ports is usually better. Still, your comparison seems to miss why that's been done.

* If someone did make a Raspberry Pi CM board, the largest challenge would probably be connecting it to the screen, because I don't think it is an HDMI one. You might need extra hardware to connect those up.

doublelayer Silver badge

Re: An Interesting Article in IEEE Spectrum not entirely irrelevant...

I have read that article, and while the points are basically correct, I find them rather simple. Basically, they point out that it isn't easy to identify or prove who is the absolute best at what they do, if there is such a big distinction in the first place, and that you can't guarantee to keep them. Therefore, you must not assume that you will always be able to find and attract some and must plan for more normal levels of productivity and skill. Most businesses already know this because they don't always find and hire people who are perfect at generating code or whatever else they're generating, as much as they might want to. Most aren't even trying, because all the very knowledgeable programmers I know are uninterested in going to work for the kind of company that builds the necessary but boring applications that are most common; there's no way for them to use their unusual knowledge in a place like that and that's what they enjoy.

Other than that, though, the article seems to run out of ideas. I would think that the most important element is how to improve individual productivity if it matters so much, but that's not mentioned. Team performance is great, but there are a lot of tasks in programming, and I'm sticking with programming not because it's the only job where this applies but because it's what the article talked about and what I know, that are not easily parallelizable. The article is wrong about this, claiming that:

"Everyone uses the same software delivery pipeline. If it takes the slowest engineer at your company five hours to ship a single line of code, it's going to take the fastest engineer at your company five hours to ship a single line of code. The time spent writing code is typically dwarfed by the time spent on every other part of the software development lifecycle."

No, if the slowest engineer takes that long, then they'll produce less code. Maybe they'll turn out to be great reviewers, testers, organizers, or maybe they're just dead weight. The fastest engineer will produce code faster, and as long as the simplest amount of separation has been put in place, they'll be able to do that without waiting on the slowest engineer because the two bits they're writing should be independent of or compatible with one another. You often can't speed up development of a small module by having three people write it, but you can speed up development of three modules by having one person write each of them as long as you can be pretty sure that the modules can be written without coordination. If they don't recognize that, they may be arguing against looking for 10x engineers for a bad reason, "they aren't useful", rather than the many correct ones, only some of which they mention.

Show top LLMs some code and they'll merrily add in the bugs they saw in training

doublelayer Silver badge

Re: Obvious result, bad methodology

"It takes real intelligence to decide that the code needs fixing. likely to succeed... or to fail. It takes intelligence to decide which one it did, so what's the point?"

What's the point of telling it to, or what's the point of my criticism? I did not try to answer the former, which I will handle below, but the point of my criticism is that their paper leaves open an option for an AI adherent to claim that they misstated it:

Researcher: We put in some code which has bugs in it, and the AI put in those bugs. AI is unreliable.

Adherent: I told it to fix the bugs, and it fixed them without having to be told what the bugs were. AI is great.

AI isn't great. Its success at patching a bug that's described right next to the bug doesn't prove that it can fix other bugs. Its generation of buggy code in a preexisting example doesn't demonstrate that it will generate buggy code when actually used. In both cases, prompting it to generate new code would reveal how good it really is: it will produce buggy code on its own, it will not fix the bugs automatically, and it will not consistently fix them when told to. That demonstrates a problem that this paper does not. That makes other papers better than this one.

And what's the point in telling an LLM to fix bugs? There isn't a lot of point. It might work, it might not, and chances are that if you're intelligent enough to figure which one it is, you could have fixed the bugs yourself. I suppose it could be a random thing to try if you're having one of those annoying "there's a bug in this but I can't see it" situations, but if you're writing code professionally or with a lot of experience, the chance is good that the bug you're not seeing is not going to be as simple as the obvious typos in this set, and thus the LLM is unlikely to find and fix it. But there is a situation where there is a very good reason to tell an LLM to fix bugs: you're trying to prove that the LLM is better than it is. Thus, I expect that those with a financial incentive to see it used will use plenty of cases of that and I don't encourage giving them the setup for flawed defenses of flawed technology.

doublelayer Silver badge

Obvious result, bad methodology

We have all seen the code produced by LLMs, and it's not good. It's not surprising that this would happen. However, the paper made some rudimentary mistakes which mean it doesn't do a great job of proving it. All they did here was give it some code from data it was almost certainly already trained on. As we know, these models are quite good at completing the quote, or even quoting without any seeding at all. It is not surprising that, when given several lines of code that don't appear elsewhere and are followed by a bug, they are likely to reproduce that bug. Someone who wanted to prove the opposite could give it several lines of something without a bug and use its reproduction of the verbatim lines that followed to suggest that it wouldn't introduce bugs. Neither would be very useful in understanding what it was likely to do in a new situation.

A lot of papers have already been written demonstrating that, when asked to write code, LLMs do not write it well. They often tested it by giving it specifications and evaluating its output, not priming it with something and seeing whether it fixed it. The former much more closely matches what people who are going to use them are doing. In defense of the LLM, and if you've seen other comments of mine you'll know how little I like to do that, you could probably give it the buggy code from these examples with a prompt like "Fix the bugs in this code" and it is likely to succeed. Not because it understands anything about the code, but because it will find the fixed lines in its training data and insert them. If we are to prove how bad it is at generating new code, we cannot use methods that, in the hands of someone trying to claim it is great, could easily show the opposite.

Dept of Defense engineer took home top-secret docs, booked a fishing trip to Mexico – then the FBI showed up

doublelayer Silver badge

Re: Maybe...

Because connecting an SD card triggers an immediate notice to the monitoring software, which is probably configured to block it anyway. They occasionally need to print things, so that's still an allowed, though logged, method of getting files out of the computer. To me, the solution would seem to be either not letting paper go in or out of the place where the data is available, or just not using printers and requiring everyone to read it on a screen, but there are probably people who are used to using paper copies and won't accept that being taken away, whereas nobody was ever able to just copy what they wanted onto USB or SD cards, so it's not hard to deny that to them.

M4 MacBook Air keeps ports modular, locks tight – still a headache to repair

doublelayer Silver badge

Re: You can lock and unlock software locks at will

Do "digital handcuffs" and "software locks" sound very different to you? Well, of course, since that's the point of your post, but why is that a point of your post? The important factor is the lock, which is how handcuffs work anyway, and there is definitely a lock here, a lock to which neither you nor I have the key, so why is this terminology debate relevant to you or anyone?

doublelayer Silver badge

Re: So what...

Repairability does not require that the board rattle or dislodge or overheat. In fact, to be repairable, it has to fasten down better, because it has to be possible to remove it and install another one without those things happening, whereas a device designed to be unrepairable can get away with tactics that make sure the device is broken if you try. Also, assuming that the problems with repairability are due to better ingress protection, any citation on that? Because MacBooks aren't any more ingress-protected than normal laptops, even the fanless Air, which could be but isn't made airtight.

In short, you've made up a bunch of mythical downsides and benefits that you don't get, and you're probably smart enough to know that these are fictional.

doublelayer Silver badge

Re: Socks

Unless it's one of the many faults which they know how to fix but don't want to, so their suggestion is replace logic board for 75% of the cost of a new laptop*, and since the storage is soldered to it, I hope you had a good backup even if the disk isn't the part that failed.

* Cost of logic board: 25% of new laptop. Cost of labor to install it: 10% of new laptop. Surcharge for using their program which allows the new board to work with your screen: 40% of new laptop.

HP Inc settles printer toner lockout lawsuit with a promise to make firmware updates optional

doublelayer Silver badge

Which, from a trustworthy company, would look reasonable because I'm sure the stuff degrades eventually* and, if used, could cause problems. However, every printer company in existence seems to be untrustworthy, so I assume they stop working on any possible excuse that will send you to buy another cartridge.

* I know this is a problem with inkjet printers. I'm not sure how toner ages. I print so infrequently that I don't have a printer, though that means I have to occasionally commandeer the office printer.

doublelayer Silver badge

Re: HP

Remember that this is the person who invented time travel yesterday. I'd plan to wait a while for this, unless the plans can be sent backward to today using that ansible.

Political poker? Tariff hunger games? Trump creates havoc for PC industry

doublelayer Silver badge

Re: "President Trump's ongoing trade war ..."

While the people who pay the most are US residents, it does hurt others. Canada didn't put retaliatory tariffs in place just because they don't like the 51st state thing. A lot of Canadian businesses that used to sell to the US are going to suffer somewhat as a result of this. The problem is that Trump thinks three things which are all only partially true and probably better described as mostly false:

1. Punishing Canadian businesses will mean more US businesses will make what the Canadian ones used to make. It isn't that easy.

2. The pain is mostly felt by the Canadians. It's not. They get less demand, which isn't fun, but the US gets less supply and higher prices, which is generally less popular.

3. This tactic is likely to cause more problems for the Canadian businesses than US ones. It's the other way around because of retaliatory tariffs, meaning that China, and soon, EU countries are virtually guaranteed to buy from the Canadian business over the US one if they were otherwise identical pre-tariff.

Still, with actions going back and forth, I think multiparty conflict is an appropriate description.

CISA fires, now rehires and immediately benches security crew on full pay

doublelayer Silver badge

Re: Rehired. On full pay.

"Clearly not a cost savings/efficiency measure then."

They don't have a choice. The court said it was illegal to fire them that way, and if they want to comply, they have no choice but to continue to employ them at the same pay levels as before. They can choose whether they work or not, but they couldn't hire them back on less pay without violating that court order.

Of course, it is easy to question whether the original firings were intended as a cost-saving measure as claimed, and there are plenty of reasons suggesting that was not at all what they had in mind. Hiring them back was not what those people wanted to do though, so you can't really show the costs as a failure of the plan since it's exactly the opposite of what they had planned.

doublelayer Silver badge

Re: Why do you assume that CISA are happy with the DOGE developments?

CISA can't be happy or unhappy about any of this. People who work there are probably unhappy with their colleagues being fired. The people deciding what it does mostly include the directing staff of the Department of Homeland Security who were appointed by Trump, so they're probably willing to carry out his orders and annoyed that the courts have blocked this one. Ordinarily, they'd have a director to do this, but the previous one resigned and a new one hasn't been put in yet, hence the increased involvement from its parent department. I wouldn't read pleasure into their cooperation with this order any more than you should read it into their cooperation with the previous one; they have no control either way over what happens here and simply do what the latest ruling says.

US tech jobs outlook clouded by DOGE cuts, Trump tariffs

doublelayer Silver badge

Re: Absolute Fairy Story

That is true, and if you could access the raw data from the government, it would be a much better source. However, most of that is protected quite correctly by privacy legislation. If you are hired for a job that was never advertised, the tax authorities will know, but someone looking at ads wouldn't know that it had happened. For example, they often release numbers of people who lost their jobs and are using unemployment programs, which is a reliable number because they know exactly who that is, but it doesn't give you enough information to know what the people were doing beforehand; they know that too but they don't release it. Therefore, estimating the number of tech people who lost their jobs recently is not very accurate when using public information even though there is a dataset that could tell you. I'm not sure having the more accurate data will produce enough benefits to justify the privacy implications, so I'm fine to live with less accurate research.

doublelayer Silver badge

Re: Absolute Fairy Story

Because discrimination causes problems, so we forbid it, and if we forbid it, then they're not supposed to do it. Those people who still do that but don't say so are committing a crime.

If you think it shouldn't be a crime, you could make that point, but unless you're doing so your argument boils down to "if people are going to commit a crime anyway, why not admit it?" and the answer is "because they don't like the sentences you get for doing that".

doublelayer Silver badge

Re: Absolute Fairy Story

Correct. And to determine that the rate hasn't risen, fallen, or varied significantly with time, what would we do? This is the reasoning used to accept bad data, and it's backed up by more bad data. The answer is that we mostly don't know how many of the ads are true or not, and we also basically don't know whether the prevalence of this has changed. We use the data we have and hope it's good enough, and that hope may be successful, but who knows.

doublelayer Silver badge

Re: Absolute Fairy Story

I think you're right that ads don't track this very well, especially because it can go the other way, with people getting hired for jobs that weren't advertised. It's like several other areas: when the available source of data isn't very good and correct data is really hard to get, a lot of places will just cite the available source and assume it's good enough. It also doesn't help when a lot of tech jobs are lumped together, or might not be, and they're not clear about which sectors they're talking about.

Do AI robo-authors qualify for copyright? It's still no, says appeals court

doublelayer Silver badge

Re: Time and gravity

Deleting a post and reposting it doesn't count as time travel. I see you fixed the line breaks. Well, some of them anyway.

doublelayer Silver badge

Re: Maybe there's an ulterior motive to this?

This is one reason I'm quite glad for this guy. Every time he brings a case based on the stupidest legal arguments, he puts one more brick in the wall against that argument should someone (I wouldn't put it past anyone who makes LLMs) try to do the same. I'm sure they would if they could, and somehow they would get a lawyer to argue simultaneously that they can use any material, no matter its copyright status, and that they own all the versions of that data after their program has chopped it up and reconstituted some of it.

doublelayer Silver badge

Re: Of course it's Stephen Thaler again.

Nothing says that his idealistic point is one I agree with (I don't) or logical (it's not). However, there is still a difference between the person who tries to pass this because they stand to gain and someone who does it by being completely deluded, and I have seen enough similar motions to conclude that it is the latter in his case. The patent office has already told him that he can have his program spit out as many patents as he wants and he can own them. If this was a ploy to flood the patent system, he has the power to do it. He isn't doing it. Instead, he is trying over and over to let a program have ownership itself, and he alleges that the programs involved are beings capable of owning and disposing of the intellectual property they'd theoretically own.

Perhaps the problem was my use of the word "idealistic", which suggests a positive element. I meant it in the sense of pursuing a personal ideal, even an illogical and stupid one, without having a pragmatic purpose behind the action. From what I've seen of his court cases so far, he doesn't have a pragmatic purpose, he wouldn't stand to gain if he actually won, he isn't adjusting his tactics at all in order to make it more likely that he will win, and his statements tend to be philosophical and largely nonsensical.

doublelayer Silver badge

Re: Of course it's Stephen Thaler again.

He seems to be doing this for an idealistic point, since the patent office said that he could patent the thing the program created but he wants the program itself to own it. I wonder if he has a suggestion for how a program would own something and act based on that ownership. If he somehow succeeded, what does he think his program would do? If it doesn't already have code in it to sell or license out the patent it creates, then surely editing it to insert that would make a different program that doesn't own them?

I wonder if this is another case of sci-fi overdose. I find that quite a few people, mostly people who work with or at least around tech, seem to have read a lot of classic science fiction, as I have as well, but have forgotten that it was intended to be fiction. These people do things like grab doomsday scenarios from a story written to be interesting rather than extrapolating to theorize a likely way that our current situation could lead to disaster; invest time or effort into duplicating something that wasn't written as a suggestion; or, in this case, ascribe to computers the kind of attributes given to robots or aliens in the stories without realizing that most of those stories were using robots or aliens as analogies to humans or, if they weren't, were describing things that were unquestionably conscious and sapient. Whether it's Zuckerberg not understanding why his VR business doesn't have the same pull that immersive environments do in cyberpunk books; virtually everybody at OpenAI and Anthropic worrying about their AI taking control of nuclear arsenals instead of all the things it's actually going to break, without considering that authors from the 1950s and 1960s might have been thinking a lot about nuke safety; or people who think the point of space travel is to get a human to Mars as soon as possible, without considering how much nicer it might be to get lots of probes there before any human is around to mess it up (though some authors actually did write about that), they seem to think that plots written for different reasons and often many decades ago are futuristic and visionary thinking.

doublelayer Silver badge

Re: Who would own that copyright?

That is why this ruling is so important. If the output of AI could be copyrighted, then the creators of the programs, among the most prolific copyright violators in existence, would almost certainly try to automatically copyright anything it spit out. Not so much a problem for people whose work it quotes, because they can prove it was in existence before the model was created, but a big problem for anyone they claimed used their model's output afterward. The hypocrisy would somehow not be obvious enough to automatically cause their suits to fail, though I do think they'd lose a lot of them. A blanket no-copyright stance is very helpful though.

doublelayer Silver badge

Re: Time and gravity

Please, do me a favor and send the following message back in time to yourself, say about a week or so:

"Line breaks between random words are unnecessary. The XKCD suggestion to have a line break after every sentence wasn't an honest one."

When the post corrects itself, that will prove your tech exists.

Trump fires Democrat FTC commissioners, presaging big tech policy shifts

doublelayer Silver badge

Re: Reminds me of the old saying...

The theory is that you hire someone who has few skills but you think is smart enough to learn. You pay them and give them training in something. Now, they have skills and you're hoping that they'll do some work for you, but they choose to leave because they now have skills and can more easily find a different job, meaning the money you spent while training them is wasted. And, to be fair to them, that sometimes happens. The problem, as usual, is that there are limited cases where non-competes make sense, but employers decided to use them for lots of other situations, most of them clearly abusive to the employee and anticompetitive to everyone else. A lot of people misusing something can break it for everybody.

Microsoft isn't fixing 8-year-old shortcut exploit abused for spying

doublelayer Silver badge

Re: It's not a UI issue. It's an AI issue.

The sanity check being what, exactly? A maximum size? Everyone knows that, as soon as you put a maximum on something, someone finds a reason why they need more than that maximum. 8.3 filenames were too short, 640 KB was not enough RAM, a 2 GB file size limit was not adequate for a filesystem, and on it goes.

doublelayer Silver badge

Re: Microsoft is right

"Knowing about things like the file properties dialog is for clever Register readers who know how to check for these things not the standard user who just wants to get the email done and out of the way."

That was basically my point. If someone doesn't know enough to open the properties dialog, then this whitespace padding thing does not make anything more convincing to them. Thus, all such users are irrelevant. I would hope that anyone who does know to use it also knows about lnks in files you get sent over email, and for those who don't, I would hope they recognize that opening a command prompt window alone is implausible and useless. Either of those reactions also make this a nonissue. Thus, I agree with Microsoft: low priority UI fix still worth doing, but no need to yank programmers onto an emergency patch.

doublelayer Silver badge

Re: Microsoft is right

Yes, one could assume that. It seems more likely to me that it hasn't been fixed because it is a relatively minor thing which doesn't introduce any new vulnerability in the system and likely doesn't actually help the attacker much at all. The workflow for this exploit is that a user sees a .lnk file in something they've downloaded or been sent. There is virtually no justification for opening that in the first place. If the user knows this, they won't use the .lnk file and the attack fails. If the user doesn't know that and executes it, the attack proceeds. This only helps in the situation where the user is knowledgeable enough to know that these shortcut files exist and identify one, but is still willing to execute it after having a glance at its properties, properties that, with the obfuscation available, make it look like it is going to spawn a command prompt. The command that it is actually going to run is not visible because of the whitespace. So, if someone does look at the ostensible properties and execute it, they should expect that it opens CMD at the current location and nothing else. So they shouldn't run it anyway.

Microsoft should still try to fix this, but I agree with them that it is a low-priority issue. I wouldn't be surprised if attackers have no evidence that this padding ever helped, but it was easy to include, so why not do it just in case it helps once? As another commenter pointed out, it actually makes the file easier to spot automatically, because the padding makes it very large, which normal .lnk files never are.
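To illustrate that last point, here is a minimal sketch of flagging oversized shortcut files. The 4 KB threshold and the walk-the-tree approach are my assumptions for the example (ordinary .lnk files are typically a kilobyte or two; multi-kilobyte whitespace padding balloons them well past that), not anything taken from the exploit writeups:

```python
import os

# Ordinary .lnk shortcuts are usually well under 4 KB; padded command
# lines inflate them far beyond that. Tune for your environment.
SIZE_THRESHOLD = 4096  # bytes

def suspicious_lnk_files(root):
    """Return paths of .lnk files whose size suggests padding."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith(".lnk"):
                path = os.path.join(dirpath, name)
                if os.path.getsize(path) > SIZE_THRESHOLD:
                    hits.append(path)
    return hits
```

A size check like this obviously isn't a parser for the shortcut format; it just exploits the fact that the obfuscation makes the files conspicuous.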

DoorDash sued for allegedly branding customer a fraudster after delivery photo query

doublelayer Silver badge

Both off the rails?

To me, it seems like both parties here have taken this into strange places. The customer's concern about pictures is possible, but the purpose of the pictures seems obvious: it helps customers find things when they haven't been handed to them directly, and it proves that the delivery was completed. If drivers intended to burgle the houses, not taking a picture of the delivery wouldn't protect anyone, because they could easily take many pictures as they arrived and just not send them to the customer. And even if a delivery photo did somehow assist an aspiring burglar, that is not a risk DoorDash created by requiring one.

So I'm inclined to side with DoorDash against this paranoia, but they then responded with flimsy attacks on the user. Using a corporate card is not a fraudulent act, and, if their app works like similar ones, they probably specifically recommend it and have UI controls to make it easy. Accusing a customer of fraud is not the appropriate response to dealing with someone, even someone with unrealistic concerns. The more appropriate response is to say "People with cameras near you can take pictures of your buildings and cars, and we neither encourage nor discourage it. Nothing we can do; feel free to stop buying from us if you have a problem with that." and let the customer seethe. Unfortunately, account closure on false pretenses is not a tactic invented for this case, and even though I don't find the person whose account was closed sympathetic, I can't side with DoorDash either.

Dash to Panel maintainer quits after donations drive becomes dash to disaster

doublelayer Silver badge

The problem is that you can't get a magic money pot for people who do work on a purely voluntary, when-and-if-they-feel-like-it basis. I have several open source projects for which I don't get paid. There's a donation button, mostly for show. It's been used twice, and one of those was me testing that it worked. However, users get the updates I choose to make. If I don't find something relevant, the code doesn't get written. If I am busy, the code is delayed. If I grow bored, I'll just stop working on it. These are the kinds of risks that come along with getting the code for free and without many restrictions. I'm fine with encouraging donations more forcefully, but not with mandating payments. We already have software that mandates payments; it's called proprietary, and I'm perfectly fine with it. Just don't write proprietary and tell me it's open source.

Why is being free at download important to you when you're already making everyone pay whether they use the software or not? Wouldn't paying at the point of download, or after some trial period, be fairer, as it would only charge people who use the software? Not that I'm in favor of that either. I can also make an analogy to music. Contracts with music publishers can indeed be predatory, but it is now quite a bit easier than it once was to produce your own music. I've heard it suggested that, instead of letting people do that, we have a common fund from which musicians get grants to assist with their work. My problem with this is: do we fund people who like to make music but whose music nobody likes? I don't think that is very helpful, especially when doing so reduces an already small pot. I prefer a mechanism where people express their preferences and the musicians get rewarded based on the number of people they can appeal to, which you can do with or without companies and contracts.

doublelayer Silver badge

If we want open source to be the future, we have to decide why we want that and what we think open source is. Because if we go with your mandatory payment system, someone is going to ask why this is any different from proprietary software, where you also have to pay to use it. They may decide that the answer is that it isn't, especially as you erode the freedoms that contradict mandatory payment.

That's in the situation where you list all the open source software you run and submit payment to its authors. There are worse options available, such as charging everyone, including people who don't run it. Your suggested tax on internet connections is virtually guaranteed to pay money to projects the users of those connections don't use. Why should someone fund a piece of software they have no interest in? Your suggested solution, putting that decision in the hands of organizations, means that you're now giving the power to people who may be deciding this for idealistic purposes. For example, would the FSF choose to fund mostly or entirely things using licenses they wrote? Would they make their decisions based on the ideals of their leaders, ideals that not every writer of the software agrees with? What if the OSI were making the decision? They're a divisive organization (not that the FSF escapes that either). I quite like the OSD, but the organization itself has made some stupid decisions, including embracing AI systems that are really not open source, mostly giving Meta a rubber stamp for claiming its AI is open source when it isn't in the slightest.

Choosing to write open source software comes with no guarantee of getting paid for the same reason that choosing to run open source software comes with no guarantee of bugs getting fixed. It can make life more difficult for people who want to write it, but there are ways around that rather than forcing open source to be proprietary after all.

doublelayer Silver badge

I don't care what the FSF's position is. I agree with some of their positions and disagree, sometimes strongly, with others. I'll decide on my own. Donation requests can sometimes be invasive, but that doesn't make them disallowed. If it gets bad enough, maybe I'll write a patch that blocks them. I don't have a right to have the software without an annoyance. Maybe I can convince the author that adding it will be counterproductive, though if I've never contributed code or money, my comments may not carry much weight.

I have written open source software before without the expectation of payment. There is a donation button on the website, just for show. I assume it still works, but nobody has used it for multiple years. This is fine with me. However, I also reserve the right to stop working on that software whenever I choose to do so. Donations matter when continuing is becoming less feasible and the maintenance burden is high enough that someone might choose to stop without them. I've faced that situation with software I use on occasion. Sometimes, I decided that I quite liked the software, wanted to keep using it, wanted it to have updates, and didn't want to do the work myself, so I paid. Sometimes, I decided that I could live without any updates and didn't value their work as much, so I didn't pay. If they were too intrusive, I stopped updating, worked to remove the intrusive additions, or started a fork. In all cases, this was my free choice in response to their free choices. A donation request screen is not malware. They have the right to add one if they want.

doublelayer Silver badge

Re: Imagine if...

This is one of the things any user or contributor to open source software has to learn eventually. Nobody is owed money for choosing to do it. Nobody is owed code that does exactly what they like. These things go together and you can't have one without the other.

I don't use this software. I imagine that, if I did, the presence of the donation button, if I'm correctly interpreting that it would reappear after being hidden when the software restarted, would annoy me. I have every right to complain about it, but the maintainer has no obligation to care. Sometimes, that means contributors add things I have serious problems with. In that case, I or someone else will fork the code to remove them, optionally adding in new things written by that person but not the annoying or dangerous stuff. That is also our right. The most we can do if we don't like what a developer did to something we run is complain or fork. The most they can do if they would like more support than they're getting is to campaign for it, in the code if they like, or quit. Any attempt to change either, whether it is forbidding a developer from making a change we find annoying or requiring that any user pay to have the code, is incompatible with the freedoms of the software.

doublelayer Silver badge

Not quite. The license says you must distribute a copy of the source to anyone who already has the software. Not to anyone who might want it. If I make some software, license it under GPL, and give a copy of the binary to you, then I must give you the source for that if and when you ask for it. I have no obligation to provide that source to someone else.

However, and this part is very relevant to Red Hat, you can give either of those to anyone you want, and the only proviso is that if you give the binary to someone else, you have to give them the source just as I had to give it to you. Red Hat uses an additional license which tries to limit others from doing this. This additional license is directly opposed to the spirit of the GPL, but not necessarily in violation of it, because it doesn't contradict it; it just applies penalties if you exercise your rights. That puts it in a murky area, not because the code isn't on a public server, but because that license's compatibility with the GPL is untested.

For more information, a blog post from the Software Freedom Conservancy may be helpful.

DeepSeek-R1-beating perf in a 32B package? El Reg digs its claws into Alibaba's QwQ

doublelayer Silver badge

Re: Chain of thought

Why is it stupid? Basically, it boils down to splitting the initial task into subtasks which can be completed more accurately and trying to create additional verification tasks. And, as inaccurate and unreliable as these models tend to be, that approach seems to work in that models with it produce less bad output than models without it. I expect that any artificial intelligence, whether a real one actually capable of doing a task or a simulation sometimes able to do the task, is going to need the ability to do this. It is, for example, one of the primary necessary skills of a programmer: taking the idea that someone had and figuring out what components you would need to do that, which ones you can get from elsewhere, which ones you have to create, how you would put them together, how you will test them, and when you need more information to do those above things. Other than the last one, chain of thought is designed to try to do those things.
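As a toy illustration only, with no model involved and the task and checker entirely made up for the example, the decompose-then-verify shape the technique relies on looks something like this:

```python
def solve_with_verification(numbers):
    """Toy 'chain of thought': produce an answer via explicit
    intermediate steps, then verify it by an independent route."""
    steps = []  # the visible chain of intermediate results
    total = 0
    for n in numbers:
        total += n
        steps.append(f"running total after adding {n}: {total}")
    # Verification subtask: recompute the answer a different way and
    # reject the chain if the two disagree.
    if sum(numbers) != total:
        raise ValueError("verification step caught an inconsistency")
    return total, steps
```

The point is not the arithmetic; it's that each step is small enough to get right on its own, and the check catches a chain that went wrong, which is the same bet chain-of-thought prompting makes.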

Need cash? Your IPv4 stash can now be collateral for $100M loans

doublelayer Silver badge

They're definitely not borrowed. It's a little tricky whether assignment from IANA, or more often assignment from previous structures before IANA inherited management, is the same as ownership. However, since they can be transferred freely for cash, I think it's close enough. It doesn't need to be legally identical to ownership for a loan to work.

doublelayer Silver badge

Re: Monopoly?

That's definitely the plan. If the businesses do fine, the lenders get the interest, and if they don't, they get the addresses. Because we're not adopting IPv6 fast enough, for now the prices are only increasing. Cloud providers would be happy to buy them, and there's plenty of rental income to be made if you can get enough of them.

We did not have Brave clashing with Rupert Murdoch on our 2025 bingo card, but there it is

doublelayer Silver badge

Ideally, I'd like to split this up. Indexing for a search engine should be an allowed use, and it is; that's been decided in court. Use for AI should not be allowed. My ideal solution is that they be forbidden from training their AI on it but still allowed to index it for search. If training the AI is important enough, and I doubt it is, they can pay to license that.

And on the topic of Google, that goes for them too. I want the cases to decide that search indexes stay, training Gemini must get another license. I have reason to believe that this is already required by law, but courts haven't decided whether it is or not. There are several lawsuits challenging exactly this behavior.

doublelayer Silver badge

Re: it would be nice to get a straight answer out of a news headline

Except when the AI summary says something the article doesn't, for instance that someone died who is still very much alive, and they decide that the summary is good enough and start acting on that summary. Enough people do that already with human-generated falsehoods. I see no benefit in attaching an automatic system to augment that with an added random element.

BOFH: HR's AI hiring tool is perfectly unbiased – as long as you're us

doublelayer Silver badge

Re: In my day...

In my mind, this is one situation where AI could possibly make a minor improvement. Well, it's still crap, but it might somehow manage to be less crap than what you had before. Fuzzy matching is a pain. The rules you write to try to match things are fiddly, and they can be thrown off by tiny changes in grammar, phrasing, terminology, or sometimes even spaces and punctuation. It often means you're identifying the resume that best fits your idealized pattern, which, if people knew your pattern, for example if you were obvious about it in the job description, means finding the person or agency best able to mash their resume into that format. Since most places don't make the pattern public, it's a pure guess. People know this is going to happen to them, so they do the equivalent of applying an early-2000s SEO strategy to their resume. It might work on your system, but any human trying to read it gets something less useful than a shorter one that doesn't try to pack everything they ever did with buzzwords. An AI at least has some chance of handling synonyms or different word order a little better, although I expect it will still have most of the downsides. I've yet to find anything that can even slightly approximate reading or skimming by a knowledgeable person.

I don't know how well it worked for you, but most experiences I've had lead me to believe that your version may have been a lazy and ineffective mechanism that saved a few weeks of manager time at the cost of hiring worse candidates and passing over perfectly qualified ones.

AI bubble? What AI bubble? Datacenter investors all in despite whispers of a pop

doublelayer Silver badge

Re: The thing about bubbles

I don't think it's very easy to take an H100 and make a consumer GPU out of it. Those things don't come apart very well. If this happened, Nvidia might use the same technology to build more GPUs, but the existing ones would stay in the datacenters. My guess is that they'd be rented to cryptominers who don't need that much RAM and other AI projects that do.

However, I don't think the popping of the LLM bubble would look like that. There are several types of model that aren't cost-efficient. I think LLMs are that way, though it may take a long time for companies to figure that out. I might be wrong there, as I know an uncomfortably large number of people who think LLMs are great as they are, and if there are enough of those, it doesn't ever have to work to continue having revenue flow in. Even if I'm right, though, there are other kinds of models that can benefit from this hardware and would be happy to use freed capacity. For example, several articles have been written about movie studios trying to make AI-generated actors because they're cheaper. They already know how to make money from that. Unless the models simply can't generate the video they need, mostly of extras and background roles, they will have an interest in using them. You don't need to train on the entire internet to make that happen, and the work of adding all the little corrections needed to stop the model adding extra fingers can be done by cheap labor somewhere. That gets classed as AI too, and I don't think it will pop with the rest of it.

OpenAI asks Uncle Sam to let it scrape everything, stop other countries complaining

doublelayer Silver badge

Re: Is deep learning a copyright violation?

It is popular among LLM creators and fans to try to draw an equivalence between training and human learning. The word was originally chosen to make this association, because metaphors are particularly common in this area. That does not make them similar. But let's take this from the top of your comment.

Copyright has involved several restrictions which were enforced long before computers. As written today, having a copyright on a work means that you're the only one who gets to sell it, that it is illegal to use it without having a legal copy (purchased, borrowed, or provided to you by someone with permission counts; going to get an unauthorized copy does not), that you can attach a license to use of that work (e.g. open source and every other software license), and that these rights apply to substantial portions of the work, not just the entire thing as a unit. It's not just being the only person allowed to sell it.

You maintain that an LLM does the same thing as human memory, but this is not correct. "The way an AI processes information fed into it mimics the way humans do it.", for example, is just wrong. Neural networks do not mimic human neurons; we don't have a good enough understanding of how specific human neurons work. LLM memory uses specifically divided tokens. Human brains do not; human memories span much longer units. LLM memory cannot generalize further than those tokens, whereas human brains can. Their difficulty doing mathematics without writing programs and having them executed demonstrates this. There are lots of differences between this software and brains, which is perfectly natural; as cool as our brains are, and as great an AI as we could create if we could actually model them, our knowledge of neuroscience and raw computation power is insufficient to model the entire thing. The LLMs we produce do the task assigned to them, evidently with sufficient accuracy for the people who sell and use them, though not for me, so nothing says they have to work like a brain does. Nothing, that is, except for the argument about why their use of copyrighted data is valid.

Another problem is your contention about what happens to the data after it is ingested. "The information becomes an integral and inseparable part of a body of knowledge, identity and decision making (simulated in the case of an AI but based on the same principles as the organic neural networks that AI is modeled after) and does not continue to exist within that AI's memory as a separate "work" that the AI then redistributes or publishes in whole or in part." None of those parts is true. The tokens are linked as probabilities, meaning that while some of them are entirely discarded, others are still present in their original form. LLMs do quote their training data frequently, sometimes when they're asked to, sometimes by mistake, and sometimes when answering unusual queries that don't have a lot of associations in their trained state. A human brain might do the same. If a human uses their brain to quote copyrighted information for an audience, that's not allowed. The fact that an LLM is possibly doing this unintentionally doesn't change the result.
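The "tokens linked as probabilities" point shows up even in a toy next-word model, which is my own illustration and vastly simpler than a real LLM: when every word in a training phrase has only one recorded continuation, generation reproduces the phrase verbatim.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the words that followed it in training."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, seed, length):
    """Sample a continuation one word at a time, like next-token sampling."""
    out = [seed]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)
```

With training text whose words each have a single successor, `generate` can only emit the original text: the probabilities collapse to certainties, which is one way verbatim quoting falls out of a purely probabilistic model.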

On the way that you learned programming from a book: you are allowed to teach from a book. If you gave out copies to the students without getting permission, that would be a copyright issue, but if you just read it, learned from it, and taught based on your learning, then you are not violating copyright. There are three differences between you and the LLM here:

1. You presumably obtained an authorized copy before reading it. LLM authors could have done that. The many court cases show how often they chose to obtain illegal copies, either deliberately or by scraping sites where someone else did. Before any training takes place, that's already a violation. In many cases, this is the only violation being litigated, meaning that whether you agree or disagree with me about the differences between an LLM and a brain, it doesn't matter, because they haven't bothered to train on it yet. At this point, they just have a copy in their big storage array of data they plan to retrieve and send to their training later, which violates the license attached to most books, which often says that the book is "not to be stored in a retrieval system". This means that, even if they did go and buy a book off the shelf, that would probably be insufficient for all the things they expect to do with it and they would need a special license, but they haven't even tried that. They have proven this by obtaining some licenses to datasets, for example paying Reddit for copies of their users' posts. I do not have any objection to them training on data like that which they have permission to use (if you don't want Reddit to be able to sell your posts, read the Reddit terms and conditions and maybe don't post there). I do have a problem with data that either doesn't have that open a license or where the use is explicitly disallowed.

2. You probably didn't just learn to program from that book. After reading it, you wrote some code of your own, adding extra information to what you were teaching later. You may have read more sources as well. The LLM doesn't do that. It has never written code, watched the result, and understood whether it did what was intended or not. An LLM can write code, but it has never had the experience of debugging toward a goal. It is as content (to anthropomorphize it a bit too much, but I didn't start it) to write code that doesn't compile as to write something perfect, and it doesn't adhere to the letter or the spirit of the specification except by chance. While your teaching is based on an actual goal, the LLM's is just based on what teaching looks most like the text it already saw. In fact, the code it writes is mostly based on other code it saw, not the content of the textbook. If the textbook says never to do something because it's a readability disaster but a lot of the code in its training data does it anyway, it will very likely use that structure anyway. A normal mind can easily mentally reformat that the way the book suggests; I've seen lots of code that uses bad structure or expressions, can still glean what I need from it, and don't write that way myself because I remember cautions from others.

3. If your teaching wasn't based on anything but the book, then there is a chance it could be viewed as a copyright violation. If I had to teach about something I knew nothing about and decided that the easiest way to do the job would be to get a textbook and just parrot it back to my students, this could be interpreted as a performance of that book. I may have summarized and paraphrased the contents, but that is not enough to prevent the violation. In practice, nothing would really happen, because the copyright holder wouldn't know I was doing it, and for that matter neither would my students. They'd probably both just think I was a bad teacher. Also, nobody is very interested in trying to prove I was doing that specifically instead of being one of the many other types of bad teacher. This is the hardest argument to make about an LLM, which is probably doing it to lots of sources instead of one. I think it is still correct and a viable complaint, but the arguments about use without permission and direct quoting are more convincing.

doublelayer Silver badge

Re: While we're at it

Reason 1: If I have a great new idea, but to figure out if it works will take three years of my time and probably hiring some people, but I don't have any money, what can I do? I can talk about this idea everywhere in the hope that someone else says it sounds great, pays me lots of money with which I hire assistants and buy materials, I spend the years trying it, and hopefully it works. If I don't find someone who wants to give me that support with no strings attached, then I write it down, work on it on the weekends outside of my work, and maybe it doesn't get invented. If, on the other hand, I can tell the people providing the money that, if it is successful, they can get their money back, it becomes easier to get that support.

Reason 2: Great, I've spent my time and made something new. Well, if it wasn't protected, but I want to get a reward for my work, what do I do? Correct answer: I hide everything I can about how this works so that, until you figure it out manually, I'm the only one that can provide it. If I'm good at that, maybe you never figure out how it works and you always have to come to me to get it. What does that do for someone else who has an idea for how to improve it or would have had one if they knew how it works? They're out of luck and the rest of us don't benefit from their idea. If I get the patent, I publish it. Everyone can read how it's done, and after a few years, everyone can do it using my plans. If someone comes up with an improvement, they can build that around my thing before my patent expires, they could bring it to me and add it in, or they could wait for the patent to expire and do the whole thing.

I somehow think you were well aware of both of these things before I wrote this comment.

doublelayer Silver badge

Re: Hi ! We want to steal everything...

All that waffle, and all you have to say to prove it wrong is:

1. Digital sequences, like any other product, require cost in time and often in money to create.

2. Patronage is a pipe dream you should stop having, because everyone else here doesn't get patrons whenever they have an idea.

3. At the moment, those things do, in fact, have monetary value because copyright exists. You will not prove it wrong by insisting that it does not. To convince people, you need to argue that it should not, and I don't have much hope that you will.

doublelayer Silver badge

Re: New technology calls for new a new copyright paradigm

I don't think the comparison is the problem. The problem is that, no matter what the activity is, we decided that it was illegal. Now, it is easier to do it. Your comment says that we need to reassess whether it is legal because of that, when the two things have no dependence on one another. We could decide today that it's a bad idea and nothing gets copyright protection anymore, but for the same reasons that we could have done so in 1963. That it is easier is not even a bad argument for doing it; it's no argument at all.

You've decided to try to pull in the irrelevant additional argument about whether using a work as training data to an LLM is the same as a human reading it. It's not, by the way. We could have that discussion in a separate thread if you like. It isn't relevant here because it has nothing to do with how easy it is to do, and that was the entire point of your history repeating itself argument. I have a feeling you are bringing it up now because it is hard to defend your original point, but that does not make the two arguments connected.

doublelayer Silver badge

Re: New technology calls for new a new copyright paradigm

Before 1990, individual computer users were content in their security position, since malware was crude and limited. Always-on internet access had not spread very far, usable encryption software was limited in scope, and basically no home user had a public mechanism for a random attacker to send them malicious instructions. Then along came easy internet access, email, and AES, and people to put them together and make ransomware. Suddenly the genie was out of the bottle and the rules had changed forever. Users, predictably, went up in flames, screamed bloody murder, and dug in their heels to resist having their disks encrypted and held to ransom. It didn't help one bit. Those who failed to protect themselves got encrypted. Those who failed to back up lost their data or had to pay. Darwin would have nodded approvingly.

Now we see the same. There's no really effective way to eliminate ransomware entirely. And that means, simply put, that our ideas about whether it's legal to ransom and the associated legislation have to change to adapt to the new situation. It's that or be left behind.

Does that argument make any sense to you? Just because something that was and still is illegal has become easier, does that mean we have to change the law to make it legal? Maybe we need new ways of enforcing that law to deal with more widespread instances, and if we decide we no longer approve of it, then we can repeal it. There are some people who do; I've seen the argument that anyone who gets successfully hit with ransomware and doesn't have backups was stupid at least twice over and thus deserves anything they get, often from people who put as much thought into their moral philosophy as the fanatic pirates put into their philosophy that whatever they want is a moral imperative. In reality, the likely change to the law is making paying ransoms illegal when it formerly wasn't, to try even harder to eliminate as much of this as we can.

Pirate Bay financier and far-right activist Carl Lundström dies in plane crash

doublelayer Silver badge

They did say "island of Honduras". I have no information about the plan they're talking about, but Honduras has a bunch of islands in two oceans.