* Posts by doublelayer

9378 publicly visible posts • joined 22 Feb 2018

808 lines of BBC BASIC and a dream: Arm architecture turns 40

doublelayer Silver badge

Re: "RISC architecture is going to change everything"

You can do equally as much with three instructions and one register. We don't all do that because we're worried about more than what it is possible for our computers to do; we're also worried about what it is efficient or feasible for them to do.
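
To make that concrete (purely an illustration, nothing from the article): here's a minimal sketch of the even more extreme case, a one-instruction "subleq" machine. It can in principle compute anything, but even adding two numbers takes a chain of subtractions, which is exactly the gap between possible and efficient.

    #include <iostream>
    #include <vector>

    // The whole instruction set: mem[b] -= mem[a]; if the result is <= 0, jump to c.
    void run_subleq(std::vector<long long>& mem) {
        long long pc = 0;
        while (pc >= 0 && pc + 2 < static_cast<long long>(mem.size())) {
            long long a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];
            pc = (mem[b] <= 0) ? c : pc + 3;
        }
    }

    int main() {
        // Adds the values in cells 12 and 13, leaving the sum in cell 15,
        // using nothing but subtraction. Cell 14 is scratch, cell 16 holds
        // a constant zero used to build an unconditional halt.
        std::vector<long long> mem = {
            12, 14, 3,   // scratch -= x        (scratch becomes -x)
            13, 14, 6,   // scratch -= y        (scratch becomes -(x+y))
            14, 15, 9,   // result -= scratch   (result becomes x+y)
            16, 16, -1,  // 0 - 0 <= 0, so jump to -1: halt
            7, 35,       // cells 12 and 13: x = 7, y = 35
            0, 0, 0      // cells 14, 15, 16: scratch, result, constant zero
        };
        run_subleq(mem);
        std::cout << "7 + 35 = " << mem[15] << "\n";  // prints 42
    }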

doublelayer Silver badge

Re: "RISC architecture is going to change everything"

The chip is Turing complete, so it can run anything you choose to compile for it. It can run iOS if you give it enough RAM. The point is that it doesn't, because it would be slow without those thousands of operations. When people want performance, they go to a chip that has thousands of operations. Does that make it non-RISC? Does it mean that RISC was abandoned by ARM in the quest for performance? Since RISC isn't well-defined, there isn't a clear answer to this.

doublelayer Silver badge

Re: "RISC architecture is going to change everything"

It depends whether you give ARM credit for still being RISC, and if you don't, where you draw the line. The Cortex-A range chip in your phone likely supports a couple thousand instructions. True, they have several subsets of that, the smallest of which (the Cortex-M0) has only about 55, but you don't run a phone or computer with one of those as the CPU. I don't think I can call that RISC, even if it still has load/store and comes from ARM.

Trump admin freaks out over mere suggestion Amazon was going to show tariff impact on prices

doublelayer Silver badge

Re: Dihydrogen Monoxide

You can't include that one about a profitable industry in the US. That might outweigh anything else in the list. No, to really clinch it, you have to report that nearly a billion dollars per year is spent importing DHMO products, and that Canada is a source of the raw substance and partially regulates the US's use of it. That should be enough.

TAKE IT DOWN Act? Yes, take the act down before it's too late for online speech

doublelayer Silver badge

Probably not at all, because these are not lawsuits. These are complaints sent to a different entity entirely, and from a court's perspective, the worst that can happen is an individual image is taken off a website you don't own. In reality, the worst that can happen is getting banned from a website because they don't want to deal with any more complaints or respond to cases where the complaints are untrue. In either example, though, it will probably not match the criteria written in anti-SLAPP laws, and you'd have to use less clear methods to try to respond to someone using these maliciously.

doublelayer Silver badge

I don't think that means what you think it does. It doesn't mean the law has no force until a year has passed, but that it must be fully operational exactly one year after enactment. Until then, social media can probably use the "still working on it" excuse for making the process more complicated and not immediately hopping to taking down the things they're told about, but they're still supposed to and can likely be pursued if they don't.

doublelayer Silver badge

It is probably the most fun the politicians' staff members* get. Come on, wouldn't playing the bacronym game be a little fun if you didn't have to do it too often?

* The ones who would otherwise have to do boring work like writing and reading the tangled words of legislation so their bosses don't have to, not the ones who are preparing for their next election or their social media war. In other words, the staff who do work, not the ones who play the politics game.

Infosec pros tell Trump to quit bullying Chris Krebs – it's undermining security

doublelayer Silver badge

I wasn't one of the three so far, but if I had to guess, I'd guess that people disagree that that's a strength even sometimes, and since you didn't explain why you think it is one, they think you're at least partially wrong.

Open source text editor poisoned with malware to target Uyghur users

doublelayer Silver badge

From the not very nice translation that Google Translate made of the Uyghur readme/changelog, this editor seems to have a few features that many text editors you could localize either don't have or that are specific to Uyghur, including:

OCR: I don't have this in my text editors, though you could add it.

Convert between writing systems for Uyghur.

Save to Docx: Most text editors don't bother with this. Localizing LibreOffice and trying to add these features as add-ons would be more work than the average text editor localization.

Features specific to languages that Uyghur users may also know including Kyrgyz.

You probably could bolt these onto another editor, but I'm not sure they're small enough that that's an obviously better option.

Swiss boffins admit to secretly posting AI-penned posts to Reddit in the name of science

doublelayer Silver badge

Re: “…the risks (e.g. trauma etc.) are minimal.”

"But if you're researchers, social scientists, experimenting on people, the first thing you do is obtain informed consent."

I think you're simplifying the ethics review process to the point of inaccuracy. Testing on uninformed subjects is done frequently, whether that involves bringing in subjects, telling them you're testing one thing, then testing something else*1, or testing on the general public without telling them*2. The review process would not dismiss either type of request simply because the subjects weren't informed. They would ask questions to determine the ethical consequences of not informing the subjects up front, and they might refuse permission when it's too sensitive. If you think this study violates those ethics as well, you could argue for it and I think you'd probably have a point, but if you think it's as simple as "they weren't informed so it would obviously violate the ethics codes", you don't know the ethics codes.

*1: For example, the famous study where people were told to go to another building and watched to see if they'd ignore a person needing help on their way. The subjects were not informed that they'd be tested on that, since the purpose was to see if they'd go out of their way to help, and they weren't informed beforehand that they'd see a person in (simulated) distress.

*2: Many studies involve setting up a situation in a public space and watching what passersby do in response. It's very common.

Generative AI is not replacing jobs or hurting wages at all, economists claim

doublelayer Silver badge

Re: When was 2024?

You're both right. The data isn't very old, especially given how long it generally takes to create new data. How many people have jobs is often on at least a month's delay as it is, and information about how companies are changing the jobs their employees do is often delayed longer than that if it's explained at all, so the data they've used is probably the latest they have access to. However, if modern LLMs had actually taken over and replaced employees, it wouldn't show up in the last set of data from 2022-2023; the LLMs of that age were much worse, struggling to string a paragraph together.

We won't get the full story until some companies actually try replacing workers with LLMs for a while. Having seen them used, I'm not expecting large changes. While there are a lot of places using them, the quality is still a problem, meaning that companies using them and expecting quality usually need to spend about as much time testing, rerunning, and correcting LLM output as they did doing the thing from scratch in the first place. Various people I know or work with have arrived at different places on the spectrum of how much LLMs are used, and I do know someone who uses LLMs frequently and nonetheless produces good code (he does complain that he has to try five times and then correct manually to get workable code, so I don't know how efficient he is). I'm still waiting to see how badly it fails when a company decides to trust LLM output more readily.

CNCF tells main NATS contributor Synadia that it's free to fork off

doublelayer Silver badge

Yes, and you could argue that merely adopting such a strategy is already forking the software, since it is now diverging from what others used to contribute to and use, forcing them to choose one approach: paying, or forking so they can keep using their own work in the way they want. These people have decided to add another level of stupid to the game, though. Most companies that do this have kept the trademarks and domains, so their version gets to keep name recognition. I must admit, I've never seen a company give those away before trying this.

Signalgate lessons learned: If creating a culture of security is the goal, America is screwed

doublelayer Silver badge

Re: Who is to blame?

Correct, and in that case, you've moved the endpoint. That can be helpful. If, however, someone got malware onto that endpoint, malware which either takes it online without you knowing or uses one of those ways Mordechai Guri is always making for getting data out of a computer that has no standard connections, then you have the same problem. Which is not relevant to the completely unrelated issue of adding a recipient who shouldn't be on the conversation, because whether they used one terminal or two, airgapped or not, is not anything we're talking about in this article or the comment thread you've replied to.

doublelayer Silver badge

Re: Alas, cryptography is not simple

"Little Jenny has nothing to hide. Prodding her to encrypt her e-mails only so that those of us who have good reasons to encrypt would not attract the attention of the Chief Magistrate was (a) not entirely ethical and (b) most certainly counterproductive."

I disagree on both points. In most cases, we did no prodding at all. Jenny who chose to use normal email doesn't have encryption in transit, but if Jenny decided to use iMessage because she has an iPhone and so does the person she's talking to, then she gets encryption anyway. She had to do no more work. Many nontechnical people want security and, when it's easy enough like downloading a certain app and remembering to use it, they use it. So I'm not sure there was as much prodding as you're suggesting.

As for its ethics, where we did mandate encryption and enforce it on people, it was in things like requiring HTTPS to access websites. In that case, enforcing security is ethical because it defends people whether they would have chosen it or not. Bob logs into his online bank account on a public WiFi network, and he doesn't understand or care why there's encryption on that connection? If that WiFi network is logging things, then that encryption prevents him from losing his money. He probably cares about this. Not only would he care if someone explained it and he listened, but since he would complain to his bank and expect them to make up the shortfall if someone did steal it, the techs working for the bank have enough personal interest in preventing it to justify requiring encryption. Once again, the heavy lifting is done by the browser writers and the server admins, not the user, so simplicity made it easy. We did make it a little too easy by advising users to look for the lock icon, meaning they sometimes trust websites as long as they have an HTTPS connection at all, but that wasn't worse than what we had before.

As for counterproductive, I don't see where you get that. Not totally effective is not the same thing. The increasingly common use of cryptography has brought significant improvements. It hasn't prevented people from unintentionally disabling it or doing something it didn't cover (most of which are reasons to increase how many things use it). Nor has it prevented people from doing stupid things they fully understood, which it never will be able to fix. I don't see what negative things it has done, and so far, you haven't named a single one. Your arguments so far have been to blame cryptography for things it has nothing to do with and to make a user-choice argument which, while an interesting philosophical point, isn't very relevant to cryptography's failures.

doublelayer Silver badge

Re: Who is to blame?

"For example, Signal users (both senders and recipients) HAVE TO HAVE PLAIN TEXT on their end point!!

And when that end point is compromised by MORE TECHNOLOGY (NSO/Pegasus malware), then the whole point of Signal is made useless."

And the alternative is? If I control your endpoint, then I have your communications. It doesn't matter if it's encrypted at rest or not (it is for Signal, incidentally, so your plaintext argument is not entirely correct), I can know what it is because it has to be decrypted to appear on the screen and I can grab it then. No cryptography can prevent that from happening. We have lots and lots of writing explaining how to try to prevent your endpoint being compromised in the first place, not that there's a simple, foolproof method of doing that.

"So....to your point......users of technology who would like privacy and security ARE FORCED to be more skilled with technology."

The point of making it simple when possible is to provide those who aren't skilled with technology with some level of protection. As with anything, those who are skilled with technology will get better results out of their technology. This is why, when an option arises to make something more user-friendly, I try to choose it so that more people can benefit from it. Once again, there is no way to simplify everything so that people who don't want to spend ten seconds can use it, but we can make incremental improvements and I suggest we do so.

doublelayer Silver badge

Re: Who is to blame?

I think you have misunderstood what they are saying and misapplied that to this situation. This situation is not about cryptography, and the ease of cryptography has nothing to do with the problems involved.

Cryptography itself has to be simple to use or people won't use it. I'm all for teaching everyone in the world about cryptography so they can use better systems, but that wouldn't be feasible and many of the people you could get into your educational centers are going to ignore you because they are the type who don't understand why they have to care about security and intend to ignore this whenever it conflicts with convenience. For people who know and care, there are many systems where the cryptographic mechanisms are conveniently configurable and can be used to create communication channels that have the amount of security they're looking for. For those who care but don't know, there are a lot of resources online to teach them how this stuff works, though some of that might benefit by having less technical versions. For those who don't care and don't know, that's why cryptography has to be built in with little or no user interaction required, because they will not try to do anything about it.

And all of this is completely irrelevant to this situation. Nothing here became possible due to a problem with cryptography. The cryptography in Signal, as far as we know, did exactly what it was intended to do: it encrypted the traffic between participants so that only those invited to the calls could read what was being sent. That's what cryptography is for, and had we made it more manual, they would have either done the same thing manually or they would have sent the same traffic unencrypted which would have been even worse. What is happening here is not a technological problem at all. The two problems involved are:

1. Sending data to the wrong person because they manually mislabeled that person.

2. Sending data to the right person, as in the person they intended to send it to, but someone they really shouldn't have informed.

Technology could try to fix problem 1, but it would have nothing to do with cryptography and might have more downsides than benefits. Technology cannot fix problem 2. Existing human systems can try to fix problem 2, and they did. The person trying to inform people who didn't need to know about military actions was aware that they didn't need to know. He was aware that the systems in use were intended to prevent that data being shared. And that is why he bypassed them. He intended this outcome, and cryptography could not have changed his mind or prevented him from doing what he did.

What the **** did you put in that code? The client thinks it's a cyberattack

doublelayer Silver badge

Re: Most inappropriate message

Appropriate in the sense of avoiding making customers angry, probably. Appropriate in the sense of proper interface design, no, in my opinion. The problem is not the informality or even the command, but that it didn't tell the user what they did wrong or what they should do differently. In fact, I don't entirely understand what this even means. What my mind immediately jumped to is those times where what looks like a simple modal has an OK and a Cancel button, but I can't tell what difference to expect depending on which one I press. I'm assuming your case was more complex than that, but since I don't know when it would appear, I would be wondering whether this was the equivalent of the abort/retry/fail situation where I was stuck in a loop until I terminated the program. If the users could do anything about the situation, then probably a better error message was possible.

Downward DOGE: Elon Musk keeps revising cost-trimming goals in a familiar pattern

doublelayer Silver badge

31.72689469541619 ...

Less precise, please.

doublelayer Silver badge

"Who would build a large wind/solar farm sans that infrastructure?"

It happens all the time. Some company finds a good place for one and thinks someone else will want that power enough to finance the power lines. The someone else either decides not to, or more likely they decide to do it but have delays the generation company didn't. Result: power plant without sufficient transmission infrastructure.

Even more frequent are the cases of intermittent excess power. Generating capacity is installed somewhere and has sufficient transmission for a certain area. Then the power requirement decreases and there isn't enough transmission capacity to send the excess power elsewhere. For example, a hot area where the generated power is expected to run air conditioning systems, followed by a period of cool temperatures where that load disappears, but there isn't a lot of transmission out of the area because that much excess wasn't expected frequently enough to justify the investment.

doublelayer Silver badge

Re: Have you listened

I listen to politicians directly all the time. Most of the time, if they know I'm listening, they make sure to all sound the same. Lots of agreements that they care about everything I care about. Not much detail about how they're going to fix any of those things. For example, most politicians explain that personal privacy is, of course, very important, because we don't want to be like those dictatorships that spy on everything. This is regardless of party and, in fact, regardless of country. In addition to watching what they say to you, I suggest you watch two more important things. One is what they say when they're directing their statements at somebody else, where they often show that they'll be willing to say something completely different. Sometimes, this is basically just the same thing and they're doing their best never to be clear about what they want but always to sound as if they are in agreement. Other times, they show their true beliefs and are clearer about what they're willing to do, things they didn't announce when they thought I'd be listening because they knew I and many others would see them as problems. For example, when that politician that was all against government surveillance starts adding the "unless" to the statement. It applies to you, but not to some group that obviously deserves the oversight.

The most important thing, and what we all get to see right now, is what they do when they've got power. Not what they say they're doing, but what they're actually doing. Continuing with the analogy, all the politicians who made pro-privacy statements and went on to vote for more powers of surveillance. I take it back, it can be somewhat instructive to link what they say they're doing to the actions they take to demonstrate how badly those things line up. This is the best way to know what a politician intends. It's unfortunate that to know that for sure, you have to give them at least some power to take actions. I have a feeling the statements you would quote from politicians you like would sound pretty good to me unless we're very far apart politically, but I could find other statements they made that aren't as nice, and their actions may not faithfully agree with either set of statements. Their actions outweigh any and all statements they make. That's why I watch them.

doublelayer Silver badge

It only makes sense to me if there is a large source of green energy without the transmission capacity to send it where it's needed. For example, if there is a large wind farm and the wind is blowing hard, but the transmission lines can't take that much power from it, then generating hydrogen from what remains in order to store it might make sense. That's not the only way to use excess power as a power reservoir, so depending on how efficient it is in comparison, even that might not be a good thing to do with it.

Trump’s 145% tariffs could KO tabletop game makers, other small biz, lawsuit claims

doublelayer Silver badge

Re: Sue him directly.

The difference between civil and criminal law is not only a thing in the United States. I don't know where you live, but chances are that it's exactly the same there. There is such a thing as private prosecution in many countries, but you can't do that just because someone has done something that harmed you financially, and where it's possible, it's rarely done because it's extremely difficult. That's when it's against another person, not when it's against someone who can bring every lawyer employed by the US government to defend him. Also, it isn't even allowed for federal crimes, so look for a state crime and hope that state allows it.

doublelayer Silver badge

Re: Damn.

"If you were completely convinced that the US economy is going down the drain, and the only way to stop it would be to bring some self-sufficiency back, how would you bootstrap manufacturing in the united states again?"

Step 1: Focus on what, specifically, you need to be able to make there. Don't try to make everything. If you try to make everything, you're going to run out of ways to promote that capacity before you've gotten far enough. It's also a bad idea because there are plenty of reasons why other countries might be better at making something. Unless you need it, the place that makes it is an enemy, or there's some reason to think that you could do a lot better at it than you are right now, it might be better just to let it happen where it's already happening.

Step 2: Figure out who does make those things. Are they your friends or your enemies? Study them and figure out why they can do it there and can't where you are. The process forks after this depending on your answer, but to avoid writing too much...

Step 3: Get more of whatever they have access to and you don't. If you don't have enough skilled people, figure out how to encourage education of more. If they don't have a regulation you have that slows them down, consider whether the regulation is necessary (don't just eliminate it, consider it). If they have particularly rare people, figure out how to convince some to move to you.

Step 4: Determine whether any of those differences are unfair. If they are breaking an actual law, one that's been written down and you can point to and prove they broke in a court that doesn't already agree with you, pursue that. If it's an international trade regulation, there is the WTO to handle that, well there would be if the US hadn't been blocking it for several years, but it can do some things. Here's where you might use some tariffs, as punishments for specific and real problems.

Step 5: If the rest wasn't enough, here's where you have to hand out cash. Private companies won't do something if it's not profitable. Creating an environment may not be enough in those cases. You may need to pay them to start. After a while, it becomes expensive to move what they started, and if you were successful with the environment thing, they won't need your help anymore, so those subsidies can be temporary. Tariffs to block all the competition as an alternative to subsidies usually won't work, but they especially won't work if the environment problems are still there. Even if the environment is good enough, tariffs will make it difficult for anyone to start if they import anything. A car company could have access to people trained in engineering, a place to build factories, and regulations making it easy for them to make and sell cars. If they don't have any of the materials with which to make cars because there is a tariff on those, they still won't build that factory. This is why step 1 is so important; unless you already make a bunch of steel locally, tariffs on steel and on cars are likely to prevent you from starting a car factory.

Governments aren't great at following this process. The current attempt is worse than usual at it.

doublelayer Silver badge

Someone could arrange it, but it's not going to make him resign, so a lot of people will probably be wondering why to bother, including many of those who would be best able to organize it.

As ChatGPT scores B- in engineering, professors scramble to update courses

doublelayer Silver badge

Re: But when deeper thought was required, ChatGPT fared poorly.

I'm sure there is, among people like you who insist on believing that they're conscious, intelligent entities when they're not. Among those who have spent at least five minutes understanding how they work, that interpretation is recognized as nonsense.

doublelayer Silver badge

Re: Yes

That's what I advise against. Confuse the text enough and you'll confuse students. I had one professor who did that. Not deliberately, he was just terrible at making himself understood. It would make it harder to cheat. You'd also get your homework back with an odd patchwork: 23 problems with a perfect score and 2 zeros because you solved some other problem. What you solved was clear enough, but evidently it's not what he asked for so no credit. Sometimes, you couldn't even figure out what he wanted even knowing that whatever you thought it was wasn't right. After a bit of this, I compared results with classmates, and about ten of us had all done the same wrong thing and gotten penalized for it.

You can rewrite the questions over and over until pasting them into common LLMs makes them spit out a wrong answer. The problem is that, at the very best, you've added a reading comprehension task of indefinite difficulty to every assignment you set them. More likely, you think you've done that but there are now errors in your phrasing that you don't know about.* That is unfair to students and it is your fault.

* Imagine, for example, being required to manually obfuscate code to make it harder to read. There's software that can do that automatically, but you don't have that. Your tools are a text editor, a compiler, and manual testing. How much could you do to the file and be confident that you definitely didn't change the behavior of the resulting program in the slightest? The same is true when you're rephrasing text. It's just a lot harder to see because it all still looks fine. Assignments are just code that gets interpreted by a more flexible compiler.

doublelayer Silver badge

Re: Deus Ex Machina

Wouldn't it have been simpler to give each student one unchopped bar and only cut two of the bars into five segments? Also, if it was easy to chop the bars into equal fifths, then it shouldn't have been too hard to cut 2/5 segments from the bars and hand those to all but one of the students, the remaining one receiving the remaining fifths.

doublelayer Silver badge

I'm guessing you had access to plenty of tools that could divide when you needed it, and that you did plenty of short division, the unofficial version they don't teach students but is useful for knowing whether your calculator result is logical. It's not that everything students are taught will be used in exactly the same form forever, but that they need to learn it in order to have the basic skills. For example, the stereotypical problem for computer programming students is writing sorting algorithms. In practice, most professional programmers won't write a sort; they already have one in the libraries they're using. Those who do aren't going to need to know about ten different ways they could sort, because most of those are more inefficient than something else, so they're likely to use quick sort, merge sort, or radix sort, and they rarely have to even choose among those because which one you use is directly related to your resource availability and data format. Only in particularly weird cases will they write something nonstandard. Students aren't taught to write and analyze sorting algorithms because they'll need to write and analyze sorting algorithms. They're taught it so they know how to write programs in general, and more importantly because they will need to analyze the performance of things that haven't already been subject to decades of research to improve them. The sorts are there as an example that demonstrates technique.
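
For a concrete (and hypothetical) example of that last point: in most professional code, "sort this" is a single library call, and the decades of algorithm research sit behind it rather than in it. A minimal C++ sketch:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> values = {42, 7, 19, 3, 88};
        // Typically an introsort under the hood; almost nobody ships a
        // hand-written replacement for it.
        std::sort(values.begin(), values.end());
        for (int v : values) std::cout << v << ' ';
        std::cout << '\n';
    }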

Adherents of LLMs seem to think that education cares about the product of the students' effort. It doesn't. It cares only about how they learn, and their work is used to make them learn things and check whether they have.

doublelayer Silver badge

Re: Maybe the problem here is one of understanding the REAL problem.

The problem is not with tests. It's not too hard to prevent cheating on things where you control the environment. The problem is that not all courses can do all their evaluations in the form of tests which, at their longest, might be composed of five-hour chunks. Sometimes, a student needs to do something that takes longer than that, meaning you have to let them do it somewhere else. If they're doing a research paper, then they need access to research materials, time to read all of them and figure out which bits are actually important, time and tools to do new analysis based on their research, tools which will include computers most or all of the time, and just writing the resulting paper will take longer than they would get in a testing session. I think your focus on mathematics and related courses may have caused you to think that what works to test students there will work for other subjects when it often doesn't.

doublelayer Silver badge

Re: Yes

The course evaluations probably aren't trash. How well an LLM can do on a course is very closely correlated to how well a student copying off the internet can do. There are a lot of courses where doing the work properly requires thinking, but it is thinking that plenty of students have already done. As an example, consider computer programming classes. The solutions to these are very much not multiple choice, but while they're introductory courses, there are only so many simple problems that can be set to teach the basics. The answer to "write a function in C++ which takes a two-dimensional array of integers and determines whether it's a valid Sudoku board" is not a simple one, but it's one which you can find online if you want to copy. The LLM can too, so chances are its version will be just fine.
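
As a rough sketch of the kind of answer that's one search away, and therefore also sitting in every LLM's training set, it might look something like this (the board representation is an assumption, since the assignment above doesn't fix one; 0 marks an empty cell):

    #include <array>

    // Check that a 9x9 board is a valid Sudoku position: no digit 1-9
    // repeated in any row, column, or 3x3 box.
    bool isValidSudoku(const std::array<std::array<int, 9>, 9>& board) {
        bool rows[9][10] = {}, cols[9][10] = {}, boxes[9][10] = {};
        for (int r = 0; r < 9; ++r) {
            for (int c = 0; c < 9; ++c) {
                int v = board[r][c];
                if (v == 0) continue;              // empty cell, nothing to check
                if (v < 1 || v > 9) return false;  // out-of-range value
                int b = (r / 3) * 3 + c / 3;       // which 3x3 box this cell is in
                if (rows[r][v] || cols[c][v] || boxes[b][v]) return false;
                rows[r][v] = cols[c][v] = boxes[b][v] = true;
            }
        }
        return true;
    }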

To prevent that, someone would need to come up with truly novel questions that are equally doable with the limited knowledge students of that level of experience have. It is not easy. Most attempts are likely to merely scramble the question in such a way that the LLM has trouble parsing it. So let's do that, problem solved. Not really, because by making things more verbose, we run the risk of confusing students as well and foiling our attempts to teach those who are trying to do the work.

Computer programming is just one example. The same applies to most things involving essays. A student in secondary school writing an essay explaining the effects of the Munich Agreement isn't likely to come up with anything that historians didn't already consider, but that's not the point and their grade is not and shouldn't be based on originality. Designing a curriculum that an LLM can't pass is similar to designing a curriculum where a student can't cheat. You probably can't, and if you manage it, you've probably lost plenty in order to achieve it.

doublelayer Silver badge

Re: Maybe

However, since we're talking about IT specifically, there are a lot of things that can't be tested in the time they generally let you close students in a room without letting them out and a lot of things that would never be done in that environment. If they're to write some relatively complicated code, then they will need more than a few hours and it's totally fine if they want to use a man page to get a function's specification or a web search to figure out what Clang meant when it emitted its cryptic error*. Most of the time, using an LLM to write it for you would be the same as asking your friend who already knows this stuff to do so; it might work, but you won't learn by doing it and it's therefore worth forbidding.

* Yes, theoretically they might eventually need to write code without access to the internet, meaning they can't get help understanding some of those. However, most of the time, they will have access. If the company's network is down, they can use their phone to check something like that. Also, the point of letting them look things up now is to give them the knowledge so that, when they actually don't have access, the chance is good they'll understand it because they've encountered something like it before.

AI training license will allow LLM builders to pay for content they consume

doublelayer Silver badge

Re: Utter tosh

"Why would it be a single licensing charge? An ongoing monthly royalty payment for any creator whose works have been used for training would be a far fairer method. And the creator should be able to set whatever fee they wish, or have their works removed from the training data."

You have to recognize the difference between something that would be fair and what these people are trying to do. A lot of things would be fair: the company is forbidden from using your content without your permission, you can set any price you like, they have to have ongoing permission, you can withdraw permission. None of those things are planned or will happen with this method because they are hoping that AI companies will voluntarily sign on to this plan. AI companies, meanwhile, are using the system of being allowed to use anything they want without anyone's permission or a requirement to pay for it. They don't want to accept any reduction in that and will only voluntarily do so if it is cheap and results in a decrease in legal risk to them.

The reason it will be a single payment is that, if it was ongoing, AI companies wouldn't agree to pay for it. They also wouldn't agree to removing it at any time because they can't remove it from their models after creation and because they have no interest in maintaining the systems necessary to find and remove it from ongoing training data at your request. The reason it will be a non-negotiated payment is that it would take forever to negotiate with each person in a group of millions for how much they want for an individual page, and because paying a fairly-negotiated amount would be more money than they have. The organization trying to sell all this content to them will either request these things and never get anyone to agree, or they will negotiate all at once for one tiny value because it means they don't end up a complete failure.

This is why this suggested method is bad. It will not achieve any of the things we need, nor are they necessary to remedy the illegal actions of AI companies. Existing copyright laws already implement all of this; it's illegal for the content to be used without permission and compensation and negotiations for those would have to be individual. All we need to make your preferences (which are also mine) happen is for courts to confirm that AI companies are not exempt from copyright law and punish them for their illegal actions. A licensing organization will not help this happen, and they will not try to organize something that makes you happy. They will try to organize something that makes them happy which will be more advantageous to the AI companies than it would be to you.

doublelayer Silver badge

Re: Utter tosh

I do think the first problem they'd run into is getting any AI company to accept that they need to pay for the data they think they can have for free. However, if they somehow managed it, I do think payments are a problem because, unlike the systems you mention, there is a very different scale involved. In three of the four options you list, the payments are amalgamated over time. For example, if I put an ad network on a site of mine, they add up all the ad views for all the pages of my site over a month, then send the payment for those views to me. Meanwhile, any company who paid for this would be using a pay-once for unlimited usage policy because it is required by the system they create; they can't count the number of uses of any document because who knows what even happened to it after the training process got it. A single licensing charge divided by everyone who had documents in the set is going to produce very small amounts that aren't going to recur. That is not convenient to pay. They could be honest and find a way to pay it anyway. I'm predicting that the people who have access to the lump sum are going to realize the personal benefits of not doing that.

doublelayer Silver badge

Re: How much would a LLM training cost?

I value my product at more than that too, but I can guarantee you that if you signed up for this service, they would negotiate for access to everybody at once and they would negotiate for a value that would end up being pennies for each participant. If they tried to negotiate for individual pieces, the negotiation would never end because of how many individual pages there are in that set and how little any AI company wants to actually decide which ones they care about. If they tried to negotiate a high price for everyone, nobody would agree to pay them. Therefore, expect that that's all the payments would be worth if you agreed to be represented by this bad idea.

The point of the lawsuit settlements is that they often get settled with what looks like a large financial amount, but that amount is small per participant and participants don't get all of it. For example, you might see a settlement of ten million $local_currency_units, which, when divided by the 400k participants would give everyone 25. That isn't a large amount for what could be a large offense, but they won't get that much anyway because in practice, it actually goes like this:

Settlement amount: 10M

Lawyer's fees: 8.3M

Processor's fees: 200k

Cost of posting notices to 100k people: 50k

Remaining amount: 1.45M

Remaining amount per person: 3.625

Expenses for delivering payment of that size, per participant: 1.439

Actual amount received per participant: 2.18, sometimes in some inconvenient form like a discount voucher

doublelayer Silver badge

Re: How much would a LLM training cost?

I think what they were saying is that this license won't work even if the AI companies accepted it. And they would be right. Their financial objection is not the only reason it won't work. I think they won't even get to the stage where that would be the problem. However, if they did somehow get to that stage, it would be the problem. Distributing payments that amount to a few pennies to millions of people is so expensive that it's not worth doing, especially when the people in charge of it realize that if they try, nobody really gets anything, and if they don't, nobody really gets anything except them who get something spendable. See also the settlements reached in most group or class action lawsuits.

doublelayer Silver badge

Re: Not read it all

As far as I can tell, the system works like this:

1. This group, or someone else, gets the lucky position of the only licenser who the AI companies will deal with.

2. If you use their license, then they negotiate the price for access to your content. That price will be very low, and they will keep most of it.

3. If you didn't use their license, then clearly you weren't interested in getting funding, so no payment. Your data still gets used.

They seem to have no idea how they'd make AI companies comply with their license when they ignore much clearer rules about not using things they don't have the rights to. Neither do the nonprofits appear to have any mechanism for detecting misuse. My guess is that they're hoping to get a large group of signatories so they can try to negotiate with AI companies in bulk, but if that turns out to be true, I expect that they will fail badly and we will never hear of them again.

Asia reaches 50 percent IPv6 capability and leads the world in user numbers

doublelayer Silver badge

If they are routed in an internal network or even a separate internet that only some places connect to, then the addresses don't have to be globally unique for the regular internet that they're explicitly not a part of. Of course, they may use their massive ranges for that purpose, but it doesn't change either part of my statement: they don't need ownership of them to do it, and releasing them wouldn't fix the annoying problem that we want to use more IPv4 addresses than we have available.

By the way, if you have a citation for them being in use, I'd appreciate reading it. I wouldn't find it surprising if they didn't, in fact, have a need for 13 /8s and several smaller blocks, but they were given them, and giving them back is work that they don't see a reason to do.

doublelayer Silver badge

It's really hard to answer because a lot of surveys do things like looking at an address and seeing if it responds, which means that a firewall that restricts what it will respond to may show it as unused when there is a system using it but you can't see it. We do have plenty of empty space in IPv4, for example all the addresses the US military has and doesn't use, which could be taken off them and used to cover about 15% of India's population, but I don't know how to take stuff off the US military without them getting annoyed and they have a lot more ways of making their annoyance my problem than I have of doing the reverse.

doublelayer Silver badge

Re: Inflection point?

Fortunately, very few things I've ever seen have 5 GHz only, so in your case, you can always continue to use 2.4 GHz. I also want the option because I've lived in places where there was far too much noise on 2.4 GHz. Not much of a problem as I simply disabled it for my network, forcing everything onto the much more functional 5 GHz space, but for the occasional cheap device that didn't support that, it was a pain.

doublelayer Silver badge

Re: And Malaysia

The bare minimum is a /64, and that would be fine for a lot of people. I see no problem in ISPs handing out a /60 instead of a /56 unless requested otherwise. So you can only create 16 subnets containing as many addresses as you could ever use with their default, unless you choose to allocate addresses using a different method in which case you can divide up that space in any way you want? I think most users will survive that. Unless they forbid large enterprises from having more than a /60, I don't think there is any problem with the policy.
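
The arithmetic behind those numbers is just powers of two: a delegated /N prefix contains 2^(64 - N) possible /64 subnets. A throwaway sketch:

    #include <cstdint>
    #include <initializer_list>
    #include <iostream>

    int main() {
        // /64 -> 1, /60 -> 16, /56 -> 256, /48 -> 65536
        for (int prefix : {64, 60, 56, 48}) {
            std::uint64_t subnets = 1ULL << (64 - prefix);
            std::cout << "/" << prefix << " -> " << subnets << " /64 subnets\n";
        }
    }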

European biz calls for Euro tech for local people

doublelayer Silver badge

Re: UEFI

"Is it true that UEFI secure boot standard means that an American company has the final say over what is allowed to boot on any PC with it enabled?"

Depending on how you want to look at it, you could claim that, but it is not the clearest or truest way to describe the situation. Lots of open source bootloaders have signatures allowing them to boot, and those are not checked against some database when they're tried. If the companies that can sign them all decided they'd never sign something again, all those things would continue to boot as normal and any other system would just have to include any one of them as a first stage to work without difficulty.

"And that there is a lot of pressure for hardware to prevent people from turning it off?"

That one is much easier: no, there isn't. Several manufacturers turn it off by default. Those who have it on by default still provide a simple switch in BIOS settings that turns it off. If the situation I described happened and you wanted to boot something that wasn't signed, you can turn it off in about ten seconds and go on your way.

UN says Asian scam call center epidemic expanding globally amid political heat

doublelayer Silver badge

Re: Hmm... China again

Most of this is not Chinese government managed, and at least some of it is specifically opposed by them. There are plenty of people in China who have access to the money needed to fund and manage a criminal enterprise like this, and there's been some agglomeration from historical successes, but scam networks are international. Even if China did have the ability to cut off all Chinese nationals running such things, it wouldn't stop the many others who would take over the infrastructure they've already set up and continue to grow it. China doesn't have many tools that can't already be used by our own countries, and those tools mostly involve doing police work until you find where they've set up a facility and convincing the host country, which is almost never China, to go in and shut it down, then repeating hopefully fast enough that it costs them more to make a new one than they gain from it.

It takes one click to join Uber One, but quitting might need 32 actions

doublelayer Silver badge

Re: Microsoft next

Perhaps you didn't notice it from the article, but it isn't legal in the US. That is why the US is trying to stop them. The problem, which is a problem in a lot of countries, is that there are a lot of protection laws that don't get enforced. That goes for the EU as well; I'm still waiting for any meaningful enforcement of GDPR, for example. The FTC occasionally pursues actions, halfheartedly, against a few large companies, but that doesn't really count either, especially as most of these get dropped and forgotten about before any consequences arise.

Cursor AI's own support bot hallucinated its usage policy

doublelayer Silver badge

You should be able to and you should do it, but in my experience, neither of those shoulds are done frequently enough that you can count on it. It is far too easy for bugs to slip through code review. It's still worth doing it, because sometimes a bug that the writer of the code can't see for the life of them jumps out immediately to a reviewer, but it doesn't always happen. Meanwhile, a lot of people with experience are used to reviewing code written by someone who has already eliminated the obvious bugs because those wouldn't have run right, experience that isn't helpful when they're reviewing code that could have any size of bug in it. There are also plenty of people who don't review code as thoroughly as they need to because they prefer speed. I admit that most of these negatives would probably be correlated with people who don't care that the LLM produces wrong results, but there may be some people who realize they're not going to do the testing necessary to confirm that the output wasn't riddled with bugs.

doublelayer Silver badge

Re: Two possibilities

It would, which is why I called the LLM version "unacceptable". It's far from new though. Lots of companies have had the help bot where you type your question, it runs some algorithm against the FAQs, and sends you to one that often has nothing to do with your problem. Maybe the LLM can more accurately connect queries to canned answers.

doublelayer Silver badge

A great advertisement for their service

Meanwhile, by having an LLM hallucinate a support result, I hope they have demonstrated the randomness of these models to their customers. Were this a normal business, that would just show that the company is lazy in a way that can ruin your experience. But this company isn't normal, it's one that uses AI to help with programming. I hope the customers are now thinking the obvious question: if it messes up this badly with a simple support question, what is it doing to the code I give it?

doublelayer Silver badge

Re: Two possibilities

I'll take the other side of that and bet on 2. Customer says they're receiving a message, is this correct? Most of the time, the answer to that is yes. Throw that question at a bot told never to say it doesn't know the answer and to pick the most likely result, and you'll often get the answer yes. Similar with consumption limits. Tons of services have those. This service probably has them too, just on different aspects, so the bot has some data from its training data saying that there is indeed a limit. Parrot that back with incorrect context and you've got a plausible reply that's completely wrong. Which is why LLM support is pretty much unacceptable unless it's a voluntary opt-in with warnings in front of it, in which case, is it really worth building?

Hacking US crosswalks to talk like Zuck is as easy as 1234

doublelayer Silver badge

Re: After the laughing...

Bluetooth is packetized on 2.4 GHz. That gives you lots of problems. You're going to have to work very hard to connect over distances of 3 km just because the signal is likely to be attenuated by everything in between. You can turn your transmit power up very high if you want, but the light controller won't. Normal Bluetooth connections use collision avoidance so it's not hard to identify the signal you're interacting with, but it isn't trying to avoid collisions with something 2 km away. If you're able to receive your signal at that distance, you have to manually tell it apart from all the other signals between you and it, and you have to do this for every packet it sends. It's not just Bluetooth devices that will interfere; 2.4 GHz WiFi and anything else using the unlicensed band can get in the way.
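
To put rough numbers on the attenuation problem, here's a free-space path loss estimate (real obstructions only make it worse); the figures come from the standard FSPL formula, not from any measurement of these crosswalk boxes:

    #include <cmath>
    #include <initializer_list>
    #include <iostream>

    int main() {
        // Free-space path loss: FSPL(dB) = 32.44 + 20*log10(d_km) + 20*log10(f_MHz)
        const double f_mhz = 2400.0;  // the 2.4 GHz band Bluetooth uses
        for (double d_km : {0.01, 0.1, 1.0, 3.0}) {
            double fspl = 32.44 + 20.0 * std::log10(d_km) + 20.0 * std::log10(f_mhz);
            std::cout << d_km << " km: about " << fspl << " dB of path loss\n";
        }
        // Roughly 60 dB at 10 m but about 110 dB at 3 km, which is already
        // beyond the link budget of an ordinary Bluetooth radio before you
        // count walls, bodies, and everyone else's traffic on the band.
    }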

Even if you can fix both of these problems, you still have to have someone with relative physical proximity wherever you want to attack, and it's not clear how much control you may have over the box if you succeed. If anyone knows that you're likely to do it, they can detect a signal that's much more powerful than 2.4 GHz transmissions normally are, identify that you're doing it, and work out where you are.

CVE fallout: The splintering of the standard vulnerability tracking system has begun

doublelayer Silver badge

Re: We've been here before

What's your proposal? Because it seems the IT people didn't do anything from scratch. They formed an organization to assign codes to things, the same way that biologists created the International Commission on Zoological Nomenclature. The two organizations issued codes governing how things could be assigned, then outsourced the process of actually assigning those codes to the people who do it. The two processes seem to work in very similar ways.

Google, AWS say it's too hard for customers to use Linux to swerve Azure

doublelayer Silver badge

Re: how many

Depending on scale, that can be true, but I've frequently experienced the opposite. Colos near me have relatively high base prices for having a single server in them. Even ignoring the cost of the server and costs to manually go repair it, just having the space there could pay for several VMs. How cost-efficient the approach is tends to depend on how powerful a server you would install in the colo if you could. If it's something massively expensive, then buying it and renting space to store it will probably be less expensive than renting the equivalent VM. Many small businesses or projects don't have that requirement and can work just fine on smaller VMs. When I've done the numbers, it often ends up being more efficient, sometimes substantially so, to rent those from some cloud provider rather than obtain a basic server and find a place it can operate.