* Posts by doublelayer

7688 publicly visible posts • joined 22 Feb 2018

Ex-Twitter execs sue over $1M+ in unpaid legal expenses

doublelayer Silver badge

Re: Have an upvote

You are demonstrating your ignorance on a number of things. Your last comment did you no favors at all:

if former Twitter execs are later charged with felonies*, should Twitter be expected to cover their legal expenses?

*Not being a lawyer or an expert on the Constitution, how things like conspiracy to violate the First Amendment would actually get charged..

1. Violations of a constitutional amendment are not felonies or in fact crimes at all. This is related to

2. Twitter's executives cannot violate the First Amendment in the U.S., which applies only to government action, as the text of that amendment clearly states. I'm sure you've seen it, because everyone on the internet, regardless of where they live, has seen this argument before and had the text quoted at them. All of this means that

3. If a law violates the amendment, the courts strike down that law and nobody goes to jail.

doublelayer Silver badge

Re: Have an upvote

"It would not necessarily mean having to commit money into escrow but having their accounts frozen so that payments by then need approval of the plaintiffs."

Whenever you consider an idea like this, check whether it still works when reversed. Let's say Musk abandons his longstanding policy of making logical, well-considered decisions not at all based on who he happens to be unhappy with this morning, and starts suing anyone he can think of. Say he sues that guy who told him he was wrong about how the Twitter app worked, for breaching an NDA: by releasing some information, that person has allegedly cost Twitter something priceless. If an action like that could freeze the defendant's money, you could prevent them from paying for a lawyer, force them into picking the cheapest lawyer if the court refused that last idea, or deny them money for simple expenses. It works the same way with a company, probably worse, because companies are less sympathetic and have far more bills to pay than the average person. That would be a great weapon for winning court cases.

A slightly better version would require the approval of a judge rather than the plaintiffs. While better, it's still bad; now you just need a judge willing to let you use this mechanism as a weapon, and you can use a suit to lock up someone else's money, probably also crippling their ability to contest the case. If it is dangerous and produces injustice when a big company uses it against a small one or an individual, it should not be allowed. Yes, when the big company is on the other side, a bigger hammer may feel nicer, but that won't justify the damage done in all the other cases.

Starlink opens final frontier for radio astronomers

doublelayer Silver badge

Re: Sorry but no.

I suppose, but Starlink didn't invent small satellites or satellite constellations. Both concepts existed before it, at least as far back as CubeSats: the form factor was proposed in 1999 and the first ones were launched in 2003, so that's not something SpaceX can take any credit for. They may be able to take credit for making launches cheap enough to make that many satellites feasible, but even if I were willing to make that leap in logic, it would be SpaceX's rockets, not Starlink, that deserved the credit.

Just because on-prem is cheaper doesn’t make the cloud a money pit

doublelayer Silver badge

Re: Risk

Risk management should be, but often isn't, a focus for the administrators and managers of any system, computers or otherwise. If you decide cloud is your architecture, you have to decide whether you want geographic redundancy (global outages are much less common than single-region ones), multiple cloud providers, some cloud and some backup servers, or none of those because an outage wouldn't hurt you badly enough to justify them. Similarly, you have to consider most of the same risks if you're housing your own equipment: how much generator capacity do you need, what if the generator fails during a power outage, what happens if your DC is hit by a natural disaster, what happens if it's mismanaged? Some companies decide they really need two redundant datacenters in different locations with independent administration, and some don't. That can cause an outage as well, and that outage can be as bad as a cloud-based one. Anything can break, and anything important needs a book of plans for what is likely to break and what happens then.

doublelayer Silver badge

Re: It's not just about the technology

Businesses have ways to deal with large capital expenses; they have to for all sorts of things, and they can manage it for computers as well. Shifting capex to opex may look nice on the paperwork, and it may make a specific financial statement look good when they make the switch, but it usually doesn't do much for the business's overall financial situation.

The primary case where the difference matters is when a business needs a lot of computers right now and the cost of outright purchase would be higher than temporarily renting them. In other words, spiky use cases with extreme scaling requirements are the most obvious place where cloud is useful. In most other cases, treating the hardware as a capital expense isn't a crisis: if the deployment is small, the expenses aren't very large, and if it's a lot of servers, you can get much the same expenditure flow by buying replacement hardware on a staggered schedule.
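As a back-of-the-envelope sketch of that last point (all figures invented for illustration), staggering replacements turns one big periodic bill into a flat annual spend of the same total size:

```python
# Toy comparison: replacing a whole server fleet at once vs replacing
# a quarter of it each year. Fleet size, unit cost, and lifespan are
# made-up numbers, not real pricing.
FLEET, UNIT_COST, LIFE_YEARS = 100, 5_000, 4

# All-at-once: the full bill lands in one year, then nothing for three.
all_at_once = [FLEET * UNIT_COST if year % LIFE_YEARS == 0 else 0
               for year in range(8)]

# Staggered: replace 1/LIFE_YEARS of the fleet every year.
staggered = [(FLEET // LIFE_YEARS) * UNIT_COST for _ in range(8)]

print(all_at_once)  # [500000, 0, 0, 0, 500000, 0, 0, 0]
print(staggered)    # 125000 every year: same total, smoother flow
assert sum(all_at_once) == sum(staggered)
```

Same eight-year total either way; the staggered schedule just behaves like an operating expense without renting anything.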

Cloud can be useful and there are times when it can be cheaper, but how it appears on the statement isn't particularly important in most cases.

doublelayer Silver badge

Re: Obvious???

"I think it should be obvious that if I am paying someone else to do something for me, it's likely costing them in the ballpark of what it would cost me, plus they're going to add a bit on top. [...] It really is economics 101, straight out of Adam Smith."

That is so entirely contrary to the core point Smith was making that it really needs countering: specialization, which Ricardo later sharpened into comparative advantage. I could generate electricity, but I don't have the skills to do it well, the scale to do it efficiently, or any customers beyond my own use. Hence, it would not cost me about as much to generate electricity as it does to buy it; even if I value my time at nothing, I would have to spend significantly more currency per joule generated. That's not automatically true of everything, but the claim that the costs are about the same is not merely non-obvious, it's flat-out wrong.

Smith made this point at length, arguing against governments' tendency to use this flawed logic and thereby restrict the efficient use of resources. To use the famous example, he bought from the butcher and the baker not only because he didn't want to spend time butchering and baking, but because they were better at it than he was; years of skill, and a town willing to buy, help.

This doesn't mean you have a comparative disadvantage in running servers. You're posting here, so you're probably in the group that has an advantage. That is why your employer hires you rather than deciding "well, it will cost us about what they'd charge, and they'll want a salary, so let's skip them". You're better able to administer systems than they are, so they pay you to do it; and if you decide that a certain way of housing and operating the servers is most advantageous, that can be true whether you do the hosting yourself or hire someone else to.

doublelayer Silver badge

"Hosting your own hardware will always be cheaper in the long run than hosting someone else's simply because you're not paying the "someone else" tax"

That's very flawed logic, and it doesn't work with most other things. Consider your response if I said one of these things:

"Generating your own electricity will always be cheaper in the long run than buying someone else's simply because you're not paying the "someone else" tax"

"Growing your own food will always be cheaper in the long run than eating someone else's simply because you're not paying the "someone else" tax"

"Carrying your own packages from your house to your recipient, even if they live on another continent, will always be cheaper in the long run than using someone else's delivery service simply because you're not paying the "someone else" tax"

I'm guessing you don't own a farm, a power plant, and a global mail system. You might own one of those things, but usually only if you're the someone else providing the service to people who would definitely not do better trying it themselves. None of this automatically means on-prem is more expensive, and for many use cases it will be cheaper, but your reasons for believing it are bad. If you defend your stance with the wrong reasons, it will harm you when those reasons are proven invalid, even though right ones could have done the trick.

Is it time to tip open source developers? Here's one way to do it

doublelayer Silver badge

The theory is that if everyone relies on the leaf component, it gets lots of small donations from everyone who uses anything upstream of it. They more or less have to do it that way, because otherwise the incentive is to make something small but slightly useful and get it added as a dependency of as many other projects as possible, in the hope that people will use them and you'll get a cut of their donations. That could still happen under a plan like this, but if the payments are small for things that aren't used directly, it's less worthwhile to attach a do-nothing library than to build something more useful.
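One way such a split could work, sketched below; the package names, the tree, and the 50% keep-rate are entirely made up, not the proposal's actual mechanism:

```python
# Hypothetical tip-splitting: each package keeps a share of whatever
# reaches it and divides the remainder equally among its direct
# dependencies, so deeply nested do-nothing libraries see only a sliver.
KEEP = 0.5  # fraction each package keeps (invented figure)

deps = {
    "my-app": ["web-framework", "json-lib"],
    "web-framework": ["left-pad"],  # the tiny "leaf" library
    "json-lib": [],
    "left-pad": [],
}

def split_tip(package: str, amount: float, payouts: dict) -> None:
    """Recursively distribute a tip down the dependency tree."""
    children = deps.get(package, [])
    kept = amount if not children else amount * KEEP
    payouts[package] = payouts.get(package, 0.0) + kept
    for child in children:
        split_tip(child, (amount - kept) / len(children), payouts)

payouts = {}
split_tip("my-app", 10.0, payouts)
# my-app keeps $5; web-framework and json-lib get $2.50 each;
# left-pad, two levels down, ends up with only $1.25.
```

The decay per level is what blunts the incentive to wedge a trivial library into everyone's dependency tree: indirect positions pay progressively less than building something people use directly.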

doublelayer Silver badge

Re: Good luck with it, but

That sounds very similar to the tip proposal. Both of them require that somebody who uses the code decides to voluntarily pay well for it. I'm not sure how yours differs other than the fact that you're focusing on big companies whereas the tip proposal works for any size of company. Unless either is widely adopted, it's likely to produce some relatively small donations.

It's also likely to limit which projects get support; the big tech companies already donate large (for us, not for them) amounts to open source projects, but not to every project that gets used somewhere in their company, since that's less tracked. That's why the Linux kernel gets a ton of donations from big tech companies, because it's a single, large project that they understand that they rely on a lot. The tip proposal appears to be focused on automatically following dependencies to distribute the funding to all of them, which will work better for the deep tree that is Node dependency hell and not bad for a structured package repository like Python has, but not so well for a bunch of libraries pulled in from GitHub or internal clones.

Samsung reportedly leaked its own secrets through ChatGPT

doublelayer Silver badge

Re: They copied all the source code, entered it into ChatGPT, and inquired about a solution

That might be better, but it's incompatible with the methods they're using, and prohibitively expensive to do more than about once a year given how they've been building new models. Part of the reason is that the models don't keep every byte of their training text (although they retain a lot of it), so they can't tell how to weight new text because they don't know the sample sizes behind their various calculations. Maybe AI researchers will come up with model types that handle this better, and if they do, I won't be surprised to see those methods tested and deployed, but for now the model types in use do all their training from scratch.

doublelayer Silver badge

Re: Seriously....

"Either Twitter didn't want to be seen to crap on disabled people or they didn't know he was disabled. I think probably the latter."

So the contract specifically saying it, agreeing on accommodations, signed by the company didn't tip you off that they might have known a thing about it? Musk probably didn't know, but the company did. This also suggests they did, in fact, do some investigation into what they were buying and they thought it wasn't going to cause problems they couldn't handle. And, since he was in a job that involved a lot of managing between teams, his mobility restrictions probably weren't that big a problem until Musk was looking for an excuse and still chose wrong.

I'm also not sure why you think that Twitter agreeing to let someone work from home when they were using a policy where everyone could work from home and for an employee in a different country who presumably didn't want to move across eight time zones counts as "woke". However, since you used the word, I can safely assume that your logic would be incomprehensible or stupid and thus I don't have to look into it too deeply.

doublelayer Silver badge

Re: How do I upload a PDF file to ChatGPT?

Maybe they just did a select all on the text layer and pasted that text into the chat window. It wouldn't help with images and some layout would have been lost, but the words would have gotten through.

doublelayer Silver badge

"I drummed into these newbies over and over again, never say anything in email that you wouldn't be happy to see on the front page.

The same is true today, but also includes web forms of all descriptions. It just defies common sense that people would feed anything confidential into a system they do not control"

Maybe they did it because your advice was so absolute as to be useless. Before those businesspeople used email, they would speak the same information over the phone, on lines they also didn't control, or send it through the post, which they likewise didn't control. Neither was encrypted, both could be intercepted as easily as email, and there was no more secure alternative available short of in-person-only meetings. Some might have taken your warnings to mean that email was especially bad at security, which, compared to the existing mass-market technologies, it wasn't. Those who understood that all of them were insecure probably considered your advice, realized they hadn't been offered a better option, and went with email anyway.

People have a poor understanding of security, most often due to not caring to learn about the details. It doesn't really help to give them unrealistic standards that they won't be able to meet, because they'll come to the incorrect conclusion that computers are never to be trusted and they have to decide between living off the grid or having no security. If you present someone with that false dichotomy, they're likely to be among those who decide that, if everyone is spying on them and there's nothing they can do about it, why not hand over all their data to Facebook, Google, and any page that suggests you log in, because at least that's convenient.

User education is hard. I get the idea that it might be better to overstate the risks in order to err on the side of security. The problem is that simplistic answers may lead the users into errors they wouldn't make if the more difficult work is done or if the IT security team makes the decision for the company and uses network blocks and warnings to enforce it.

doublelayer Silver badge

It's the other way around. It doesn't keep the data and reprocess it because it isn't designed to be easily retrained on new data. In order for this type of model to get new data permanently added, it has to be added to the training set and the training routine run over again. That full retraining takes a lot of time and money, so they try to do it as little as possible. Technical limitations prevent them from automatic reingestion.

doublelayer Silver badge

Re: They copied all the source code, entered it into ChatGPT, and inquired about a solution

"And if everything that gets asked of it also gets re-ingested back into the model, then anything commercially sensitive will automatically become available to everyone else who uses it, making it a total no-go area for anything to do with your business if you have any sense."

Nothing said during a session is ingested into the model, because the model can't do that. It is trained once, and putting more data in means training it again from scratch, so that rarely happens. That's why GPT tends to be at least a few months out of date and can't explain anything that happened recently. It also means that if you feed it a fact it didn't already know, that fact won't show up in anyone else's conversations, nor even in your own later ones. That particular risk is not real.

What is still a risk is that you've sent a pile of data to another company without analyzing what they say they'll do with it, and what they actually do. Nothing prevents them from using it to train the next version of the chatbot, getting hacked, or selling your conversation history, so unless you have a legal contract, the risk remains.

Google: If your Android app can create accounts, it better be easy to delete them, too

doublelayer Silver badge

Re: Is this the same Google

No, Google does it for cash and lovely data, but it also is the way they synchronize paid app purchases across devices. They weren't doing anything for our own good, but along with their data collection comes some features that most users will want.

You don't have to do that, though. You can use an app like Aurora Store (available on F-Droid) to access the Play Store. You can install apps anonymously, at least until Google breaks it, and you can use it to obtain APKs for commercial apps, which still require your own account to purchase but not to install. This may break if the app concerned expects you to have a Google account or uses one internally; for example, I installed an app that uses Google Play for in-app purchases, and it was not going to be happy unless the system had a Google account configured. I avoid Google tracking whenever I can, and Aurora helps make that easier, but there will always be things you can't do if you're using Google's platform without turning on their spyware.

doublelayer Silver badge

Re: So....

Just because they would have to delete the account doesn't mean they could take your money, or would simply make it vanish. Taking it is still illegal, so even if they interpreted this rule literally, it would just make the process of closing an account faster. If they actually deleted the account without finding out where you wanted its contents sent, the money is still legally yours and has to be returned to you; if they can't find you, they hand it to the government, which has offices for returning unclaimed property (not very efficient ones, but they exist).

doublelayer Silver badge

Re: So....

"So, if I delete my Barclays banking app account, then all my bank accounts will be closed and the money disappear?"

No, if you push a button marked "delete account" on one of your bank accounts, then that account would be closed and the money would have to be transferred somewhere else. If you happen to have a separate account just for online access, and the bank doesn't close the rest automatically along with it, then deleting that account would just take away your online access while the accounts with money in them stayed around. I doubt you'd even be able to do either of those; since when can you set up a bank account entirely in an app? Whenever I've done it, there have been forms to fill out, some doable on a website but in some cases requiring pen on paper in one of their branches. If the app doesn't let you open an account, it isn't required to close one.

Thieves smash hole in wall to nab $500K in Apple iKit

doublelayer Silver badge

Re: Process

Which part? That they serialize parts so you can't swap them between devices? I'm afraid that's old news, not a conspiracy theory. Their stated reasons were deterring the use of stolen parts and something vague about customer safety which they didn't explain, possibly because it would be unlikely to convince anyone who didn't already accept their reasoning. Did I make another claim you found paranoid, conspiratorial, or flawed?

doublelayer Silver badge

Re: Process

That will be the excuse Apple provides for why phones have to have all their parts locked so they can't be used as spares. It will make some sense when they say it, which is why they won't mention that it also brings them plenty of business unlocking those components in devices that weren't stolen. Depending on the models, the criminals might be able to use some of the parts as spares, but many pieces will be unusable once removed, or useless because of serial locking.

Child hit by car among videos 'captured by Tesla vehicles, shared among staff'

doublelayer Silver badge

Re: "Tesla driving through a residential area at high speed and hitting a child on a bike"

"(There's so much hate for Tesla around here. Why?)(BTW -- I don't own one.)"

There are a lot of reasons that account for that, and it's hard to know what a given person's might be. A lot of it is probably related to the fact that it's a Musk-owned company and he is a controversial figure. Amusingly, it used to be that Tesla was automatically opposed by people who hated that the cars were electric, but Musk's recent political moves have flipped the groups that tend to hate what he does. Some people may also think the cars are poorly designed, and I've certainly seen those arguments (at this point, I should probably say that I don't own a Tesla, and for that matter I don't own any car and thus haven't compared models).

Then you have people who have strong views on self-driving technology, which Tesla has at least talked a lot about. Some object to the fact that Tesla's software is limited and has safety problems but yet is running on the roads today. Some may also have a problem with the promises of improvements, all of which have turned out to be lies. There is another group who hates self-driving altogether and would oppose any car that attempts to build it.

There are probably other reasons, such as people who oppose the subsidies that Tesla has received, people who disapprove of the repair policies of the company, and people having strong opinions about which car companies are good or bad and whether expensive cars are better. I'm specifically bailing out on that last one because I know little about the comparative differences between car brands and will make up my mind on those if I buy a car. I've probably missed some others and each detractor probably has multiple reasons that combine to produce their attitude.

doublelayer Silver badge

Re: Human error

As long as you collect a pile of data, misuse like this is possible, so the solution tends to be not collecting data you don't need, like video when the car is off. I'm also not convinced they got much use out of video when the car was on, and there's no way I would have opted in to that. Some misuse will happen wherever there are humans, but we can still take relatively basic steps to limit how much misuse anyone can do.

Techie called out to customer ASAP, then: Do nothing

doublelayer Silver badge

Re: This is a job for .... Justin Case!

"If the people that can cause the contract to be cancelled are all out on holiday, that's a good point to do the risky stuff."

That depends a lot on what those people are like. Are they the kind of people who will freak out if they see a problem, but if the problem's fixed by the time they show up, all is good? Then you would be right. Are they the kind of people who want to see what is happening and will be annoyed if you change things when they're not there to monitor? Are they the kind of people who value you more if you're present and look busy, so having people there to do the change while they can see is likely to give them confidence that you're productive and good at the job? In either of those cases, it's better to wait until they're present.

I'm referring more to the damage that's really done by a system being down. Managers yelling at you isn't always fatal. Customers furious about an unplanned system malfunction can be worse. If your goal is to optimize for causing the least damage, there are reasons not to do that on a holiday as I described in my last comment.

doublelayer Silver badge

Re: The BOFH had a term for this

Rule number zero of troubleshooting (and design):

User behavior, and users' ability to miss details, are less reliable than mechanical parts, and therefore the first place you should look if there's a problem.

Before checking whether the switch isn't engaging right or the cable is not working, check that the switch has been set to on and the cable is connected to the right things and those things appear to be working. It's related to the first step of troubleshooting: verify that there is a problem and that you understand what the problem is, not what the report said.

doublelayer Silver badge

Re: This is a job for .... Justin Case!

"Stupidly? If there's downtime to make the cut-over it's a very sensible time."

Probably not. To do a proper switch, you need plenty of staff there who can make the switch, quickly back out the switch if it goes wrong, and detect problems that would either be big enough to require rolling back or making emergency changes to the system. That's a lot of staff. You might also need the people who managed the switch if there's any part of this that would need approvals. Making all of those people work over a holiday wouldn't be popular when you could try using a weekend for it instead. This is all if all the people needed for a switch are in your company. If there's a chance you need to call someone else for support, a holiday is not a great time to rely on that.

There is probably a lot less downtime from the perspective of how many users are on the system, but there are also far fewer people there actively working on it. It's not guaranteed that failing the system during a holiday is safer than failing it at another time, and if the reduced staffing means the fix takes longer, it can be worse. So there's an operational reason not to do it, and there's another reason in not having your technical staff furious that they had to work a holiday while the managers who decided it didn't.

Welcome to open source, Elon. Your Twitter code just got a CVE for shadow ban bug

doublelayer Silver badge

Re: Surprise two-way flow

I don't know if he made clear statements either time, but it's obvious which side he'd be on now. He now owns a company that would profit from news and has gotten into fights with real journalists when they had the audacity to report what he said and how others responded. For both reasons, anything he can do against them will now be considered a good thing. It's clear that whatever he considers a good thing in the morning becomes official company policy by noon, even if everyone else thinks it's stupid and it's diametrically opposed to policy as of yesterday evening.

ChatGPT becomes ChatRepair to automate bug fixing for less

doublelayer Silver badge

Re: That's just great, Ollie

Unfortunately, it's an approach I've seen in humans as well. It's exemplified by this quote:

"We observed that including useful information such as test failure error or even the failing test name itself can provide additional information like the type of bug (e.g. null pointer exception) and the expected correct behavior of the code,"

I've seen people take information like that, "fix" the bug, and never change what actually happens. Sure, they've put in something that checks for null and stops the program crashing, but they haven't asked questions like "where did the null come from", "is a null here behavior we accept or not", and "could anything else produce a null here that isn't the case I've just handled". Debugging isn't looking at what the program is doing and making it stop; it's looking at what the program isn't doing that it's supposed to, and making sure it does it.
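A contrived Python sketch of the difference (the function, data, and messages are all invented for illustration): the first "fix" silences the crash, the second asks what a missing value means and fails loudly where the assumption broke:

```python
# Two ways to "fix" a null-pointer-style crash in a lookup.
users = {"alice": "alice@example.com"}

def email_domain_bandaid(username: str):
    """The quick fix: check for None and stop crashing. The caller now
    silently gets None, and the real question (why is the user missing?)
    is never asked."""
    email = users.get(username)
    if email is None:  # crash suppressed, bug hidden
        return None
    return email.split("@")[1]

def email_domain_fixed(username: str):
    """The actual fix: decide whether a missing user is acceptable here,
    and if it isn't, fail loudly at the point where the assumption broke
    so the upstream cause gets investigated."""
    email = users.get(username)
    if email is None:
        raise KeyError(f"unknown user {username!r}: "
                       "registration should have created this entry")
    return email.split("@")[1]

print(email_domain_bandaid("mallory"))  # None: looks 'fixed', isn't
```

The band-aid version passes the failing test; only the second version restores what the code was supposed to do.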

Benchmark a cloud PC? No way. Just trust us, they work, says Microsoft

doublelayer Silver badge

Re: Benchmarks are important sometimes..

This is exactly the flawed assumption I was talking about. Your comment assumes two things:

1. "On-premises you have network people [...] to take care of external security."

2. "In a cloud there's no such thing [as network administrators]"

Both depend on how the network is administered. There are on-prem deployments missing administrators able to handle incidents, or whose administrators are not competent on security. Having the servers in a building you own doesn't make those admins pop into existence, nor does it automatically train them to put systems on secure networks rather than merely available ones. There are admins who are good at plugging in network cables and assigning static IPs but don't understand which things need public access and which don't, and when those people write the firewall rules, they often leave them more open than necessary, because nobody complains when everything is reachable. Deploying those servers in the cloud doesn't auto-fire those administrators, nor does it force a policy that prevents them from applying secure standards.

Your comment contains all the important details and somehow still misses the point. Wherever you put your resources, you should have someone who understands how to deploy them securely and is empowered to make sure it has happened. Leave them out and your on-prem setup will not save you; include them and your cloud setup can be secure. This is the same logic cloud salespeople used when they claimed that, because they were big companies with lots of security people, your deployments would be safe if you just moved onto their resources. That sales pitch was wrong, and so is its exact opposite, where on-prem deployments are automatically more secure. Nothing is automatically secure.

doublelayer Silver badge

Exactly, or even a variation with a predictable floor. For example, if you're sharing desktops on a big server, you could tolerate variation where your desktop gets more of that server's power when it happens to be the only one, or one of few, running at a given moment. That would be a nice bonus during low-utilization times, though it would also mean benchmarks taken then wouldn't reflect performance under a more typical load. You'd still need some agreed lowest level of performance so you know what you're paying for, and Microsoft's offering doesn't appear to have that.

I'm also still not sure what the point of the desktops is when you're running them from a machine that's likely just as powerful. Using cloud for more powerful servers, or for ones that need better connections to a lot of data, makes sense. Using it for user desktops isn't efficient, since you'll still be giving the users hardware that could be doing the job itself. I don't think there are many savings in downgrading that hardware to a thin client.

doublelayer Silver badge

Re: Benchmarks are important sometimes..

"The risks of getting things wrong are debatably higher too, as the number of "hacks" on S3 buckets shows."

That's mostly a standard failure to do the obvious security work, and in AWS's case they've since changed the defaults to help with the problem. Someone who doesn't check that their buckets aren't publicly accessible probably isn't checking whether the internal files on their public server can be accessed without authentication either. If you intend to shoot yourself in the foot by never checking a config, you can do that just as easily on cloud or off it. Competence to check even the basics is unfortunately not universal, and anyone who thinks that moving to cloud, or avoiding it, will fix that is looking in the wrong direction.

It is now safe to turn off your brain: Google CEO asked Bard to plan his dad's 80th birthday

doublelayer Silver badge

Re: Conspiracies et al

Wonderful collection of incorrect stereotypes you have there. I'm one of those young people, but it's pretty obvious to me that chatbots aren't intelligent whenever they have to do something that takes more than one web search. Ask any complex question about a subject you know a lot about and their lack of ability to reason becomes clear. This subject doesn't have to have any connection to the things you think are being "thought policed", whatever those are, and can be something as simple as mathematics (if somehow you think mathematics are restricted by the young, then I give up).

doublelayer Silver badge

Re: Asked it to respond as if it were the planet Pluto

Alright, I'll give it a shot.

Hi human. I'm Pluto, and I don't care about any of your problems. You're way over there. I couldn't care less what you do for your father's birthday. Also, tell NASA to cut it out with the probes. I've seen two of them just fired past me out into the rest of the universe and one that was poking around much closer. I'm sure they don't want those probes out there, and don't you even think about trying to land one over here. Bye now, and by the way, I'm going to last a lot longer than your planet will when the sun vaporizes it in a while.

Version 100 of the MIT Lisp Machine software recovered

doublelayer Silver badge

Re: RISC Machines

This depends on how complex something has to be before it jumps from RISC to CISC. ARMv9, with over a thousand instructions, or ARMv8, which has most of those plus a bunch of others because it also supports all the 32-bit instructions, is pretty complex. It's not just load, store, and some mathematical operations these days. There are a lot of instructions that execute complex operations or that work on multiple pieces of data in one go. Is that still as RISCy as RISC-V, which has about 60 instructions in the base set? If not, is RISC-V still RISC when it's used along with a bunch of extensions?

Plagiarism-sniffing Turnitin tries to find AI writing by students – with mixed grades

doublelayer Silver badge

Re: Saving Time??

It might have, since it removes the part of the process requiring original thought. I can't say I've done something like this, but the closest I came was in a literature class I took as a young student. The teacher of this class was not very good, and whenever they assigned a literary analysis assignment, original thought was not recommended. You could come up with an original idea for something contained in a book and rigorously defend it with quotations, comparative analyses of other books by the author, and verifiable or likely references contained in the source material, but if it wasn't the interpretation the teacher thought of, you would still score poorly. Meanwhile, if I listened to what the teacher said the theme was, paraphrased it, and backed it up with the first quotes that came to my mind, I'd get a good grade. Either that or I'm still deluding myself that some of my interpretations had value and I was always terrible at it.

In that case, I was saving time as a side-effect of writing to get the highest grade rather than the other way around, but it was more efficient to figure out what the simple answer was and write for that rather than to think of a novel one and find the information to demonstrate it. The work was reduced from thinking, proving, and writing to just writing. I was less happy about the result, but it took less time to get it.

doublelayer Silver badge

Re: ChatGPT isn't good enough yet

I think the suggestion is that, because the raw GPT output is too bad to receive a good grade, students could start with that and modify it into a good essay. The modification would be equivalent to the work required to write the essay from scratch. This works well as long as the GPT output is really that bad, because any student who knows what they're doing could create a valid essay and would probably realize the GPT output wasn't helpful. It breaks down if GPT can generate working answers for some questions without requiring the student to do a complete rewrite, which might mean that the proposed policy is only useful in the classes the poster teaches and not necessarily anyone else's.

doublelayer Silver badge

Re: ChatGPT isn't good enough yet

"My experience, from asking chatGPT to generate answers to various exam questions I have set, is that chatGPT answers would get 2ii marks at best, and that it is pretty obvious that they were not written by a human, or at least someone with a good grasp of structuring an argument and use of evidence."

I suggest that this may be due to a small sample set. If you are setting more advanced exams, your questions may be the kind that a chatbot can't answer plausibly. This will affect the lower level classes first, because the answers in those cases are simpler to generate without really knowing the content. Any student that takes something sufficiently difficult will find that GPT is no longer useful, but it could create problems for those taking prerequisites where the questions are easier. I'd expect this to be faced by teachers of younger students before they get to you.

doublelayer Silver badge

Re: Doesn't add up!

I think it's two independent statistics. Their model probably provides a confidence number based on how likely it thinks the text is to be AI-generated. That number is generated by an opaque process and isn't necessarily correct. They've looked at how well the model is predicting this, but not to try improving the results. Instead, they just ask themselves how high their threshold needs to be to have a marketable statistic for accuracy.

They need to do this because, if their false positive rate is too high, they'll have universities and students alike protesting at their offices. Students told that their self-written essays have been rejected because a computer decided it might be AI-generated and there's no way to prove that it wasn't are not going to be in a charitable mood, and they would be right about that. Universities don't want students to get that angry. To keep this rate down, it's likely that this software is letting all sorts of AI-generated stuff through on the theory that it's better to let a bunch of guilty students through than to punish more than 1% of innocent ones. I suppose I have to agree that false positives are a lot worse than false negatives, but I'm dubious that this software is great on either.

doublelayer Silver badge

Re: A rather important question

"the fundamental purpose of education itself (to facilitate individual growth of understanding) has been abandoned."

When you're growing individuals' understanding, they start by not understanding something that everyone in the field will need to understand. You need to teach them the basics and then verify that they understand them. Let's take computer science as an example. When you're doing early CS education, the basic code the students write isn't groundbreaking, because they're demonstrating that they understand basic concepts. One student's recursive factorial function will look a lot like everyone else's. Even at higher levels, students doing the same assignment will end up producing similar results. I graded a bunch of students' basic HTTP servers in C. Yes, they all looked a little different, but nobody came up with a radical reinterpretation of how to implement a protocol as simple as HTTP. They were demonstrating that they could write a program that implemented a spec, manage network resources to deal with a bunch of incoming connections, and not use up too many resources in doing so. Had they gone to any of the other schools that have an HTTP server as an assignment, they could have gotten someone else's code and I wouldn't have been able to instantly detect it. That's it.

Essays can be similar. Not all of them are new original research. Some of them are intended to check that the student knows about the topic they're talking about, has correctly interpreted what they've read, and can get results from that. Eventually, they will need to do more original work than that, but not at the earlier levels where demonstrating understanding and the ability to describe what they know is being tested. As things get more complex, sending a prompt through GPT will become worse and worse. Even for earlier essays, GPT is liable to end up spitting out wrong answers and causing the cheaters to submit flawed essays. That isn't guaranteed, though, which is why chatbot-generated essays are still an issue.

doublelayer Silver badge

Re: pointless

Depending on the level, the essay might have two purposes: making sure they understand the topic and helping them improve their writing. They'll eventually have to write something to express their ideas. Those who do not do well with the written word probably need to learn more about how to write well, because in our world, it's rarely optional. That's not universally true, but low-level education is trying to train everyone in general skills, and writing is one of the most general ones out there.

"you're not going to get kids who don't want to do the work to retain any of the knowledge by forcing them to write an essay."

How are you going to get them to retain the knowledge? Some kids are lazy about doing the work in whatever form but understand it, and that's great, but for those who don't want to do the work and don't understand it by themselves, do you have a plan for checking whether they understand it and fixing this problem? I don't think homework in a different form is going to change things very much. You'll have students who don't want to present in front of a class, don't want to have one-on-one discussions with teachers about the subject, and don't want to do the homework assignments. That's a problem with not wanting to do the work, not an indication that we need to change what the work is. There are cases where the essay is poorer than alternative methods, and change makes sense in that case, but not just to deal with students who don't want to do it.

doublelayer Silver badge

Re: pointless

"If the essay can be fully automated by ChatGPT it likely means the entire subject can be automated (and removed from human jobs)"

Yep, definitely. Writing an essay describing how the cleaning process of a historical oil spill went wrong can be automated, so let's just let the AI figure out how to do the next one. The point of the work isn't writing it up, but understanding what didn't work, being able to explain why it didn't work, and being able to use that to figure out what will work later. Why does C4 explode? Easy to automate, but future chemists need to know that not so they can answer a quiz question, but so that their new compound that hasn't been described online yet doesn't explode unless they wanted it to. Other examples are available.

It's been that way no matter what tool was used. A simple calculator can automate all the primary school arithmetic assignments as long as you can explain to a student that you read from left to right, push all the buttons that have the symbols, then write down the thing the calculator says. If you let them use a more complex calculator, it can handle the next few levels as well without much human input. This would be great for jobs that just involve adding up numbers, but spreadsheets eliminated most of those. We still teach it so that the people know what numbers and operations to put into the spreadsheets, because numerical jobs don't involve a list of formulas people just want solved.

Tesla ordered to pay worker $3M-plus over racist treatment

doublelayer Silver badge

Re: How about direct support absent the middleman.

What If instead of cash, the company was instead on the line for mental health needs.

All your mental health bills get invoiced to their accounting department to be paid immediately.

We have received your expense report of April 3rd. We will need to see receipts from the providers along with a signed statement from the care provider that you did in fact discuss mental health issues related to this company's previous actions. Please submit this through the web portal, you know, that one that someone wrote in 2003 which has a broken script nobody is ever going to fix and a 60 kB upload size limit. Remember that all expenses must be submitted within seven days of the service being provided. We will analyze your claim at our earliest convenience.

How is this supposed to help over the company simply giving them the money to pay for this stuff? It certainly is worse for the victim, and the company also incurs the cost of maintaining this service, which adds complexity over being fined and having to pay that fine. I'm sure some companies would accept the trade, but those companies are also going to be the ones who find a way not to provide very much. They're the kind of companies who would quibble over which expenses are about mental health and which ones aren't, and I don't have anything approaching a list.

In addition to all that, the penalties assessed in a trial like this aren't just for mental health expenses, but also a punishment for breaking the law in the first place. If you're worried about people pursuing incorrect actions to get money, I'm not sure how often it happens, but I can accept the logic that it could. The solution to that would appear to be to give the victim some amount for their suffering, then assess a punishment fine that doesn't get sent to the victim. You still have to do that or a company won't stop the behavior.

In the battle between Microsoft and Google, LLM is the weapon too deadly to use

doublelayer Silver badge

Re: Futile

You haven't noticed a change. You've noticed what already was the plan. OpenAI is a company that wants to make money. They're not going to release their models for free as open source. They were never going to, U.S. government or not. They didn't do that with any of their previous flagship models either. OpenAI wants everyone to pay them to integrate ChatGPT into whatever workflows they can, and they'd like them all to do it right now before people realize that it's not as useful as they wanted it to be. I can't automate the boring parts of my job by having a bot write code, because the code won't work right and I'll spend longer fixing it than I would writing it.

Why UK watchdog abandoned its Apple monopoly probe

doublelayer Silver badge

Re: time limits stipulated in the UK's 2002 Enterprise Act

"Also remember to Apple the EU is a relatively small market compared to China"

China's population is over twice the size of the EU's. That makes it a larger market if you could have all of it, but it doesn't make the EU market inconsequential. Also, Apple doesn't have and will never have the entire Chinese market, so in practice the EU is the bigger one for them. Let's look at numbers:

Apple shipped about 17 million iPhones in China for the fourth quarter of 2021. That was the highest quarter of that year. They had 22% of the shipments that quarter, but in other quarters, that fell to 12%. In the same quarter, Apple had 31% of the EU shipments. The larger EU markets also have significantly higher market share for Apple. 40% of Germany's market, 35% of France's, 30% of Italy's, and similar levels in most smaller EU members mean that Apple sells a lot of phones there. They would like to increase their popularity in large countries and have been doing a lot with India to be able to do so, but they would be very unhappy to lose the EU market.

Hey Siri, use this ultrasound attack to disarm a smart-home system

doublelayer Silver badge

Re: Voice filtration may help

"If there's a way to turn off the "specific voice" component, so ONLY pressing the button would activate the assistant, that would pretty well stop these kinds of attacks."

There is, and if you don't train it on your voice, that's the default.

"Bonus points if the microphone doesn't get turned on unless the button is being pushed, i.e. the "assistant" only listens when the button says to."

Yes, it has that. Because it's on a phone, the microphone is still connected, but if you don't have the voice activation turned on, Siri won't be processing any input from the mic.

doublelayer Silver badge

Amusing typo

"And finally, iPhone 6 Plus wasn't vulnerable to either attack, likely because it uses a low-gain amplifier while more recent iPhones tested use a high-grain amplifier."

I'd like to try a high-grain amplifier. Do you think that it's also nutritious?

doublelayer Silver badge

Voice filtration may help

They have demonstrated that they can activate a lot of voice assistants, but all but one of them will talk to the user while executing the malicious commands. That gives the user a chance to hear that something is going on, and more importantly, for most of the interaction, they can simply shout no to cancel it, because most of the prompts, such as authorizing a transaction or confirming a lock, are yes or no questions, and the local voice will be more easily detected than the ultrasound.

The only one they can activate without making a loud sound is Siri, but that one will pose some extra problems. Unlike some others which listen for anyone saying their wake word, Siri is activated by pressing a button or by a specific voice. Activating the voice wake word requires the user to train the phone to recognize their voice specifically, and it then doesn't generally activate on someone else's voice. If you have a friend with an iPhone, try it and see if theirs turns on. This means that an attacker can't just create a single track to activate Siri on any device, and if they don't already have a recording of the victim saying the wake word, they can only hope to activate it with other samples. This might provide some insulation against practical use of the attack.

Paid and legacy Twitter verification now indistinguishable

doublelayer Silver badge

Me: "Because posting on Twitter isn't advertising."

You: "Sure it is. There's a whole ecosystem based on 'influencers' promoting their 'brands' (ie themselves), which then translates into advertising and sponsorship deals."

Which isn't advertising, you know, the paid-for product kind of advertising. It's useful, but it's different. Twitter also does advertising, it's called advertising, and it works like advertising. Posting there is just posting, and although people sometimes do it to make money, they sometimes also do it just for its own sake.

Me: "Posting on Twitter is free"

You: "Nope. Distributing tweets costs money, just as distributing newspapers does."

Yes, but Twitter doesn't charge for it. It is not free to Twitter. It would be free to me. It is still free to anyone else, tick or no tick. Twitter can change that but they have not.

But if it's 'worthless', why all the fuss about removing the tick? Surely all the complaints about being de-loused suggest current tick users see some value in the service, but just don't want to pay for it.

I don't care very much, and the people who have millions of dollars aren't choosing not to pay for it because they don't like spending the $96 per year. They either don't like Musk and don't want to give him money, or they think the tick is so useless now that it's not worth $96 a year (and I would agree with them on that). They choose to post about that, but that doesn't mean they care strongly about the issue, since Twitter appears to be used to post random thoughts people have, not all of which are of major importance to them.

So what you're saying is that it's possible to do some sender verification on tweets that don't have a tick? If so, again why are people so bothered?

Of course it is. Check the handle. You can't forge those. The tick was useful for people who were too lazy to do that. Now it isn't useful for anything. Again, I'm not bothered and I don't think other people are as bothered as you appear to believe. Just because The Register chooses to write an article and I choose to write a comment on the article doesn't mean either of us cares that much. So Musk broke another feature of Twitter; that's what I expect these days. Some people who liked Twitter might complain about someone coming along to smash parts of it up, but that doesn't make it everyone's opinion.

doublelayer Silver badge

"Nope. Closest model is paying for services. Businesses tweet to drive traffic, subscription fees and ad revenues. Businesses would expect to pay for TV or radio ads, why should they expect the Internet to do that for free? Especially when again, they've been busily implementing paywalls,"

Because posting on Twitter isn't advertising. They also advertise on Twitter, for which, surprise, they pay money. Posting on Twitter is free. Twitter is within its rights to make it not free, but they haven't, so people don't voluntarily pay them for the privilege. The paper could pay for a tick icon, but like everyone else's tick icon, it is worthless. I don't pay for worthless things and neither will most companies.

"look on the bright side, if the NYT, Pelosi or even the Whitehouse lose their ticks, it'll make it easier for them to deny they ever said stuff."

No, it wouldn't, since it is still easy enough to check the history without a tick icon, but given your list of suspects, I doubt you ever do that level of research.

doublelayer Silver badge

Re: Zuck on that

"here we have a bunch of people who could afford to pay refusing to do so. And the only principle in play seems to be that they're cheap."

Maybe their principle is that they don't pay for nothing, and right now, their tick means nothing. It no longer serves to verify to people that they are who they say they are, since it only translates to "they give us money". If El Reg came out with a new feature where I could pay them something and they'd turn off the ads, I'd consider it (I'm blocking the ads again because something must have gone wrong with the ads in the past, but still I'd consider it). If they had a plan where they took away my badge icon which I didn't ask for but I could pay to get it back, I wouldn't be paying. It's not because I'm cheap, but because the badge icon has no value to me and a meaningless blue tick may have no value to the people who have lost it now that it no longer means what it once did. It may not have had much value to them even then, but it certainly won't now.

FTC urged to freeze OpenAI's 'biased, deceptive' GPT-4

doublelayer Silver badge

Re: It is singularity already

"what is there in the process of creating a statistical model that is explicitly saying "do not be creative"?"

The goals set for the model to meet. In most cases, the people making the model didn't try to create criteria for the model creating something new. The only criteria they put in were for likelihood of similarity to existing text for chatbots or likelihood of corresponding to captions for picture bots. Neither was trying to have their system create stuff from scratch.

That would be difficult to do anyway. A lot of creativity is basically taking a random idea that is biased by but not directly from learned experience, then subjecting that idea to testing. My brain can come up with a lot of random things, but a lot of those things either need refining to make them good ideas or are just rubbish. The important aspects to human creativity which any computer will need to do are idea creation and idea filtration. So far, neither has been performed by the models. They could do idea creation by pointing a random number generator at their input, but that wouldn't be biased toward good ideas so it would generate a lot of bad ideas. They put only a little effort into filtration, but they filter undesirable output after creating it, not by filtering the original concept which is why ChatGPT will occasionally output something they tried to filter out while humans tend to have more reliable filters for what they consider good or bad things to say.