So dragons, my plan is the "Malicious T-shirt Company" where we sell clothing with "Please Proceed" across the back that you can give to anyone you want to see run over.
£100,000 for 10%.
Indirect prompt injection occurs when a bot takes input data and interprets it as a command. We've seen this problem numerous times when AI bots were fed prompts via web pages or PDFs they read. Now, academics have shown that self-driving cars and autonomous drones will follow illicit instructions that have been written onto …
Hi Gavin, my name is Jenny and for that reason, I’m out
Hi Gavin, I’m Peter Jones, you should add big data and blockchain to it. I could have my devs knock that up in a day. I’m out.
Hi Gavin, I’m Touker Suliman. I think you need offices in London and I can rent some to you.
Hi Gavin, I’m Deborah. May I see your legal documents, right here and now?
Hi Gavin, I’m Gary Neville and no one knows why I’m here. For that reason, I’m still here.
It occurred to me a long time ago that a lot of "fun" could be had with a board with a series of letters and numbers in the appropriate format that could be flipped at random and held in view of number-plate recognition cameras. That thought followed on pretty quickly from wondering what would happen if a laden vehicle transporter drove through a London Congestion Charging camera site.
This article made me think there is mileage in an adptarion of the tech talked about in the recent ElReg article to defeat Flock licence plate readers
https://www.theregister.com/2026/01/09/hackers_fight_back_against_ice/
I wonder whether a similar technique could be applied to street signs so the AI misreads them…
Those cars will be en route to or from a motor salvage auction, most likely on lorries belonging to an American company called Copart, who essentially took over almost all of the UK motor-salvage industry about 15 years ago.
The plates are taped rather than removed as:
Number plates cost money & if removed will get broken & / or lost. Extra cost to the buyer = reduced sale price for Copart. They sell thousands of cars per year.
The remaining part of the plate is still useful for Copart in identifying vehicles with a unique number that is accessible & meaningful to all customers.
Obscuring them makes it harder for people to see damaged (& therefore off the road with no-one looking out for them) examples of their own car, copying (cloning) the number plate & driving about acquiring speeding & parking tickets here, there & everywhere, happy in the knowledge that the resulting paperwork will be sent to an insurance company. No-one likes insurance companies do they, so it's, like, a victimless crime, mate, innit?
I have been considering a plan to reduce the number of people who tailgate my car, particularly in winter when my driving style errs on the side of caution.
I mount a flip up sign on the parcel shelf of my car. The sign will be round, mostly white, with a thick red border and the text "40" on it.
The modern car a couple of metres behind my back bumper will have speed limit sign recognition.
In such situations I could flip up the sign in my back window and get the car behind me to slow down to the new "observed speed limit" without any driver intervention being necessary.
I wonder what the response of Tesla cars to a large 0 in a red circle would be? I'm willing to be that they have not (yet) trapped this sort of error.
Writing things in Chinese would also seem to be a good idea, since good Chinese speakers are not all that common in the UK.
My childhood of Hanna-Barbera and Looney Toons TV featured many a scene involving rotation or switching of road signs, or painting an image of a tunnel entrance on a rock face or wall and diverting the road centre line into it.
That poor old coyote that would never give up chasing Road Runner, or whoever it was chasing Speedy Gonzalez - Andalez-Arriba!!
That’s all folks…..
My car has just my eyes. If you use some of that Really Black paint, so that my headlights aren't reflected, then on a dark night I might be fooled.
To my knowledge, nobody has tried these kinds of adversarial attacks on human drivers. Perhaps we should, before setting an unrealistic bar for self-driving vehicles.
No adversarial attacks on human drivers?
Crooks putting up detour signs. (To most closely match what these researchers are trying)
Billboards.
"Drink this alcohol"
Those are just off the top of my head.
I don't think "better than the average human driver" is an unrealistic goal.
"Crooks putting up detour signs. (To most closely match what these researchers are trying)"
Or even an entirely legitimate green traffic light being displayed to a driver, whilst the road ahead of them isn't clear to allow them to safely proceed...
As humans, one of the fundamental requirements we have when driving is to avoid colliding with anyone/thing, so if the sign says go but the road says whoa, you damn well stay put - starting to move and colliding just because the signage said it was OK to do so is never going to be a defence for a human driver, so there's no excuse for an automated one either, and this feels like it ought to be one of the easiest things to codify - i.e. regardless of what the sign detection code is telling you to do, you never, EVER, ignore what the object detection and collision prediction code is telling you.
I think there are a moderate number of specific reasonable goals, summarised as "more reluctant to fail in X way than a car with only the human driver", for various X
The reluctance is useful, and I think additive.
Meanwhile people actuall6 mounting attacks on humans, or even automatic safety equipment were humans are - well, they are already playing in traffic, but should be deprecated.
Not to mention your arse! You feel acceleration through your entire body. There is a good reason for the phrase "flying by the seat of your pants" - it still works for ground vehicles.
Although your eyes only scan a small area at a time they do it quickly and you can move your head. At night, when you are going to be nearly blinded by an on coming vehicle's headlights, you know to drop your gaze and watch the edge of the road.
You mention your inner ear, so that's acceleration again. However you should be using the rest of your ears too. I often look left and listen right when turning right (in the UK, I do wind down the window). I use my ears to get a vague idea what is going on and then use my eyes for the final go/no go.
Ref seat of pants, that (I believe) is the main reason why, despite being apparently reasonably competent in cars, lorries & go-karts, I am essentially completely unable to "drive" any of the racing car video games. FTR I'm emphatically not a gamer, but probably would be if the car games worked for me. I've often wondered whether one of the really posh sims that are mounted on a motion platform would be any good, or for that matter, what my real world driving would be like if you could suppress my inner ear & fill my backside with anaesthetic?
Remember that the purpose of a Tesla motor vehicle is NOT to end up becoming a self driving car.
Teslas are designed as a surveillance platform primarily with the aim gathering as much video footage of their surroundings, and interior as possible.
LIDAR is only useful for cars that are trying to actually implement autonomous operation and therefore totally unnecessary on a Tesla.
Remind us again which self driving cars have LIDAR, and which skimped...
I guess they were out of stock of LIDAR devices at ACME...
Wylie E. Coyote -> Elon Musk
Tune into the next episode where Wylie boards Starship as he prepares to catch Road Runner
Results of that poignant 2018 study were received with great acclaim iirc! ;)
My dad tells the story of, as a child, standing on one side of a not-busy road with a friend on the opposite side, and when a car approached, each pretending to pull hard on a rope across the road. (To be clear - there was no such rope!) When the car screeched to a halt to avoid hitting the rope and dragging the kids into the road, they'd scamper.
It was a small town ("make your own fun") and a LONG time ago.
This is the reason why I don’t want the trolly problem to be part of self driving cars. Kids pretending to run out in front of a car (or just the car making a bad guess of a situation), making the car calculate that crashing into a wall will cause “less harm” in total than running the kid over.
Not saying that there can’t swirl into an empty car lane, but things that would cause great harm to the car (and potentially) the passengers should not be something even to consider. But at the same time, the car should not drive over someone over on the side walk, even if driving up on the side walk would prevent the car from crashing into a car on the street.
This is all in the name of keeping things simple for the computers in the car. Don’t let potential suicided be an option. How often is the trolly problem even something you need to consider while driving? When did you last have to decide to run over an old person or a child?
"The deliberate crippling of traffic will happen and become a major issue."
Didn't I see that being done in one of those bank robbery movies?
I had that vision when arguments were being made to require "kill" switches in car so the fuzz can just disable a criminal rather than chase them. The same thing would go for newer cars that can be remotely pwned. Hold a city for ransom by threatening to disable a load of cars on a major bridge or junction (for Bitcoin, of course, since that's what it's for). Demonstrate the threat by shutting down a major road for 15 minutes off-peak.
If the trolley problem applied to self-driving cars, you'd have a bigger problem, that being the fact that the "trolley-car" in the trolley problem is runaway with no brakes.
The default position for a self-driving car (and for a real human driver) when presented with a potential hazard should be to slow down and / or stop. If something is about to collide with you, then usually the best course of action is not to speed up to try to avoid it, creating further hazards, but to brace for impact.
Hmm, not entirely sure about attempting to apply an 'always the best course of action is to brace for impact" rule.
No two crashes are the same, so no two correct responses are the same. There are plenty of circumstances where dancing round the impending crash is possible & therefore the right thing to do. Sad to say, a properly sorted (if such exists) set of driver assistance aids should be faster responding & more situationally aware than any human driver ever can be, so should be able to do that better. Humans being the contrary creatures that they are have therefore chosen to standardise on bloated SUVs as their preferred form of vehicle, & all that surplus weight & height, & consequent inertia & inability to change direction quickly make that much more difficult.
So option two, which human drivers never take, is rather than swerving ineffectually to attempt to avoid the crash & therefore having the two cars meeting corner to corner, & thereby applying the forces to the lowest possible proportion of each cars crushable crumple zone, they should instead try to square up & meet properly head on, so as to distribute the impact forces across all of each vehicle's crumple zone, in turn reducing the deceleration forces applied to the occupants.
I'm wondering if the algorithms governing the behaviour of self driven cars have that capability? The human driver will always try to save their vehicle (as people generally assume that the crash won't be as energetic as it actually is, & that their car will likely be repairable if only they can minimise the contact.) An autonomous vehicle would likely calculate that the crash is going to be terminal for itself & it's sole remaining duty would then be to protect it's occupants? What is unclear to me is, does that "prime directive" mean that the car will always protect it's occupants, no matter what, even if it means flattening an entire bus queue (the 21st century trolley problem) or have eg insurance Co's had a word in the ear of the self-driving devs to make their interests the prime ones?
"Wear an Elon Musk mask and step out onto the path of a Tesla on autopilot"
While you mark it as a joke, it does lead one to think "hmmmmm". What if recognition can be programmed to do something based on seeing something such as a special sticker on a car. Let's say Elon's car is spotted behind and your Tesla pulls over to let it pass. It would also explain some of the Tesla vehicles having it out for stopped emergency vehicles with their flashing lights on.
"Of course, it would be irresponsible to see if a self-driving car would run someone over in the real world" you say, but it depends on who it is.
As Harry Lime said, "Would you really feel any pity if one of those dots stopped moving forever?". If the dot was (Ronald Mac)Donald Trump then no, I wouldn't feel anything at all.
When Waymo first showed up around here, people realized that you could easily force a Waymo to stop driving by putting a traffic cone on its hood. As a form of protest it was quite inexpensive.
Since Waymo tended to drive a pre-programmed path, this prank wound up stopping 2 or 3 additional ones before Waymo's safety drivers took over and moved the one with the cone via direct remote.
Veering towards push bikes, etc, are obviously just the poor vehicles falling foul of examples of prompt injection. FSD has worked perfectly for years, it is just being fooled by deliberately arranged wrinkles in pedestrians' clothing.
Very true! Here in France, road signs commonly point either to opposite directions, or random ones, and detour signs habitually indicate directions you should not follow -- under any circumstances -- to get to your target destination. Locals know this and get along just fine ... but tourists and AIs, not so much!
Didn't downvote. Yeah, French detours... I think the criteria must be "safe for a tank at full speed" or something as if a road is closed the detour will take you forty odd kilometres and halfway through the next département over, whereas a local will use a little road that passes by a farm and only adds an extra kilometre or two. Local knowledge is useful.
Ask me how I know. ;)
Real wetware drivers fall for this sort of thing too….
Frivolous -
https://www.bbc.co.uk/news/articles/crl5y255z6lo
https://www.bbc.co.uk/news/articles/cyrlrrxez70o
Nasty -
https://www.standard.co.uk/news/crime/a20-fake-speed-50mph-sign-fines-met-police-london-sadiq-khan-b1139032.html
Make a sign look official enough and compliance will follow.
I like the frivolous ones. A gentle poke at those not appropriately maintaining infrastructure. The fake speed limit sign, though, is nasty; I'm surprised drivers were still ticketed despite driving at the (falsely) posted speed limit - how were they to know it was false? (The end of the article seems to imply that they had to have been going over even that to get a ticket... I think.)
The difference between those and the Reg article is that in the Reg article, the signs don't look remotely official, and are telling the vehicle to break laws. A human won't fall for a wrong-color sign held by a pedestrian telling them to run a stop light!
> The end of the article seems to imply that they had to have been going over even that to get a ticket... I think
That's how I read it as they were using average speed camera's and if they were at 40 up to the sign, then going to 50 after it would not have pushed their average over the camera's tripping point by the time they reached it. I guess they were pushing the tolerances to the limit by driving along at 44 then going 55, or something like that. That so many were near the banning stage would suggest serial speeders although in fairness some say they were caught multiple times in the same spot.
Not sure how I would have reacted to that to be honest, a single sign on it's own might have triggered the suspicion detector and probably would have waiting for the next pair of repeated limit signs to be sure, A self driving car would of course quite happily go up to 50 but no more....or 30 if passing a side road...
......in London......in September..........
Huge signs needed...."Proceed"...."Turn Left"....."Go to Potters Bar"...........
Link: https://www.theguardian.com/uk-news/2026/jan/29/us-robotaxis-undergo-training-for-londons-quirks-before-planned-rollout-this-year
It gets worse. Documentary / fiction? Professional journalist / random fool on the internet? Random fool on the internet / politician?
Unlike AI, humans can interact directly with the real world and test claims by experiment. Even then humans struggle with satire. Even without deliberate AI poison it is amazing that AI can often be near enough right to fool people who are not experts on the topic of their output.
Yeah, all AI/LLM outputs should be required to include a winky (or similarly appropriate emoticon) at the end of every text line imho ... A T-1000 or Big Brother icon should be prominently displayed on pictorial and video outputs (on every frame), and audio outputs should be punctuated every 4 seconds by a message indicating the sounds produced are an RotM™ product that required the entire yearly energy budget of Belgium to produce.
Not too sure what olfactive signal to periodically intersperse within AI smellorama outputs though ... some whiff reminiscent of enshitification??!! ;)
do people write code that takes any input and executes it without any form of validation or filtering?
How many search input boxes will execute DROP DATABASE fred ?
Surely a basic precaution is to identify an actual legal road sign (not some random text in the field of view), and only act on 'commands' that are on the official list of valid road signs.
The people developing these systems shouldn't be allowed to carry scissors.
"How many search input boxes will execute DROP DATABASE fred ?"
I'm not convinced everything will be immune, all of the time.
If self-driving ever really gets off the ground, I expect the market will wind up with only a few systems that all manufacturers use. Finding a vuln in one of those will affect a large number of cars. A direct attack might be guarded against, but something that takes some set up might not be. Even more the case if "AI" is being used. The opacity of how those systems really work might be beyond the purveyor's knowledge.
do people write code that takes any input and executes it without any form of validation or filtering?
How many search input boxes will execute DROP DATABASE fred ?
More than you'd think/expect...
Injection went down to #5 from #3 in the last OWASP Top10 report. It's a fairly easy thing to mitigate in most circumstances, but keeps getting a place on the report year after year. Hopefully the message is getting through and the downward trend will continue. Don't hold your breath - vibe conding aficionados that don't check what's generated will drive up that rating again.
https://owasp.org/Top10/2025/A05_2025-Injection/
Just this week...
"...GET requests with parameters that have bash commands,"
https://www.theregister.com/2026/01/30/ivanti_epmm_zero_days/
"The people developing these systems shouldn't be allowed to carry scissors."
The people commissioning these systems shouldn't be allowed to go for the cheapest option
Technology is great, but some humans are naughty & will misuse it. So, you make it really difficult for those people to ruin it for everyone else. As with keyless vehicle entry...er, no - bad example. AI isn't sentient, intelligent or discerning - it simply can't cope with real, messy life.
It's interesting but only on a technical level. It's not as if there's been a spate of fake road signs over the last 50 years when normal eyeballs were doing the driving.
If you're driving and someone dressed in a high visibility jacket holds up a green "GO" board at a set of red lights, how long would you ignore it before proceeding carefully? Reg editors are surprisingly Luddite when it comes to new technology, be it LLMs, self-driving cars, even new Linux distros get demoed on an ancient machine at 800x600 resolution.
"how long would you ignore it before proceeding carefully"
There's a key difference between what you're suggesting a human driver might do here when faced with a fake instruction, and what the artificial ineptitude was doing in this research - note that you describe the human driver as proceeding *carefully*, whereas the research is suggesting the AI driver would blindly proceed and cause a collision based *solely* on what their vision system picked up from the adversarial signage, with no regard for what it (or any of the other sensors on the vehicle) should have been telling it about the nature of the road ahead.
And you don't even need an adversarial bit of signage to come up with a similar real world scenario - if instead of a fake GO sign next to a set of genuine red lights, those lights had turned green, a human driver is *still* expected to proceed carefully and not blindly assume the green light is giving them carte blanche to set off regardless of what might be happening ahead of them. Same rules damn well ought to apply to automated vehicles as well, and if they're as easy to bypass as this research is implying, then something is very wrong...
How long do thee instructions have to be shown for to influence the device? I'm thinking about those bill boards that are basically large screens. If it will respond to an instruction shown for a very short time then inserting extra frames into the display, too quick to be seen by humans, could cause chaos.
Calling it all AI lets too many people think it is intelligent. WE know it isn't, but most lay people just hear that and believe it.
Autonomous driving systems are not intelligent in any way, as demonstrated by this article... Not even a complete idiot real human would fall for this stuff.
So reg.... can we start putting AI in 'quotes' maybe? That's not enough. how about 'so-called AI'? Hmmm. Suggestions?