Operating as Designed as a defense
Tesla has prevailed in a crucial Autopilot death lawsuit in the US, with a jury today deciding the automaker's software wasn't at fault in a 2019 accident that killed a Model 3 owner and seriously injured two passengers. Returning a verdict nine to three in favor of Tesla not being at fault, jurors in the city of Riverside …
Victim blaming as a defense. Tesla's keeping it classy as usual.
And I hate to sound all conspiratorial here, but Twitler has been able to find extensive logs on cars that caught fire and burned up, and all kinds of other things, so he could go on Twitter and... blame the victim... starting to detect a pattern here. It seems a bit convenient that they couldn't get any data from this car before it crashed. Maybe it's legit, I don't know, it's just sort of hard to trust anything from a company that has demonstrated multiple times it has no qualms about violating the law.
There should be a law that if a crash is detected then, aside from any accident notification system (like automatically calling 911), all data access to/from the car is disabled. If Tesla engineers are able to remotely access a car, or the car uploads the logs of the accident and they can see it will be bad for the company, they can easily delete incriminating evidence or say that nothing was recorded. Or edit it so it only shows data up until a certain time.
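The proposed rule could be sketched as a simple latch in the telematics unit: once a crash is detected, the one permitted action is the emergency call, and all remote access is refused from then on. This is a minimal illustration of the idea; the class and method names are my own invention, not any real vehicle API.

```python
# Hypothetical sketch of a post-crash telematics lockout.
# All names here are illustrative assumptions.

class TelematicsUnit:
    def __init__(self):
        self.crashed = False

    def place_emergency_call(self):
        pass  # e.g. eCall / 911 notification

    def on_crash_detected(self):
        self.place_emergency_call()  # the one permitted action
        self.crashed = True          # latch: cannot be cleared remotely

    def handle_remote_request(self, request: str) -> str:
        if self.crashed:
            raise PermissionError("post-crash lockout: remote access disabled")
        return f"ok: {request}"
```

The point of the latch is that it is set locally by the crash sensor and cannot be unset over the air, so the manufacturer's engineers never see the post-crash data before investigators do.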
A car that's "always connected" is a very different thing compared to cars in the past, where the vehicle would end up in the hands of the police and (so long as they weren't responsible for the accident) the evidence would be safe. Giving privileged access to the company's engineers before a case has even begun gives plenty of time for the lawyers to get involved and do whatever puts the company in the best position legally. Tesla has done enough shady stuff in the past that I would not trust them at all - I hope that their engineers aren't allowed anywhere near a car, and that third parties, using information Tesla is required by the court to provide, are the ones who access whatever data is stored on the car, which is then provided equally to both Tesla and the plaintiff.
Not saying other automakers should be trusted either, but no one else is selling cars claiming to be "full self driving" etc., and they aren't quite as "always on" as Tesla's; even if the engine control side isn't permanently online, the entertainment side usually is. My 2021 Audi requires paying them around $400 a year for various features that need connectivity (I assume a lot of that pays for the cellular subscription), but since it doesn't provide anything I want - CarPlay takes care of maps and so forth - I don't pay for it. But the car is still connected: I still get a message that the MMI interface version has been upgraded when I start the car every few months.
" If Tesla engineers are able to remotely access a car, or the car downloaded the logs of the accident and they can see it will be bad for the company, they can easily delete incriminating evidence"
That's what comes to mind when reading this statement:
"(Tesla)...also claimed it wasn't sure if Autopilot was even engaged let alone defective as described."
They log thousands of data points, it's literally unbelievable that they don't log whether Autopilot was engaged or not.
And as to this: "Tesla argued in the case it wasn't liable for the accident, as Lee had allegedly consumed alcohol before driving...". Not sure what that has to do with anything. Even leaving aside the legalese 'allegedly', 'consumed alcohol' could be applied to anything from one beer to an all-night binge, and 'had' could apply to anything from hours before to just stepped out of the bar. Since this was a road fatality I would expect a blood alcohol test was par for the course, so either he was over the limit or not (and the fact that Tesla used that weasel wording rather than "he was over the legal limit" is a strong indication that he wasn't).
And even if he was over the limit, that's not a defense that would hold for any other part of the car. If a defective axle catastrophically fails and the car crashes, there's nothing any driver can do about it, drunk or sober. Seems like it's simply a case of Tesla having lawyers who are very good at sowing doubt and manipulating jurors.
It is creepy seeing completely empty self driving cars weaving around SF's streets, automated. I saw one following an Uber driver, who quickly pulled to a stop to let someone out. The Waymo SDC in an instant flicked to the right to go around the Uber and continued on. It was very human-like and very slick.
I was impressed. But that was just one instance on a road it must have been trained on countless times. I can see it all working, if conditions are as they were during training. Which isn't a given. There seems to be little initiative in them.
They can probably work well enough most of the time in well-structured environments (cities, major roads), but I doubt they would get very far on some of the rural roads in the UK, where even a human can find it hard to work out the route down the road.
As an aside, an Indian friend of mine is wondering how they'll cope where he lives - a road marked with three lanes generally has 5+ streams of traffic in it, and the "rules" to merge into it from (say) a parking lot are basically:
1) Move nose as close to the nearest stream as possible; then
2) Slowly force your way further out; then
3) Wait until a vehicle thinks it cannot pass (at which point there are n-1 active streams); then
4) Pull out, starting a new stream.
He says he can see an owner picking up a new vehicle, instructing it to "take me home", and never getting out of the lot...
About a year ago I collected my new car. It had several "driver assist" enhancements.
The lane keeping warning is just that, a warning (the steering wheel shakes and becomes stiff to turn). It will keep the car in the lane on a very gentle curve, but I have not tested it too rigorously: one, I am chicken; two, it throws a hissy fit if I don't use the steering.
The "automatic cruise control" will maintain speed, up to the car's idea of the speed limit*, and react to the speed of the car in front. This system is more "self driving" but I have never, for the reason stated first above, given it full control when I think the closing speed could be too great.
The crucial thing is that there is no claim made about self driving in any way. You know the thing, BIG BOLD WRITING IN RED etc.
Rarely is the self driving aspect of use, because I don't trust it. It has convinced me that I was always right to be sceptical of Tesla's Autopilot.
* Mostly correct but sometimes a dangerous thing - dropping from 70 to 30 mph for no apparent reason. You get shouted at by other drivers.
I deliberately purchased a poverty spec car precisely because it was unburdened by any driver assist options. I've been able to stay within the lines since I started driving nearly fifty years ago; when I find myself unable to do so I shall stop driving.
The only remaining tool is a basic cruise control: I tell it to go at 120kph and it goes at 120kph; if the car in front is doing 110kph then it's my responsibility to slow down or go around it.
Yes, I am aware that this is an unfashionable view.
I use the driver aids when the road is quietish, it makes a surprising difference to how tiring the journey is.
But on busy roads they get influenced too much by other vehicles' different interpretations of how to drive, so I don't use them. I just switch them off when I don't want to use them. Simple concept.
Ah yes, I also hate power steering, ABS, automatic transmission, parking sensors, auto dimming/adjusting mirrors. In my opinion my C reg Rover Metro 998 City with a manual choke was the pinnacle of automotive design and anything else should be dismissed.
It's progress and becomes the norm very quickly. Embrace the change, cars now are a much nicer and safer place to be than they were years ago.
I wouldn't want a car that used its perception of the speed limit, period. My car has a speed limit sign camera and the sat nav has GPS based speed limits, and on a variable speed motorway both can be wrong! The camera has issues with the illuminated signs, often reading 50 as 80 instead, and isn't very good with temporary signs at roadworks; on one occasion it managed to read a sign as 120. The GPS only knows the "normal" speed limit, assuming it hasn't changed since the last update of course. This is the UK with decent speed limit signs; I'm not sure how they ever work in the US where signs are very easily missed.
I've got lane departure warning/avoidance (i.e. a warning and it will steer back into the lane) but it will quickly detect if you're not putting steering input in yourself. Overall it's useful IMHO but a long way from self driving, and makes no claims beyond being an aid.
The adaptive cruise requires me to set the speed; again it's just an aid, and requires some knowledge of how to use it. For example, on a motorway in slowing traffic, if the car in front pulls out to the next lane you could suddenly speed up and go inside it.
This is where the behaviour of the systems and the names they are sold under come into play.
There is "Lane assist" and "lane keeping". Sounds like you have the former, which gently nudges the steering to say you are close to, and trending towards the lane boundary. This is not intended to keep you on course - but the names can be confusing to anyone who does not have relevant automotive domain experience.
Then you have ACC and Stop & Go ACC - the former only works above a minimum speed (sometimes as high as 30mph), whereas the other will work to a stop and allow reactivation by touching the throttle pedal or "resume" button.
I drive a Lexus IS300h with ACC, and a Hyundai Kona Electric with Stop & Go - and have to remember which one I am in, as the behaviour is completely different (though I do not have to think about it when driving).
The Stop & Go on the Kona is really nicely implemented, especially when it's used with the "Auto recuperation" feature* active. It has a tendency to slow down sooner than I would (but that means it will feel "in control" to a wider range of drivers, some of whom would otherwise consider the braking to be "late"). Other than that, it has never made me feel like it is not in control (though I may be biased, as I was working on the development of similar systems in the early 2000s).
One thing I like with all the implementations is the extra feedback I get as a driver, as you can feel the car in front slowing down even if they do not brake - well before you would otherwise notice it visually.
* I've not found anything to explain exactly how this works, but it seems to use the sensors to work out when you (as a driver) would need to slow down, and will then automatically bring in an appropriate amount of deceleration by means of energy recovery if you are coasting up to a junction / obstacle, even when the Stop & Go is not engaged.
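If that guess is right, the deceleration level could come from simple kinematics: given the sensed gap and the closing speed, the deceleration needed to match speeds is v²/(2d), capped at what energy recovery alone can deliver. A hedged sketch, in which the function name, the cap, and the whole approach are my assumptions rather than Hyundai's actual design:

```python
# Hypothetical "auto recuperation" deceleration picker.
# needed = v^2 / (2 * d) is the constant deceleration that brings the
# closing speed to zero exactly at the obstacle; regen alone can only
# supply a limited deceleration, so cap it (2.0 m/s^2 is a guess).

def regen_decel(closing_speed_mps: float, gap_m: float,
                max_regen_mps2: float = 2.0) -> float:
    """Deceleration (m/s^2) to request from energy recovery."""
    if closing_speed_mps <= 0 or gap_m <= 0:
        return 0.0                        # not closing, or no valid gap
    needed = closing_speed_mps ** 2 / (2 * gap_m)
    return min(needed, max_regen_mps2)    # friction brakes handle the rest
```

For example, closing at 10 m/s with 100 m of gap needs only 0.5 m/s², comfortably within regen; at 10 m of gap the 5 m/s² required exceeds the cap, which would be the point where the driver (or Stop & Go) has to brake.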
"Mostly correct but sometimes a dangerous thing - dropping from 70 to 30 mph for no apparent reason"
My car doesn't link cruise control to what it thinks the speed limit is, but it *does* display what it thinks the speed limit is. It seems to work on a combination of GPS data and image detection of signs, and both have issues - the GPS data is not always correct, and for the signs it sometimes misses them and sometimes detects off-ramp (slower) signs as applying to the main highway, which as you say can be very dangerous. Much better to have a 'normal' cruise control that keeps to whatever speed the driver sets, and additionally disengages if it detects a car too close in front / slowing down.
The worst part when your car "reads" the signs is that it often reads signs that are not relevant to your destination, like the ones showing a reduced speed for an exit lane (when you are not exiting), or even worse, reading the speed limits on the rear of a truck as actual signs...
The lane warning which snatches the wheel back when it thinks you are drifting over the white line is really annoying on anything other than a dual carriageway or larger in my experience. I used to get regular access to Hyundai Ioniq pool cars with the feature and every time I tried to avoid a pothole (most of the time these days) on a single carriageway road, it obviously thought I had fallen asleep and would take over and drive me back into the flaming pothole. After a few experiences like that I turned the "safety" feature off before moving away from the car park.
> dropping from 70 to 30 mph for no apparent reason
I believe that the speed limit detection is a mixture of "reading" speed limit signs [Captcha: Tick all pictures with a 30 mph sign] and a GPS database. If the "reading" output is below some reliability threshold, I guess the GPS takes priority, and then where the (motor|free)way takes a flyover crossing a lower speed limit, maybe it momentarily switches down? In the UK, at least, there are probably many more motorways going *under* slower roads.
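That guessed priority rule (camera when confident, otherwise GPS database) could be sketched as below. The confidence threshold and field names are assumptions for illustration, not any manufacturer's actual logic:

```python
# Hypothetical fusion of the two speed-limit sources described above.
# If the camera's sign reading is confident enough, trust it;
# otherwise fall back to the GPS map value - which, on a flyover,
# may momentarily be the limit of the slower road being crossed.

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off, purely illustrative

def fused_speed_limit(camera_limit, camera_confidence, gps_limit):
    if camera_limit is not None and camera_confidence >= CONFIDENCE_THRESHOLD:
        return camera_limit
    return gps_limit
```

On this model, a badly lit or partially occluded sign drops the camera below the threshold, and a map tile that maps the flyover's coordinates onto the 30 mph road beneath would produce exactly the 70-to-30 drop described.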
> If a car is in “self drive” mode it should display a flashing light.
https://www.edmunds.com/car-news/first-drive-mercedes-drive-pilot-eqs.html
Mercedes, along with other automakers, is working on a system of turquoise lighting outside the vehicle that can signal it's driving itself.
If the car has logging (most do these days, to some extent), then the only causes of data loss I can see are:
1) The media being destroyed by fire or impact (the loggers are not designed to survive crashes, unlike on aircraft)
2) The last block of data not being committed due to power loss - though it would be easy enough to mitigate against this.
3) If you want to stream it back to a server, then loss-of-signal will be a real issue*
* "active services" are often unavailable where I live due to lack of mobile data.
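Point 2 in the list above really is straightforward to mitigate: flush and sync after every record, so each entry is durable on the media before the logger moves on. A minimal sketch, assuming a simple append-only log file (the newline framing is just for illustration):

```python
# Mitigating loss of the last block on power cut: force each record
# down to the storage medium before returning.

import os

def append_log_record(path: str, record: bytes) -> None:
    with open(path, "ab") as f:
        f.write(record + b"\n")
        f.flush()                 # push from Python's buffers to the OS
        os.fsync(f.fileno())      # ask the OS to commit to the media
```

The trade-off is write amplification and flash wear, which is presumably why some loggers batch records instead and accept losing the final block.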
For cases like this we need a mandatory data logging standard for vehicles with some level of self-driving/autonomy, where data is stored in a known format and is readable by road safety investigators/police. Preferably these data loggers would also be crash- and fire-proofed to some level to preserve data. It is absolutely ridiculous that the data to investigate whether a company might be at fault has to come from people employed by that same company. The conflict of interest is just too great.
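To make the "known format" idea concrete, a standardised record might be a fixed binary layout with a checksum, so investigators can parse it with public tooling and detect corruption from crash or fire damage. The field selection and layout below are purely illustrative, not any real standard:

```python
# Hedged sketch of an investigator-readable log record: fixed
# little-endian layout plus a CRC32 so corruption is detectable.
# Fields (all assumptions): timestamp, autonomy level, speed, steering.

import struct
import zlib

RECORD_FMT = "<dBff"  # float64 ts, uint8 autonomy, float32 speed, float32 steering

def pack_record(ts: float, autonomy: int, speed: float, steering: float) -> bytes:
    body = struct.pack(RECORD_FMT, ts, autonomy, speed, steering)
    return body + struct.pack("<I", zlib.crc32(body))

def unpack_record(raw: bytes):
    body, (crc,) = raw[:-4], struct.unpack("<I", raw[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("corrupt record")
    return struct.unpack(RECORD_FMT, body)
```

The key property is that nothing here depends on the manufacturer: anyone with the published layout can decode the log, which is precisely what removes the conflict of interest.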
What lost this case for the plaintiffs is obviously that the driver was impaired. California, like other jurisdictions, has rather strict DUI laws, and even if the driver was relying on Autopilot this wouldn't have excused him had he survived the accident. As it is, the balance of probabilities would suggest that Autopilot wasn't on or had been disabled by the behavior of the driver, so blaming Tesla for the driver's behavior just because it's Tesla isn't going to work.
It doesn't matter what you think of Tesla, Musk, Autopilot or anything like that. The simple message is that you just don't drive while impaired.
"When it comes to vehicles with driver assist technology, Tesla is the leader in fatal accidents by far, accounting for more than 70 percent of the automated driver assist technology accidents logged by the NHTSA."
Do you really mean that? There's so much wrong with these two sentences that I don't really know where to start.
1) Driver assist technology --> The car "supports" or rather "assists" the driver. The driver is still in control and has the responsibility. So the car can't be blamed when the driver does not pay attention.
2) Tesla is the only car manufacturer that knows when and for how long its systems were activated. Hence Tesla is the only one that can provide information as to whether assist technology was running when the accident happened or shortly prior to it. Since other manufacturers don't know, they can't report any of this to the NHTSA. But the framing done here is obvious.