Attention, everybody! Attention! I have an announcement to make, one that it seems needs announcing far more often than I’d like. It’s about automated driving tech and, more specifically, the terminology we use when talking about such tech. I’ll be the first to admit that the terminology can be confusing, and this confusion has the potential to really mislead people about how automated driving-assist systems actually work, which can lead to people misunderstanding the capabilities of such systems, which can then lead to actual safety concerns. So let’s take a moment to address a recently popular bit of nonsense: what some carmakers are calling “Level 2++.”
We’ve been seeing more of this recently because Mercedes-Benz just announced they’re dropping their Level 3 automated driving system (which is itself a whole disgusting can of confusing worms) and instead focusing on their MB.DRIVE ASSIST PRO, which they have been referring to as a “Level 2++” system.
It’s not even subtle; here, look at this video about the system they released:
It’s possible that you, a healthy, well-adjusted person with a rich, full social life, have never really bothered to think about these automated driving levels, and you may not realize why I feel this is so dangerous and stupid. If that’s the case, mazel tov on your fulfilling life, and allow me to explain a bit.
The problem with saying an automated driving system is Level 2++ (or even Level 2+, a term that automated driving tech company Mobileye uses) is that these labels try to turn the Society of Automotive Engineers (SAE) driving-automation classification into something it fundamentally is not. Remember, the Levels of Driving Automation are not a measure of how advanced or how capable a given automated driving system is: the levels only indicate the division of responsibility between the human and the machine. That’s it.
Do I need to pull out the chart again? Fine, here’s the chart:
All the levels refer to is the division of labor between you and your car. Level 0 is all you, human. Level 1 means the car assists with either steering or speed, but not both. Level 2 means the car is handling both steering and speed, but you must be monitoring it nonstop and ready to take over at any moment. Level 3 is a real mess, because there’s really no one in charge: sometimes the car is in total control and you don’t need to pay attention, until you do, and then you’re in control. As far as I can tell, no company has described exactly how to make those handoffs work well.
Level 4 means the car is in charge completely, at least within certain boundaries; robotaxi companies like Waymo are at this level. And Level 5 is pure magic, a car that just drives on its own everywhere, all the time.
That’s it! That’s all the levels mean! They have nothing to do with how advanced a driving-assist system is or what sorts of capabilities it has. If it cannot be absolutely trusted on its own and it needs a human driver always watching, that’s Level 2, whether it’s Tesla’s FSD, GM’s Super Cruise, or Mercedes-Benz’s MB.DRIVE ASSIST. Those are all Level 2 because they all leave the human driver in ultimate control of everything and on the hook if anything goes wrong.
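Since each level answers exactly one question (who is responsible?), the whole chart fits in a few lines of code. Here’s a minimal sketch in Python; the descriptions are my own paraphrase of the SAE J3016 summary, not official wording:

```python
# The SAE levels as a simple lookup table. Descriptions are
# paraphrased for illustration; they are not official SAE J3016 text.
SAE_LEVELS = {
    0: "No automation: the human does everything.",
    1: "The car assists with steering OR speed; the human drives.",
    2: "The car steers and manages speed; the human must watch constantly.",
    3: "The car drives within limits; the human must take over on request.",
    4: "The car drives itself within a defined domain; no human fallback.",
    5: "The car drives itself everywhere, all the time.",
}

def human_must_supervise(level: int) -> bool:
    """The only thing the levels actually encode: who is on the hook."""
    if level not in SAE_LEVELS:
        # "Level 2++" is not a key; it simply does not exist.
        raise ValueError(f"No such SAE level: {level!r}")
    return level <= 2

print(human_must_supervise(2))  # True: you are still the driver
```

The point of the sketch is that there is no slot between 2 and 3 for a “2++” to occupy; any system that requires constant supervision maps to 2, full stop.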
That means, yes, the people in this Tesla from a video that’s been making the rounds, who are sleeping while the car drives itself using FSD, are being idiots, and should anything go wrong, it will be 100% their fault, because FSD is still a Level 2 system that requires constant human oversight, period.
Adding one or two pluses after “Level 2” is simply nonsensical. Level 2 isn’t a level of driving advancement or skill; it just means a human has to be watching all the time. Logically, you’d think a Level 2++ system would then need even more human focus and attention, because, again, Level 2 means a person must be watching, so slapping a ++ onto the name would mean you’d better be really flapping watching.
Terms like Level 2++ just confuse consumers, who are already confused. Mercedes is using the term because they believe their system is so advanced, it needs to have a different label than other L2 systems, but they couldn’t be more wrong. They’re conflating two different things: the supervision category of the SAE Levels and some general idea of technological sophistication. But you just can’t do that.
That’d be like saying you have a six-year-old kid who you can’t leave at home alone without a babysitter, but they’re a really smart kid who can read and make awesome things out of Lego, so you call them your Kid++. Does this mean that because your six-year-old Kid++ is good at lots of things and can heat up their own Hot Pocket, you can just leave them on their own for a weekend? Hell no. But this is exactly what Mercedes-Benz is doing with that Level 2++ name. It doesn’t matter what the system is capable of: it’s still Level 2, and you still have to babysit it.
It’s just stupid marketing crap, but in this case it’s stupid marketing crap that actively confuses drivers and obfuscates what these systems’ capabilities actually are. People already wildly overestimate what their automated driver-assist systems can do; widespread use of these terms, and media outlets just accepting them, only makes everything worse. The SAE level system is already confusing enough to people, so why do this?
There is no such thing as Level 2+ or Level 2++. Or, for that matter, Level 2+++ or Level 3+ or Level 4- or any other marketing idiocy. Levels do not indicate what the systems do; only how they work with you, the human. Period.
Top graphic image: Mercedes Benz/YouTube

You know, I think of Mercedes as Germany’s Stellantis. Even not accounting for when they were actually merged with Chrysler. They’re the third place player behind VW and BMW with legacy brands that are losing their shine. And they do embarrassing stuff like this.
They left out the part JT put back. Level 2++ungood.
“Doubleplusungood” is a term from George Orwell’s novel Nineteen Eighty-Four, coined in Newspeak, meaning extremely bad or terrible; it uses prefixes (“doubleplus-”) to intensify the core word (“ungood”), eliminating nuanced thought and controlling language. It’s the ultimate negative descriptor.
Gosh! I’m currently driving a Level -1 car! Not at this moment. Not as I am typing. It does have an AT, so I guess there’s that it’s shifting for itself. Maybe my 5M Jetta TDI was a Level -1. I’m so confused.
We used to have a 2018 (apparently) Level 2 MDX that was not very good at it, so I never trusted it to do any of it. (It was pretty good at a lot of other things.) Even the automatic low/high beam headlights were pretty bad at times. So, when I drove it, I shut all that stuff off.
I’ve got the Farts++.
(That’s marketing speak for sharty diarrhea.)
Goddamn, Stef …
This is all too much for me. I’m still trying to get my head around how an EV can be a Turbo.
The next version will probably be labeled L3-lite.
Doubt it. First we need to get through Level 2.5, 2.5+, 2.5.1, 2.5.1 PRO and so on.
Just let me know when we get to whatever level it is that the car can be driven by Muppets. Initially I wanted a Fozzie but I think I’d prefer a Gonzo.
I want Statler and Waldorf up front heaping scorn on the other cars while I crack up in back.
2++? I’m guessing it’s like gasoline at $2.99 a gallon as opposed to $3 a gallon?
Good old capitalism. You can have the innovation, but you have to accept a bit of trying to kill you.
Level 2++? Why not? I also believe prices can be lowered by 600%. I can’t wait to get paid for taking a new car. Wondrous times we live in.
Hey we’ve seen negative interest rates and oil futures going to -$37/barrel so why not cars?
I’m a Level 0– human.
That’s still too high for me to be the sort of marketing dick who thinks this shit is ok.
Marketing. I hate marketing, always telling you something that polishes the turd.
Pretty sure it was Asimov who said: “Marketing, the last refuge of the incompetent.”
I like Torch’s definition of the pluses.
So, Level 0+ is no automated driver warnings at all.
Level 0++ requires a manual transmission and no ABS.
Level 0+++ gets into things not seen since the early ’30s like manual spark advance.
The bigger interest to me is where the liability shifts from you to the company. My argument would be at Level 3: when it is enabled, anything not related to deferred maintenance should be on the company. If, per the SAE definitions, you are not the driver, you should not be liable for anything that occurs while it is driving, until it comes to a stop and tells you to take over. Again, deferred maintenance causing issues aside.
Yes! Also any definition should ban delaying handoff until the self driving software detects a crash is imminent to avoid liability (looking at you Tesla).
I thought I read somewhere in the finer details on L3 that it required the car to stop at the side of the road to do a handoff.
You may have read that here as a suggestion from Torch. Pretty sure that is not in the actual standards. I was trying to download the most current standard, but I think my workplace is blocking the download, so I can’t confirm.
Looks like the vehicle is required to come to a controlled stop in its path if the driver does not take the handoff, but there is no requirement to get to the side of the road or come to a stop before handing off control. A minimal risk condition may be added at L3, but it does not appear to be required until L4.
(DDT: Dynamic Driving Task)
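That fallback sequence (request a takeover, then come to a controlled stop in path if the driver never responds) can be sketched as a tiny state machine. Note the timeout value here is a made-up placeholder; J3016 does not appear to pin down a number, which is arguably the whole problem:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()           # the system is performing the DDT
    TAKEOVER_REQUESTED = auto()  # system has asked the human to resume
    HUMAN_DRIVING = auto()       # the human has taken over
    STOPPED_IN_PATH = auto()     # controlled stop; fallback of last resort

# Placeholder only: the standard does not specify a takeover time.
TAKEOVER_TIMEOUT_S = 10.0

def step(mode: Mode, driver_responded: bool, elapsed_s: float) -> Mode:
    """One tick of a hypothetical L3 fallback sequence."""
    if mode is Mode.TAKEOVER_REQUESTED:
        if driver_responded:
            return Mode.HUMAN_DRIVING
        if elapsed_s >= TAKEOVER_TIMEOUT_S:
            return Mode.STOPPED_IN_PATH  # in path, not necessarily roadside
    return mode

print(step(Mode.TAKEOVER_REQUESTED, False, 12.0))  # Mode.STOPPED_IN_PATH
```

Everything interesting hides in that one constant: make it 2 seconds and several current systems could arguably comply; make it 15 and the car is effectively driving itself.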
That’s a pretty big oversight of L3 then. If you’re “not driving” then you should at least be given warning before the takeover happens. Not some of these Tesla-like “oh whoops here you go, you’re in charge now” instant handoffs.
Think of it like you are a WR and the play called has an option. Neither you nor the QB knows if you are getting the ball until the last second, and there is a huge DL bearing down on both of you.
I read that as: L3 tries to do the handoff to the driver, but only if the driver doesn’t accept in a timely manner does it come to a stop… What “timely” actually means, well, I’d hope at least 10 or 15 seconds at a minimum, but it’s probably undefined intentionally.
You have correctly identified the major flaw with the whole L3 category of the SAE ratings.
That would be completely reasonable. A live handoff with zero warning just seems like a recipe for disaster and I was being a bit tongue-in-cheek on singling-out Tesla but the truth is I really wouldn’t put it past any automaker.
But, also, how does that work in practice? How often could we expect the system to recognize it will not be able to handle something 15 seconds down the road?
Also curious how these systems respond to something like a deer running into the road. I suppose it’s possible it could actually respond faster (and closer to the limits of traction/stability) than a human. And that you’d need a change of pants if it did.
That’s already been discovered in a Tesla – it wipes out the deer and continues driving without stopping even after the slaughter.
https://x.com/TheSeekerOf42/status/1850747727224987865
The most glaring example I recall was the “test” Mark Rober did where the Tesla would run in FSD then deactivate when the impact was imminent, apparently so it wouldn’t show as active at the time of the crash.
I put test in quotes because it wasn’t exactly scientific and was sponsored by a LIDAR manufacturer so it may not have been 100% impartial, but it was informative.
I’ve been assuming this was the case since they started claiming some highly improbable statistics regarding crashes with FSD enabled. It fits my preconceived notion of Tesla so it must be true. 😉
I really tried to be as even-handed as possible in how I worded that. The avoiding being blamed for self driving crashes may not be the intent, but it has the same effect and I wouldn’t put it past any company without real oversight.
Very much the way businesses never count deaths on their location, because time of death is declared at the hospital.
A very good example.
The best way I can explain what I’d consider an acceptable state would be to give two examples of fairly common driving conditions.
First, if the car is self-driving in deteriorating road conditions (heavy rain, fog, snow, etc.) to the point where either the sensors are no longer effective or the road becomes borderline impassable, it would be perfectly reasonable to expect the system to give warning of a handoff (your 15 seconds is a good example) before activating the hazard lights and stopping, preferably on the roadside.
Second, to truly be self-driving, everyone needs to accept that sometimes s#it happens. In your example of a deer running into the road: sometimes uncontrollable, unpredictable outside factors and the laws of physics dictate that a conflict will occur. This is where, in my opinion, self-driving systems need to take it on the chin and remain active to try to mitigate the severity of the accident (threshold braking, maximum avoidance allowed by grip, minimizing the angle of impact to make it as oblique as possible, etc.). A true Level 3 (or higher) system with a suite of sensors and constant data from an advanced vehicle stability system should be better equipped to assess the situation and determine the least bad option than a human who wasn’t driving a fraction of a second before. In this case, liability (or lack thereof) could be pretty easily demonstrated through the vehicle data and the point at which the situation was flagged as unavoidable. If you have comprehensive coverage, your insurance already covers “act of God” incidents, and if the condition were caused by another driver, the fault would be pretty easily demonstrated by the data.
I’ll admit these are pretty specific examples; I was trying to answer your question while giving plausible real-world examples to illustrate the point I was trying to make.
Thanks for that. The slowly deteriorating road conditions example is a great one I hadn’t thought of. And yeah, the computer ought to be able to handle an emergency better than a human. It takes our brains a lot of time to recognize a threat and then react. Plus, without specialized training and muscle memory, most of us are not going to have the optimal reaction in the half second or so we have to decide what to do, versus a computer that can process the situation in milliseconds and execute a plan based on logic and math rather than panic. Honestly, if the system could be refined enough, I’d rather have the inverse, where I drive all the time except in the rare emergency where it detects I’m unexpectedly within 2 seconds of a crash. Even if it’s not perfect, it will handle it at least as well as I would, and likely better.
Yeah, for all the faults with existing self driving (and there are many) there definitely is potential. As you said, even lightning fast human reaction is downright glacial compared to the capability of a processor.
Are you familiar with saccadic blanking?
Humans have software issues most people are unaware of.
Recently I was on a wide divided highway, bright sun, and a pack of deer crossed from the left side at an angle after I was past where they would be easily visible.
So I saw them very last second, but one was moving and fast enough to get in front of my truck, and I’m going 45-60 mph.
I was startled but managed to slow enough to let the fast one by.
I wasn’t even sure what they were, but they were dark brown to black deer, the whole pack.
Sensors covering a wide area to the side would have been helpful, but I don’t know how you would filter that.
A live handoff with zero warning seems to be the same as the standard for Level 2, so this makes me wonder if there is even such a thing as Level 3…?
It seems like an unholy estuary between human control and AI control that will probably be a blend of the worst of both and a liability nightmare.
So what you’re saying is, I can expect L3 to be offered on every new car for the next 15 years…
Unfortunately, you probably aren’t far off…
That is diabolical.
I don’t easily see liability shifting to the manufacturer — not because it shouldn’t shift to the manufacturer, but because they won’t allow that to happen, by method of controlling the governing laws. If liability shifted to the manufacturer, they would be required to carry auto insurance on your car. Not going to happen.
Much related issue: does anyone know how this is dealt with, with Waymos where they operate? If a Waymo hits you, are they at fault and their (assumed) insurance pays? Or are you screwed if you live in one of those cities because your local legislature voted them unqualified indemnity?
I am at least hopeful that enough lawsuits will get through that it will happen or people will reject it.
Yeah I agree. Even if the OEMs succeed, all it will take is one or two prominent stories about someone going bankrupt after their self driving car slammed into a school bus. No one’s going to be cool relying on these systems after that.
How about the passenger of the Waymo who gets injured?
Yes, the personal injury aspect is part of this liability question along with the vehicle damage and surroundings damage aspects.
Insurance companies and manufacturers don’t accept responsibility now.
At all.
So then if you’re in an MVA involving a Waymo, you and your insurance carry 100% of the liability, regardless of circumstances? That’s nuts, and also what I was afraid of.
That’s been my experience lately, regardless of amount involved.
When other people are clearly at fault, their insurance blatantly lies, daring me to confront them.
I think I’m going to report every instance to the insurance regulators and spread around the documents of people caught lying, terming it an accident while uninsured.
They like their criminal customers so much, I’ll ensure they are stuck with them.
When their insurance refuses to cover them, they are accurately not insured.
That is also not a legitimate insurance company. So: uninsured.
Laws requiring me to overpay for insurance that does nothing are simple racketeering.
Not the driver, but you are the owner, so still liable.
To me there are only two levels: I drive and it drives while I ignore it. Robotaxis aside, I’m pretty sure only one level exists right now.
Nah.. there’s one additional level… you officially are driving, but have the cruise control on and you’re checking your phone.
And that’s the driving-not-driving level.
That’s Big Taltima (Rogue) Energy level.
I’m not good enough to check my phone while driving.
It’s fiiiine… just steer with your elbows!
Knees, the proper method is knees.
But then you won’t be able to heel-and-toe…
I have a police report stating that someone that hit me, off the road, at 60 mph making a left turn was not looking through the windshield!
I was parked off the street.
Alternative interpretation: they think they built Level 3 but don’t want to accept liability, so they call it “Level 2” while still wanting to charge based on 3, so…
I vote the first judges to see this allow MB to be sued into oblivion because they clearly marketed it as “more than level 2”, and neither smoke nor fig foliage tips the scale.
I’m waiting for Level 5+, can drive anywhere anytime and also comes with a red scanner, turboboost, and a snarky attitude.
Michael, your choice of music is regrettable and after forcing me to endure 2 Fast 2 Furious, I am thinking of giving you the ejecto seato, cuz.
Oh yeah? Well I’m gonna get Level 5 PRO+ when it comes out… just so I can out-snark you!
It refuses to go places it does not want to go. It gets bored waiting for you when you are in the store and goes home, leaving you stranded. It takes you out into the country and throws a beer into a field. When you go to retrieve it, it takes off on you.
I kinda understand the need for such a term (though I still think what Merc did is wrong), because of the flaws in the definition of L3 driving. Nobody wants to officially call their basically-L3 ADAS system L3 right now, because no governing body has set the standard for the minimum handoff time from autonomous to human driving. If it’s 2 seconds, several current systems could qualify, but if it’s a minimum of 10 seconds, that might as well be L4. Thus you still need to call it L2, but how do you differentiate that from a base-model Corolla or Slate Truck, which is also classified as L2 with its adaptive cruise control?
It’s simple – just ignore the levels entirely and call your system “full self driving” to get rid of any confusion. Hope that helps.
I think anything under L3 (or maybe even L4) should just clearly define all individual features. You don’t have L2, you have adaptive cruise, lane keep assist, lane change assist, and navigation-aided steering assist. Or whatever else.
But I also don’t believe any system that requires the human to take over should be classed as any sort of self-driving. If you think the car is doing everything, you will likely not be paying enough attention to take over. Jason has mentioned requiring vehicles to be able to safely come to a stop on the shoulder or something, and I support that. It could give the driver a warning that it plans to do so and the driver could decide whether to take control.
I generally agree. In China where these systems are getting quite common, marketing has settled on calling the navigation-aided steering assist feature some variation of [highway/city] NOA (Navigation-On-Autopilot) or NGP (Navigation Guided Pilot); I don’t like that they use the word Autopilot in the acronym, but they emphasize the L2 part and they use phrases in Chinese meaning something like ‘assisted driving’, and NOA is just an acronym from a foreign language.
With China issuing its first L4 test licenses to automakers (not robotaxi operators which have already been testing for a while) in the past few weeks, it seems like they’re intending to skip the ambiguities of L3 altogether and go straight to L4.
I believe during the auton crackdown last year, the Chinese regulators required that when inattentive drivers were detected, the car should pull over to the shoulder after several warnings, and this feature is live in L2 systems today.
Yeah but the Chinese government doesn’t really care if they kill off a million people or more.
Autonomy: “the quality or state of being independent, free, and self-directing.” I’d say there’s nothing below Level 3 that should be called autonomous. If you have to monitor it, it’s not independent. It’s your kid at college who still needs you to assist with adulting.
China banned this kind of misleading marketing in April last year after a prominent fatal crash of a Xiaomi SU7 in its highway semi-auton mode (it was the base trim level with the least advanced capabilities). Now automakers must emphasize that the driver is still responsible in L2 with terms like ‘assisted driving’. The misleading marketing wasn’t necessarily the cause of the crash, but it was a good time to crack down on it anyways given that the country’s biggest auto show was the week after.
This reminds me of Intel when they were having problems scaling their node down per their tick-tock cadence. Well, we failed to move to a smaller nm scale die, but we improved performance somewhat by throwing more power at it. Smoke and mirrors.
14nm++++
14nm+-
My favorite was when they crammed 2 P4s onto a single chip and called it Pentium D. My boss at the time did not consult with me before buying a whole office worth of them in Dell’s smallest chassis. 130 watts of nothingness which blew right at the hard drives and roasted them all.
Well if it has more power that’s a trade off right?
Ralphie having a vision of his teacher at the chalkboard writing LEVEL 2 + + + + + + + + + + and everybody cheering
So good…
Subaru like “You’ll shoot your EyeSight™ out, kid!”
In the heat of battle with his autonomous driving system 2++++++, my father wove a tapestry of obscenity that, as far as we know, is still hanging in space over Detroit, Michigan.
Some people are Baptists, others Catholics. My father was an anything-but-Stellantis man.
Does anybody else have a sudden craving for a Hot Pocket?
Does anybody else suddenly want to make a wooden coat hook?
But is it a semi-autonomous Hot Pocket that will automatically dispense lava onto your tongue under the correct conditions, or am I the one ultimately in charge of that?
Hot Pockets: Every Bite A Different Temperature!
You have to collaborate with your microwave to get the dispensable lava.
I’d rather have a Pizza Pocket.
I prefer the pizza rolls; the size allows more even heating. Although French bread pizza is good for the mouth blisters. Ask me what I had for lunch.
Of course you would Canadian.
What about Bagel Bites? Because when pizza’s on a bagel, you can have pizza anytime.
(Too obscure?)
“Why don’t you just make 2 even more autonomous?”
“But….but these go to 2++“
when you need that extra push over the cliff
Outstanding. This story is turning into a COTD breeding ground.
I understand the expedience behind categorizing the levels with simple numbers, but they should have more qualitative names.
Level 1 = Cruise control
Level 2 = It helps, but your insurance pays for mistakes
Level 3 = It pretends to be driving, but your insurance pays for mistakes
Level 4 = It absolutely does everything, except pay for mistakes, that’s your insurance
Level 5 = The car maker’s insurance pays for mistakes
Nice, I think that’s what also gets lost in this mess, who is liable when something goes wrong. Basically it’s the driver until Level 5.
Oh, I think it will still be the driver at level 5 – your car, your choice to take it out for a drive…
“Basically it’s the driver until Level 5.”
And then your claim gets denied because you used it in unapproved conditions… like a snow storm… or in a Tornado.
At first I read that as “…or in Toronto.”
No Tornados in Toronto. But we do have some Toronados and Tomatos in Toronto. And you might even find some Tortulaceae.
Maybe a waterspout.
Or on the road in public
This could work out ok. Insurance doesn’t want to have to pay out so they would pressure the OEMs to do better. They’d create their own ratings to influence consumer behavior, similar to their crash testing. OEMs will listen to them more than to us so I’d say our interests are in alignment.
Almost. If “Level 4” is what a Robotaxi is today (and doesn’t even require controls in the car), then the automaker’s insurance is paying at L4. If I understand the distinction between L4 and L5, it’s basically geofencing and weather restriction. An L4 Waymo Robotaxi can drive itself around San Francisco or Phoenix, but you can’t pick one up and drop it on Rt 3 in rural Maine in a snowstorm and expect a good result. You should be able to tell an L5 car to go anywhere in any weather a human can handle and expect to get there unscathed.
I have no interest in any of this until I can buy a car that can do L4 on Interstate highways in decentish weather. If the weather is too bad, fine, I will drive and take responsibility. Or more likely, stay home.
I would argue even L3 when it is enabled would be automaker liability, personally. If you are considered “not driving” then how are you liable?
In L3 you still have to be ready to take over. The unanswered question seems to be “how quickly?” Less quickly than L2, evidently, but there is no legal definition. Ultimately, that is the bottom line that legislatures need to figure out: at what point is the “driver” no longer liable, and whoever programmed the thing is? Nobody knows at this point.
I don’t see how you solve this. The biggest “oh shit” moments come suddenly. Like someone merging into you or an unsecured load coming loose in front of you. What happens if there’s 3 seconds between the time the hazard appears until impact? Presumably for this whole regime to work the car will need to be able to handle that. And that sounds like the type of scenario where the car would give up and hand over control to you. And, if it CAN manage those situations, well it seems like it should be able to handle every other task and challenge just fine.
I think for L3, the car is going to HAVE to stop before the meatbag can be required, or even allowed, to take over. And don’t forget, L3 cars don’t even need to have manual controls at all.
I mean, I always thought the endgame was third party robotaxi operators.
That’s the theory.
This is a grim future if humans play wordle or nap 95% of the time rather than drive, but are expected to take over in the 5% of times conditions are too severe for the computer. So basically we’ll all be out of practice (or for future generations never have learned at all) other than for the most challenging conditions. No thanks.
After a few pints, a friend gets home with a combination of his ‘self-driving’ Tesla and ‘driver’ teenage son. Exactly the situation you describe.
It’s the Airbus FBW problem writ large. With WAAY less redundancy to keep it from happening.
In fly-by-wire Airbus aircraft, normally the pilots are never directly flying the airplane. When they are not completely on autopilot (already the majority of the flight) they are moving the sidestick, which tells the computer what they want to do, and the computer makes it happen as efficiently and safely as possible. No feedback, and no true linear relationship between what the stick does and what the plane does; the computer smooths everything out and keeps you from doing something stupid (theoretically). But in extremis, when the computers have enough issues, the system throws its little electronic hands in the air and says “Jesus take the stick,” and that sidestick suddenly directly and linearly controls the control surfaces with no intervention and no safeguards if things are bad enough (obviously, there are levels of degradation before it gets to that). They practice this a few times a year in the simulator, of course, but when the shit is hitting the fan to that extent, do you REALLY want the airplane suddenly responding rather differently to your inputs than it does 99.9% of the time? This is one of several areas of engineering where I prefer Boeing’s pilot-first approach in their FBW aircraft. Boeing’s approach is basically the other way around: the pilot is always flying the airplane directly, but if he does something stupid, the computer can intervene. So if the computers fail, the airplane still flies basically the same, it just lacks the guardrails.
There is very legitimate concern in the industry that Airbus pilots in particular too often lack good “stick and rudder” basic flying skills, especially in other countries where you can be flying an A320 with as little as 100 hours of total flight time. Literally flight school right into the right seat of an airliner (it’s 1,500 hours in the US, arguably too far in the other direction). And there have been a decent number of crashes where the pilots didn’t know what “mode” the airplane was in, or didn’t understand the implications of that on those safeguards. Or the most egregious one of all, Air France 447, where the junior F/O just held the stick all the way back and stalled a perfectly good airplane into the ocean (the cause of the computers giving up had passed), while the other pilot could not see or feel what he was doing and tell him to cut the shit. The Captain got back in the cockpit (he was on his rest break) and saw what was going on pretty quickly, but they ran out of altitude before the airplane recovered from the stall. Literally <30 seconds sooner and they would have been fine… For someone who flies all the time, I probably have too much morbid fascination with airplane crashes.
This sort of thing coming to a car near me is vaguely terrifying. But I have to think that L3 and up cars aren’t just going to say “Jesus take the wheel!” like L2 does. I hope that the requirement is that the thing pulls over and STOPS first, THEN potentially allows manual control from there. But you can’t do that in an airplane.
Except the car maker’s insurance refuses to cover mistakes, and then so does your insurance, so you are OOP.