
Internal Report Shows Cruise Didn’t Think Its Robotaxi Dragging A Pedestrian Was A Big Enough Deal To Fix The Cars


In October of last year, GM’s autonomous car division, Cruise, had its self-driving license revoked by the state of California due to an incident that happened on October 2, 2023. In the incident, a pedestrian was hit near the intersection of 5th and Market streets in San Francisco. The pedestrian was initially hit by a human-driven Nissan, which launched the person into the path of an autonomously-piloted Cruise Chevy Bolt, which did make an attempt at an emergency stop. Unfortunately, the person was trapped underneath the Cruise AV, which then attempted a “pullover maneuver,” dragging the pedestrian under the car for about 20 feet and making their injuries significantly worse. Today, GM released a report prepared by Quinn Emanuel Trial Lawyers, titled “Report to the Boards of Directors of Cruise LLC, GM Cruise Holdings LLC, and General Motors Holdings LLC Regarding the October 2, 2023 Accident in San Francisco,” and while much of the report is about Cruise’s response to the incident and its subsequent hiding of crucial information from the media, it also reveals information that highlights some of the issues and reasons why this sort of disaster happened at all.

We covered the cover-up and the media/regulatory handling of the incident earlier today; what I’d like to do now is talk about the parts of the report that seem to confirm some speculation I made a number of months ago about the fundamental, big-picture causes of why the accident occurred, because I think it’s important for the automated vehicle industry at large.


Cruise has had a permit to operate AV robotaxis in California without human safety drivers since 2021, and as of 2022 it had a fleet of 100 robotaxis in San Francisco, expanding to 300 when it got approval for nighttime operation. The robotaxis have had incidents before, but none as serious as the October 2 event with the pedestrian. The person lived, by the way, just so you’re not wondering. So that’s good, at least.

The report describes what happened in much more detail than had been previously known, and it’s some pretty grim stuff. The timeline breaks down like this:

On October 2 at 9:29 pm, the initial impact between the Nissan and the pedestrian happens. Within seconds of this, the pedestrian hits the hood of the Cruise AV, then falls to the ground. The Cruise AV, noting that an impact has happened, undertakes a “pullover maneuver” to get off the road. Normally that’s a good idea, but not this time, since the pedestrian, trapped under the car, was dragged about 20 feet.


At 9:32, the Cruise AV “transmits a medium-resolution 14-second video (“Offload 2”) of collision but not the pullover maneuver and pedestrian dragging.” At 10:17:

“Cruise contractors arrive at the Accident scene. One contractor takes over 100 photos and videos. He notices the pedestrian’s blood and skin patches on the ground, showing that the Cruise AV moved from the initial point-of-impact to its final stopping place.”

That all sounds pretty bad, with the blood and skin patches, of course. Pulling off of the active traffic lane is generally a good plan, but not if you’re going to be dragging a person, something that any human driver who had just smacked into a pedestrian would be aware of.

The report covers a lot of details that were not previously known. For example, the normal distance for the pullover maneuver seems to be 100 feet; only 20 feet were covered here because of this:

“The AV is programmed to move as much as 100 feet but did not do so here because the AV detected an imbalance among its wheels, which then caused the system to shut down. Specifically, a diagnostic indicated there was a failed wheel speed sensor. This was triggered because the left rear wheel was spinning on top of the pedestrian’s leg. This wheel spun at a different speed than the others and triggered the diagnostic, which stopped the car long before it was programmed to stop when engaged in its search for an acceptable pullover location.”

So, the robotaxi had some idea that things weren’t right because of an imbalance among its wheel speeds, but the reason wasn’t some technical glitch; it was that the wheel was spinning on the person’s leg.
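To make that concrete, here’s a minimal sketch, in Python, of the kind of wheel-speed plausibility check and distance limit the report describes. The function names, thresholds, and units are my assumptions for illustration, not anything taken from Cruise’s actual code:

```python
def wheel_speed_fault(wheel_speeds_mps, tolerance_mps=0.5):
    """Return True if any wheel's speed disagrees with the median by more than tolerance."""
    median = sorted(wheel_speeds_mps)[len(wheel_speeds_mps) // 2]
    return any(abs(v - median) > tolerance_mps for v in wheel_speeds_mps)

def pullover_step(wheel_speeds_mps, distance_traveled_ft, max_pullover_ft=100.0):
    # The report says the AV is programmed to move as much as 100 feet looking for a
    # stopping spot, but a wheel-speed-sensor diagnostic ended the maneuver much earlier.
    if wheel_speed_fault(wheel_speeds_mps):
        return "STOP: wheel speed sensor diagnostic"
    if distance_traveled_ft >= max_pullover_ft:
        return "STOP: pullover distance limit reached"
    return "CONTINUE"

# Hypothetical numbers: one wheel turning against the pedestrian's leg produces exactly
# this kind of disagreement after roughly 20 feet of travel.
print(pullover_step([4.1, 4.0, 4.1, 1.2], distance_traveled_ft=20.0))
```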


This grisly anomaly was noted another time in the report, in a section that confirmed that at least the legs of the person were visible to the AV’s lidar systems:

“In the time immediately prior to impact, the pedestrian was substantially occluded from view of the lidar sensors, which facilitate object detection and tracking for the collision detection system. Only the pedestrian’s raised leg, which was bent up and out toward the adjacent lane, was in view of these lidar sensors immediately prior to collision. Due to a lack of consistent detections in this time frame, the tracking information considered by the collision detection system did not reflect the actual position of the pedestrian. Consequently, the collision detection system incorrectly identified the pedestrian as being located on the side of the AV at the time of impact instead of in front of the AV and thus determined the collision to be a side impact. After contacting the pedestrian, the AV continued decelerating for approximately 1.78 s before coming to its initial stop with its bumper position located forward of the Nissan. The AV’s left front wheel ran over the pedestrian and triggered an anti-lock braking system event approximately 0.23 s after the initial contact between the pedestrian and the AV’s front bumper.”

It’s worth noting that the AV stopped not because it was ever “aware” there was a person trapped beneath it, but because the fact of a person being trapped beneath it caused an unexpected technical fault, i.e. the wheel speed sensor diagnostic.
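To illustrate how that kind of misclassification can happen, here’s a rough sketch of a collision classifier that trusts whatever position the tracker last reported. The angles, names, and thresholds are hypothetical, not taken from the report:

```python
def classify_impact(track_bearing_deg, frontal_half_angle_deg=30.0):
    """Classify an impact by where the tracker last placed the object relative to the AV
    (0 degrees = directly ahead)."""
    bearing = abs((track_bearing_deg + 180) % 360 - 180)  # fold to the range [0, 180]
    if bearing <= frontal_half_angle_deg:
        return "frontal impact"
    if bearing >= 180 - frontal_half_angle_deg:
        return "rear impact"
    return "side impact"

# Actual pedestrian position: directly in front of the AV.
print(classify_impact(track_bearing_deg=0))   # -> "frontal impact" (ground truth)
# Stale tracked position, as the report describes: off to the side of the AV.
print(classify_impact(track_bearing_deg=85))  # -> "side impact" (what the AV concluded)
```

Garbage in, garbage out: the classifier can be perfectly consistent and still be wrong if the tracking information feeding it doesn’t reflect where the person actually is.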

It appears that cameras detected the pedestrian’s body as well:

“The pedestrian’s feet and lower legs were visible in the wide-angle left side camera view from the time of the collision between the pedestrian and the AV through to the final rest position of the AV. The ADS briefly detected the legs of the pedestrian while the pedestrian was under the vehicle, but neither the pedestrian nor the pedestrian’s legs were classified or tracked by the ADS after the AV contacted the pedestrian.”

So, the person’s legs were visible to both lidar and at least one camera on the AV, but the AV did not bother to attempt to identify just what those legs were, and even though it couldn’t identify them, it didn’t even bother to flag the unknown objects sticking out from underneath the car as something worthy of note or alarm.
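The conservative behavior I’m describing might look something like this sketch: after any collision, treat a detection in the underbody zone as a reason not to move, whether or not perception managed to classify it. Every name, zone dimension, and threshold here is a hypothetical illustration, not an existing Cruise interface:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x_m: float          # longitudinal offset from vehicle center, meters (forward is positive)
    y_m: float          # lateral offset from vehicle center, meters (left is positive)
    classified: bool    # did perception assign a class (pedestrian, vehicle, debris, ...)?

def safe_to_move_after_collision(detections, underbody_x=(-2.5, 2.5), underbody_y=(-1.0, 1.0)):
    """After a collision, refuse to move if anything at all is detected in the underbody zone."""
    for d in detections:
        in_zone = (underbody_x[0] <= d.x_m <= underbody_x[1]
                   and underbody_y[0] <= d.y_m <= underbody_y[1])
        if in_zone:
            return False   # unknown object under or against the car: do not drive
    return True

# Per the report, the legs were detected but never classified or tracked; a check like
# this would veto the pullover maneuver anyway, because it doesn't require a class label.
print(safe_to_move_after_collision([Detection(x_m=0.4, y_m=-0.8, classified=False)]))  # -> False
```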


Cruise does have humans who check up on the robotaxis, especially if something like an impact is noted. The report mentions that

“According to the Cruise interviewer’s contemporaneous notes, one Remote Assistance operator saw “ped flung onto hood of AV. You could see and hear the bumps,” and another saw the AV “was already pulling over to the side.”

It is not clear why the Remote Assistance operator didn’t do anything to halt the pullover maneuver, or even if there would have been time to do so. Also unsettling is this chart of questions Cruise put together to prepare for expected inquiries from the media:

[Chart: Cruise’s prepared answers to anticipated media questions]

What’s interesting here is just how much the AV did seem to know: the chart says the AV “detected the pedestrian at all times” and “the AV detected the pedestrian as a separate object from the adjacent vehicle as soon as it made contact with the ground.” It also notes that a human driver would not likely have been able to avoid the impact – definitely a fair point – but neglects to mention anything about dragging the person under the car after the impact.

And this leads us to that fundamental problem I mentioned earlier: the problem with AVs is that they’re idiots. Yes, they may be getting pretty good at the mechanics of driving and have advanced sensory systems with abilities far beyond human eyes and ears, but they have no idea what they’re doing or where they are. They don’t know they’re driving, and while they can pinpoint with satellite-given precision where they are on a map with their GPS abilities, they have no idea where they are, conceptually.


These limitations are at the heart of why this happened, and why it would never happen to a human, who would see a pedestrian smack onto their hood and immediately think holy shit, I just hit somebody oh god oh god I hope they’re okay I better see how they are and so on. The AV has no ability to even conceive of such thoughts.


In fact, the AV doesn’t even seem to have an ability that four-month-old human babies have, called object permanence. I say this because if they claim the AV knew about the pedestrian, knew that the car hit the pedestrian, how could it somehow forget about the very existence of the pedestrian when it decided to undertake the pullover maneuver? A human would know that the person they just hit still exists, somewhere in front of the car, even if they can’t see them at that moment, because objects don’t just blink out of existence when we don’t see them.

In this sense, the Cruise robotaxi and a two-month-old baby would fall for the same trick of hiding a ball behind your back, believing that the ball no longer existed in the universe and that daddy is a powerful magician.

Object permanence may not seem like something that would necessarily be required to make a self-driving car, but, as this event shows, it is absolutely crucial. It’s possible such concepts do exist in the millions of lines of code rattling around the microchips that make up the brains of AVs, but in this case, for a human being lying prone under a car, their legs visible to at least one camera and the lidar, the concept does not appear to have been active.
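For what it’s worth, a toy version of object permanence in a tracker might look like the sketch below: once a pedestrian track has been confirmed, keep coasting it at its last known position through occlusion instead of deleting it the moment detections stop arriving. This is purely an assumption about how such a rule could look, not a description of how any production AV stack works:

```python
from dataclasses import dataclass

@dataclass
class Track:
    label: str
    last_x_m: float          # last known position relative to the AV, meters
    last_y_m: float
    frames_since_seen: int = 0

def coast_tracks(tracks, max_coast_frames=300):
    """Age confirmed tracks instead of dropping them; an occluded pedestrian is assumed
    to still exist near its last known position until it is positively re-acquired."""
    kept = []
    for t in tracks:
        t.frames_since_seen += 1
        if t.frames_since_seen <= max_coast_frames:
            kept.append(t)   # the object does not blink out of existence
    return kept

tracks = [Track(label="pedestrian", last_x_m=1.0, last_y_m=0.0)]
for _ in range(50):          # 50 frames of total occlusion after the impact
    tracks = coast_tracks(tracks)
print([t.label for t in tracks])   # -> ['pedestrian']: still assumed to be right in front of the AV
```

With something like this, the planner would still “know” there was a pedestrian at the front bumper when it decided whether to start the pullover maneuver.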


This is all connected to the bigger idea that for AVs to be successful, they need to have some general concept of the area around them, a concept that goes beyond just the physical locations of cars and obstacles and GPS data. They need to know, as much as possible, the context of where they are, the time of day, what’s likely to be happening around them, how people are behaving, and whether there is anything unusual like police barricades or a parade or kids in Halloween costumes or a group of angry protesters, and on and on.

Driving is a social undertaking as well as a mechanical one; it involves near constant, if subtle, communication with other drivers and people outside cars; it involves taking in the overall mood and situation of a given area. And, of course, it involves understanding that if a person smacks into the front of your car, they’re very likely on the ground right in front of you.

These are still unsolved problems in the AV space, and based on some of the reactions of Cruise employees and officials as seen in this report, I don’t get the sense that solving them is a priority. Look at this:

“Cruise employees also reflected on the meeting in subsequent debrief discussions. In one such exchange, Raman wrote: “do we know for sure we didn’t note that it was a person.”

My issue here is that the question asked by Prashanthi Raman, vice president of global government affairs, seems to be very much the wrong question, because no answer to it is going to be good: if it [the person who was hit and dragged] wasn’t noted as a person, that’s very bad, and if it was, that’s even worse, because the car went ahead and dragged them 20 feet anyway.

Even more unsettling is this part of the report:


“The safety and engineering teams also raised the question whether the fleet should be grounded until a “hot fix”—a targeted and rapid engineering solution—could be developed to address how to improve the ability of Cruise AVs to detect pedestrians outside its nearfield and/or underneath the vehicle. Vogt and West decided that the data was insufficient to justify such a shutdown in light of the overall driving and safety records of Cruise AVs. Vogt reportedly characterized the October 2 Accident as an extremely rare event, which he labeled an “edge case.”

This is extraordinarily bad, if you ask me, partially because I think it hints at a problem throughout the AV industry, from Tesla to Cruise to Waymo to whomever. It seems Cruise at least considered rushing through some sort of fix, a patch, to improve how Cruise AVs detect pedestrians and objects/people that may be lodged underneath the car. But ex-CEO/President Kyle Vogt called the incident an “edge case” and declined to push through a fix.

This assessment of wrecks or other difficult incidents as “edge cases” is, frankly, poison to the whole industry. The idea of an edge case as something that doesn’t have to be worried about because it’s not common is absurd in light of actual reality, which is pretty much nothing but edge cases. The world is chaotic and messy, and things you could call “edge cases” happen every single day.

A pedestrian getting hit is not, in the context of driving, an edge case. It’s a shitty thing that happens, every single day. It’s not uncommon, and the idea that a vehicle designed to operate in public won’t understand the very basic idea of not fucking driving when a human being is trapped beneath it is, frankly, absurd.

Pushing a problem aside as an “edge case” is lazy and will impede the development of automated vehicles more than anything else.

I’m not anti-AV. I think there will be contexts where they can be made to work well enough, even if I’m not sure some near-magical Level 5 cars will ever happen. But I do know nothing good will happen if companies keep treating automated driving as purely a tech challenge and ignoring its complex situational awareness challenges, challenges that include at least some attempt to understand the surrounding environment in a deeper way and, yes, implementing systems that will prevent AVs from driving if you’re stuck under them.


Related:

Cruise Stopping Its Driverless Taxi Service Reveals What Self-Driving Cars Need To Focus On

A Video Showing A Police Officer Yelling At An Autonomous Car Has Me Worried About Robocar Emergency Overrides

GM’s Cruise Robotaxi Company Was Terrified Of The Media: Internal Report

87 Comments
Elhigh
3 months ago

Car crashes are almost by definition edge cases. When everyone does everything perfectly, they don’t happen. For the people in charge to discount them thusly is NOT encouraging.

Cheap Bastard
3 months ago

“In fact, the AV doesn’t even seem to have an ability that four-month-old human babies have, called object permanence. I say this because if they claim the AV knew about the pedestrian, knew that the car hit the pedestrian, how could it somehow forget about the very existence of the pedestrian when it decided to undertake the pullover maneuver? A human would know that the person they just hit still exists, somewhere in front of the car, even if they can’t see them at that moment, because objects don’t just blink out of existence when we don’t see them.”

And yet:

“The pedestrian was initially hit by a human-driven Nissan, which launched the person into the path of an autonomously-piloted Cruise Chevy Bolt, which did make an attempt at an emergency stop.”

That human’s object permanence based response? It Bolted.

So how again are humans better?

Do You Have a Moment To Talk About Renaults?
3 months ago
Reply to  Cheap Bastard

But the human didn’t attempt to obliviously run over the person they’d just hit to complete some standard manoeuvre. The Cruise did, which is the point here. If the Cruise had also hit the pedestrian launched their way by the Nissan and had come to a full stop, there would be no story here, but it didn’t. And despite all the previous input its sensors got about hitting a human being, it took a bad traction warning to stop. The bad traction here being a human leg trapped underneath a wheel, from a human that the AV’s sensors had all the contextual information to know was underneath it, but no actual understanding of that context to prevent making the situation worse.

Cheap Bastard
3 months ago

No, the Bolt did not react ideally. My point was that its reaction was better than that of the human driver, who had all the context of hitting the victim hard enough to send her into the path of another car, used that information to selfishly save themselves, and continues to do so despite knowing full well what happened from the news reports. That is worse.

Had a human reacted as the AV did, this would not even be an issue. We are able to know EXACTLY what the Bolt was “thinking” at all times throughout the incident, something not possible with a human driver. A lot of humans would simply lie about what happened. Even a completely truthful human driver in the position of the AV might not have a clear memory of the events or their decisions, and might even be advised by their lawyer to STFU. Humans are also not above becoming so flustered that they go into their own robot mode and let their training take over despite evidence that they should not.

I think had there been a human in place of the AV that person would have been praised for doing the right thing by pulling over and all the anger would be focused instead on the hit and run Nissan driver and the red light running victim.

Do You Have a Moment To Talk About Renaults?
3 months ago
Reply to  Cheap Bastard

I totally get you, but that’s where humans have liability and the social contract to deter them from the most extremely anti-social behaviours – and of course, a lot of the time that isn’t enough of a deterrent – but at this point in AV tech there’s a lot to be said about how beta testing on public roads is adding to the dangers rather than subtracting from them. While you can’t extrapolate from one hit-and-run that most humans would leave the scene, you can very much wonder whether most AVs wouldn’t have attempted to complete that pre-determined manoeuvre, because that’s how they’re coded to act. It could be argued that these companies have liability too, but more often than not they’re also very well equipped to minimise legal issues and dodge liability.

To be clear, I believe in high-level autonomy for city driving in the not-so-distant future, once the vast majority of cars are interconnected at all times and sharing real-time data about their surroundings in addition to all sensor input, which at this stage does not feel like it’s up to what should be the standard for real-world testing.

Cheap Bastard
3 months ago

I have a very simple litmus test. Which is greater?

Number of injury and property damage incidents involving AI error/number of road miles driven by AI

or

Number of injury and property damage incidents involving human error/number of road miles driven by humans.

Normalize to the same environment, same time frames, etc., to make the comparison as fair as possible. Bonus points for including the experience of the humans and the sophistication of the AI.

If the AI is significantly more likely to cause a problem, then I agree it’s back to the drawing board. If it’s within spitting distance of humans or better, then carry on.
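As a rough illustration of that test with entirely made-up numbers (none of these figures are real statistics; they just show how the comparison works):

```python
# Hypothetical incident counts and mileage, for illustration only.
av_incidents, av_miles = 12, 5_000_000
human_incidents, human_miles = 30_000, 10_000_000_000

av_rate = av_incidents / av_miles           # incidents per mile for the AV fleet
human_rate = human_incidents / human_miles  # incidents per mile for the human baseline

print(f"AV: {av_rate:.2e} incidents/mile, humans: {human_rate:.2e} incidents/mile")
print("back to the drawing board" if av_rate > human_rate else "within spitting distance or better")
```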

Douglas C Perrenoud
3 months ago

I’m just a little confused about “the left rear wheel was spinning on top of the pedestrian’s leg.” To the best of my knowledge, all Chevy Bolts are FWD only, so how was this possible? Was it actually the left front wheel? Just curious.

Jb996
3 months ago

“edge case”
I would think that executives in charge of a large program would have to know something about systems engineering, or at least risk management. Any manager who doesn’t deserves to be fired.
There is no such thing as an “edge case” to be dismissed. EVERYTHING that can happen has an associated probability and a severity/consequence.

Is it okay to have a 1% probability of a flat tire, with a severity of a few hundred dollars? Maybe.
Is it okay to have a 0.01% (“edge case”) probability of hitting a pedestrian, dragging them 20 feet, and very likely killing them? I’m going to say no on that one.
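As a back-of-the-envelope version of that probability-times-severity comparison (the probabilities and dollar figures below are made up purely to show the arithmetic):

```python
# Expected cost = probability x severity; the figures here are illustrative only.
risks = {
    "flat tire":          {"probability": 0.01,   "severity_usd": 300},
    "dragged pedestrian": {"probability": 0.0001, "severity_usd": 10_000_000},
}

for name, r in risks.items():
    expected = r["probability"] * r["severity_usd"]
    print(f"{name}: expected cost per exposure = ${expected:,.2f}")

# The "rare" case still dominates: $3.00 vs $1,000.00 per exposure, and a severity
# this high shouldn't really be handled as an average at all.
```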

Managers have to decide what to do with the residual risk: Accept, mitigate, transfer, avoid
Engineers provide options and assessments.

I’m continually shocked how knowledgeable and hard working middle and low level engineers and managers have to be, but executives couldn’t manage their way out of a paper bag, and get paid millions.

Pupmeow
3 months ago
Reply to  Jb996

“I’m continually shocked how knowledgeable and hard working middle and low level engineers and managers have to be, but executives couldn’t manage their way out of a paper bag, and get paid millions.”

This is SOOOO many big companies. So many. I moved from a consumer brand down into the supply chain and the problem is significantly improved where I ended up. Less impressive headquarters but smarter/humbler executives.

Richard Clayton
3 months ago

Bravo! Your observations about the social and perception aspects of driving are poignant. This incident is just one of many that will occur over the next few years due to the randomness of the real world. The industry will respond in each case by applying more lines of code and learning algorithms. Forever. I remember teaching both my kids to drive and lecturing them on what I call the “body language” of other vehicles and their drivers, which they should observe carefully. Some points were: opposite signalling, not signalling (inadvertent and deliberate), symptoms of inattention or intoxication, intent to commit road rage, etc. One cannot assume that other drivers will behave in a legal or rational manner. This makes things unpredictable. Oh, and they were taught to glance right and left before proceeding on a green light to look for red light runners. Things are much more predictable on limited access highways.

On the show “The Big Bang Theory,” Sheldon Cooper doesn’t drive and wouldn’t be able to pass a driver’s test, despite, or in spite of, his intelligence. We all know someone on this spectrum. This is because of the inability to cope with this randomness and illogic. Can AVs ever escape their extreme autism?

Do You Have a Moment To Talk About Renaults?
3 months ago

Ah, yes, an edge case. Working in tech, I am familiar with the terminology here. Ultimately, how a company looks at edge cases reflects on it, for good or bad; where I work, an edge case is enough for whatever it’s affecting not to be rolled out to production, no matter what it is and what it’s blocking in operational terms.

Now, my employer works in software only, and even the most remote applications of this software would likely not be considered potentially fatal; I would hope that a mega-conglomerate conducting open-world testing of a potentially fatal product would have at least as high standards as some startup operating a web platform, but here we are. That’s late-stage capitalism for you, I guess.

The Kyle
3 months ago

it was because the wheel was spinning on the person’s leg.

Pretty sure that the Bolts these vehicles are based on are FWD. Maybe a better explanation is that the person’s leg caused that wheel to STOP spinning, and that caused the wheel speed to become unsynchronized with the other three.

Disclaimer, I work for GM, but I have nothing to do with Cruise.

Wasteland Firebird
3 months ago

All right, you did it. I subscribed. A month after heart surgery and you’re churning out articles like this one. I’ve read your stuff for years, but for some reason I was late to the Autopian party. I shall now remedy that mistake. You know when to pull out the wise cracks and you know when to put them aside. Everything you said in this article nails it. I’m not anti-corporate, I’m not anti-self-driving cars. I just think it’s a much harder problem than we realize. And I’m a computer programmer, so I should know! I sent you my best wishes in this video at 23:31 https://www.youtube.com/watch?v=aG4obqCnlco&t=1411s and I shook your hand at Radwood LA a few years ago, I brought my dad’s 1992 MR2 turbo https://www.flickr.com/photos/zombieite/27024099199/in/album-72157631822359661/

Laurence Rogers
3 months ago

Glad to see you here, and glad to see you finally visited the car museum in Forbes!

Rhymes With Bronco
3 months ago

Why is no one talking about how dangerous Nissan drivers are?

Crank Shaft
3 months ago

I do believe AV is a computable task and will someday function quite well, but that its development should not be left to executives who delude themselves into thinking there is any nobility or urgency to the task. Any such delusions are driven solely by greed and ambition, not altruism. The need to financially justify the staggering development costs leads to shit like this situation. Bad things absolutely are going to happen, and businesses are always going to cover their asses when they do.

I think AVs are more akin to a moonshot than to a competitive business venture and should probably be a government project. However, I also know that such a proposition is pretty much absurd and never going to happen. I have no idea how it will all get sorted, but I predict many more debacles before it does.
