This past Friday, January 23, a Waymo robotaxi, driving near an elementary school in Santa Monica, California, struck a child who was attempting to cross the street during normal school pickup hours. There’s an NHTSA investigation currently pending, and happily the child’s injuries were only minor. Because this was an automated vehicle, this story is news, and it should be. It also brings up an interesting question: is it reasonable to expect an automated vehicle to avoid accidents that a human would likely have had? Is our standard that they should never get into any accidents? Is that reasonable?
I don’t have the answers to these questions exactly, but they’re questions worth asking. Based on the description of what happened, I think a human would likely have hit the child as well. Here’s how NHTSA describes what happened:
NHTSA is aware that the incident occurred within two blocks of a Santa Monica, CA elementary school during normal school drop off hours; that there were other children, a crossing guard, and several double-parked vehicles in the vicinity; and that the child ran across the street from behind a double parked SUV towards the school and was struck by the Waymo AV. Waymo reported that the child sustained minor injuries. At the time of the incident, the Waymo AV was operated by Waymo’s 5th Generation Automated Driving System (ADS). No safety operator was present in the vehicle. (Investigation PE26001)
… and here’s how Waymo describes it, from their blog post about the incident:
The event occurred when the pedestrian suddenly entered the roadway from behind a tall SUV, moving directly into our vehicle’s path. Our technology immediately detected the individual as soon as they began to emerge from behind the stopped vehicle. The Waymo Driver braked hard, reducing speed from approximately 17 mph to under 6 mph before contact was made.
So it looks like a kid walked into the street while obscured by a big SUV. If the Waymo was actually going 17 mph, that’s well below the California school-area speed limit of 25 mph. This feels like the sort of situation that would end up badly for any driver, human or machine: a kid walked in front of a car. The Waymo did what it was supposed to do: stop as quickly as possible once the kid was spotted, just like a human would.
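For a rough sense of what that speed reduction means physically: impact energy scales with the square of speed, so braking from 17 mph down to 6 mph sheds most of the kinetic energy before contact. Here’s a minimal sketch using the reported speeds (simple physics only, not an injury model):

```python
# Kinetic energy goes as v^2, so compare the reported impact speed
# to the initial speed. Rough physics sketch only, not an injury model.
def remaining_energy_fraction(v_impact_mph: float, v_initial_mph: float) -> float:
    """Fraction of the original kinetic energy left at impact."""
    return (v_impact_mph / v_initial_mph) ** 2

frac = remaining_energy_fraction(6, 17)
print(f"~{frac:.0%} of the kinetic energy remained at contact")  # ~12%
```

In other words, that braking removed roughly seven-eighths of the impact energy, which is very plausibly the difference between minor injuries and something much worse.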

Did the Waymo do any better or worse than a human? I don’t have enough information to go on. But what I think I do know is that not every crash that happens is avoidable. The NHTSA is right to investigate this, but for all of our criticisms of automated driving and robotaxi issues, I don’t think this particular event is necessarily an indicator of some bigger systemic problem as much as it is a reminder of a bigger systemic question: what do we expect from AVs?
If we are to believe that they will make driving safer for everyone, we still don’t really have enough data to show that’s the case. There’s some evidence they’re safer in some situations but worse in others; overall, I think the jury is still out. And in this particular case under investigation now, I think the Waymo handled itself as well as it could have, given the situation.

What could have made this better? Could there be even more awareness when the car is known to be in a School Zone at a time when kids will likely be on the street? If so, how could the car’s behavior adapt? Go even slower? Honk every 20 feet? What’s the balance between safety and acceptable behavior?
Assuming we don’t have tech to see through cars, perhaps a robust car-to-car network could help here. If there were a standardized, widespread communications system for cars to inform one another about their surroundings, a car that had passed by the child before they attempted to cross the street could have made the Waymo aware of the child’s presence even if the Waymo’s view of that kid was obscured. Such a network does not currently exist, though it could with enough effort and resources. But is that the level we should be aspiring to?
I mean, ideally it is, right? If we’re going through all of this effort to make self-driving cars, shouldn’t we be doing all we can to make sure they’re actually safer? Or is this all just about people who want to look at their phones and sleep while being driven places? The tricky part is that a lot of what it would take to really, genuinely make AVs safer than humans will require large-scale cooperation between companies and governments and everyone, because it’s an overall system that will bring real safety benefits more than technological tricks in individual cars.
I’m curious to see what the findings of this investigation are, and people’s reactions to them. If the child had been more seriously injured, I suspect the narrative around all of this would be very different, and understandably so. There will always be wrecks that can’t be avoided; that’s just the nature of physics when you’re moving around thousands of pounds at pretty much any speed. It’s how we choose to accept this notion, and what we do with that acceptance, that matters, and I think we’re just at the beginning of that process.
Top graphic image: Waymo

It really sounds like the Waymo did everything right in this situation. The only thing I think could have prevented the child from being struck is if the Waymo had a sound that would produce some sort of fear response in the child to deter them from walking out into the street. I’m sure the Waymo produces some sort of sound to alert pedestrians of its presence, but I could definitely see a child not inherently recognizing that as danger, so a sound that might make a child less likely to approach could be helpful in a school zone.
The question here is not “can a self-driving car avoid more accidents than humans”, the question is, “who decides the difference between ‘avoidable’ and ‘unavoidable’, and what are the consequences when an accident is ‘avoidable’ but happens anyway?”
There’s a certain context of driving in that sort of environment that humans really have a problem with and I don’t expect a robot to improve upon it. As a driver, you should have eyes on the road, but around a place where there are children, you also need to be watching activity off the road and even then you are likely to have your view obscured.
Five to ten years ago V2V (Vehicle to Vehicle) and V2I (Vehicle to Infrastructure) was all the hot rage in the automotive technological communities. There’s even a full V2X “City” at Willow Run:
https://acmwillowrun.org/
There are several technological challenges to overcome here, the biggest being the availability of a secure and robust mobile network that can support the number of devices while protecting the PII of said users and vehicles.
The biggest issue, though, is who will foot the bill for all of this. It won’t be cheap, and we can hardly keep our roads maintained as it is. That’s my synopsis of why V2X has died on the vine.
Thanks for coming to my Ted Talk.
Lesson learned: The kids you don’t have will never be hit by a car.
That one took three readings; I thought I was having a stroke.
Name checks.
Most of the comments here seem to be stuck on analyzing this particular incident while ignoring the larger question of “when are AVs safe enough? Is it enough to simply be safer on average than meat drivers?”
In fact, this illustrates the problem. Humans tend to focus on the individual aspect and not the big picture. Even if AVs are 90% safer than people drivers, there will still be collisions, damage, and injuries. 90% safer would be a massive win for society, but would society accept the remaining 10%? No; armchair quarterbacks will come up with a situation where a human would have magically figured out what was going to happen due to some barely perceptible cue, assuming Verstappen levels of reflex and car control.
This is also ignoring the fact that all drivers are well above average, unless they are the other driver in which case they’re a moron.
That really isn’t much of a question. Of course it is better for society if AVs are safer than human drivers. Even 10% safer would save about 4,000 lives in the USA per year if we replaced all the humans with robots.
Perfect is the enemy of progress. It is purposely used by invested parties to keep progress from happening.
I am not reading the comments, only scrolled down to make sure at least 1 person made this argument. The news coverage on this has been terrible in implying the kid was only hit because it was a driverless car. Everything I have read points to the only reason the kid wasn’t run over and killed is because this was a driverless car.
The whole zero tolerance for accidents is crazy to me. If the news covered every kid hit by a human driving a car they would need a longer news program.
This is fair; without clear success criteria, people pro and con will move the goalposts. And the question goes beyond what the success criteria are to who creates them.
Even if AVs are 90% safer than people drivers…
This is problematic too without good data and analysis, again the who.
After all, 99.99% of guns don’t kill people, but those statistics are unacceptable.
I think we’re expected to keep waiting on that while they figure out the legal ramifications, and who is responsible for what.
I think one of the big sticking points is just that there’s no clear path to accountability for an AV incident. Who goes to jail if a Waymo kills someone? Is that just a fine to the company? Maybe we do a fun dystopian thing where companies hire people whose job is just to go to jail in lieu of a driver in an automated vehicle incident. I think execs should be on the hook somehow so that there’s greater stakes for the direction of incident avoidance and trolley problem handling from the top down.
Who goes to jail if a human driver kills someone? Sure, if there’s impairment involved, that’s clear and I’m all for incarcerating drunk driving killers.
But what about just a mistake? Someone makes a poorly judged left turn, for example, and the resulting collision involves a fatality. I don’t think that’s a jail sentence. Should it be in the case of an AV?
We’re looking at perfect being the enemy of good. Can we bring down the total number of deaths and accidents? That would seem good. Can we bring it to zero? No. But should the inability to bring it to zero prevent us from trying to lower it at all? Should we keep putting roadblocks in place such as “if your product kills someone you go to jail?”
It’s not an easy answer.
You’re completely dodging the point, which is about how accountability should work with an automated vehicle. In an incident with two human drivers, the driver who’s found at fault is the one who faces penalties. That’s up in the air when you’re dealing with automated vehicles: now you’re looking at entire systems employed by a corporation to make their vehicles operate rather than the choices of an individual. If Waymo or Tesla fuck up and smear your grandma as she’s trying to cross the street, or the system decides to hit a kid instead of a group of people, there are going to be big questions about how and why this happened, and whether or not the corporation operated in its own best interests at the cost of people’s safety. It’s not about chasing perfection at the cost of good enough; it’s the opposite. When the not-perfect system does cause injury or death, there need to be clear outcomes that are proportionate to the severity of the incident and also look at the practices of the corporation responsible, to ensure that this happens as infrequently as possible, and even to completely revoke the ability of a corporation with unsafe practices to make and sell its product.
I didn’t think I dodged the point at all, but okay. You’re proposing some sort of penalty for an AV operator/builder if their AV does something that results in injury. That’s not necessarily the same as jail time, of course. Your trolley problem example is one where maybe the corporation should be celebrated if the AV chose taking one life over many. Nobody would ever accept that, of course, but that’s the flip side of penalties.
Turning this into “what about smearing grandma” conversation is exactly what’s going to keep us with the current system of poorly trained, distracted/tired/inebriated meat bags ricocheting around the streets. It’s turning the question into one of personal risk instead of a risk to society at a whole, which is right back where I started my first post.
The trolley problem is going to be a big sticking point for AVs, and accountability and justification for big systemic decisions in that area is going to be a snag for the public reception and legal precedent. If it’s not ironed out before wider adoption of this tech, it’s going to cause major setbacks when an AV ends up in situations where the safety of the passengers is weighed against pedestrians or other vehicles, or hitting one person is weighed against another, or any other different versions of the trolley problem. It’s not something that can just be handwaved away, these are going to come up and cause complex problems that’ll affect people’s perceptions of the tech and the companies making decisions that are going to affect public safety. I agree that overall this tech could bring major improvements to public safety on the road, but that doesn’t mean the best course is to charge ahead and then have to scramble to get any sort of recourse when a major casualty event occurs from one of these automated vehicles.
Interesting. Watching the latest Ask Hank Anything last night (with Brennan Lee Mulligan) they discuss advances we’ve made that save lives in an invisible manner. People alive today don’t know that they would have been dead from smog or acid rain or something. Someone who lost their home to a hurricane doesn’t know that if it had happened 40 years earlier that they would have died from it. In short, people don’t know when they should be grateful for something.
So, people who would have been struck by a human driver but weren’t, because the vehicle was autonomous, don’t know that they might owe their lives to it. This is potentially a rare case where NHTSA may say, “If it were a human driver, the child would have suffered heavier, potentially fatal injuries,” or even, “That child would be dead if the vehicle had been driven by a human.”
Excellent point. See also: vaccines.
“is it reasonable to expect an automated vehicle to avoid accidents that a human would likely have had?”
Yes. Isn’t that the whole point of this exercise? I have no way of knowing enough about this particular incident to judge one way or another whether a human would or wouldn’t have done the same.
But, there is more to situational awareness than just seeing your surroundings.
The whole point of the exercise is to remove the need to pay wages and benefits to drivers of commercial vehicles (taxi, delivery, truck…) of all types. That’s it, full stop. That’s something like 3.6 million middle class incomes’ worth of money saved* just in truck drivers alone in the US. Imagine the boost to the bottom line!
*The impact to society and the economy of eliminating that many jobs and income is very much Someone Else’s Problem, of course.
That’s probably the first domino to fall in the projected full self-driving future.
In this scenario, there’s a far-from-zero chance that a human would have been distracted and hit the child at the full 17 mph. Or said human would have been doing the speed limit of 25 mph. Either of these scenarios could easily have resulted in the death of the child.
Seems likely to me that the Waymo did avoid the accident that the human would have had.
There is also an equal, non-zero chance that a human could have seen the child before they passed behind the SUV that blocked them from view, and slowed or stopped knowing they would come out the other side. Also, it’s possible a human driver would have been conscious that they were in an area with small, situationally unaware pedestrians who do unpredictable and stupid things, and been more alert than usual.
Fully automated driving capable of negotiating, without failure, 100% of the scenarios and terrain that humans are capable of is further away than the advocates will acknowledge.
100% of Waymos are trained to be more cautious in school zones. You can’t say the same for humans.
I’m tired of safer being the only metric by which anything is measured. That is why we shut down the economy for Covid and a big reason cars are overpriced. Avoiding harm is important, real important, but a life worth living needs to be the reward. Ships are safest in port, but that is not what ships are for.
The concept of safety in factory or service operations is that if you get safety right it is likely the operations are well run and under control, and most of what you do to improve safety usually improves overall operations / organization of the business. While this is sometimes not the case, I can say that the safest factory I have worked with had not had a reportable incident in three years and this was probably the best run/organized factory I have been in.
I think that the lesson here is an unfortunate one for parents: they not only need to teach children to look both ways before crossing the street, but they REALLY need to teach them to look out for self-driving vehicles too.
DirtyDave nails it below: Kids are dumb: they would run off a cliff if you fail to tell them NOT to. My own kids have done some astonishingly stupid things that defy self-preservation instincts. I have no doubt whatsoever that this unfortunate recipient of car vs. kid didn’t stop, look, or hesitate to sprint out into the road between two Armadas.
You can’t fix kids being inexperienced in life so that’s not a realistic solution to any problem.
I agree that the car most likely stopped faster than a human could. Kids are dumb. Unfortunately, most folks won’t read the news articles about the incident and will assume the car was an out-of-control killing machine.
Out of control killing machine. That describes about 5% of Milwaukee drivers during rush hour in my experience.
I trust it was a valuable learning experience.
I’m positive the computers reacted and thus braked faster than a human possibly could have, thereby reducing impact speed and injury, so there’s that positive. However a human would more likely have been aware of the school time enhanced risk environment and been more cautious in the first place. Yet, a healthy percentage of people would also have been too self-absorbed, so I can’t fully blame the Waymo for making a common mistake tons of humans do every single day.
That said, in the insurance world, car vs. pedestrian is always the car’s fault. Always, excluding fraud of course. This one is indeed a higher order conundrum for which I have no simple answer (not that anyone cares what I think).
“I’m positive the computers reacted and thus braked faster than a human possibly could have”
That seems like an extremely bold assertion based on the data available. It’s certainly possible, and maybe even likely, that the computer reacts and brakes faster than a human, but it’s also possible that you’re wrong. Please be careful when making statements like this, unless you have access to a lot more information about this incident than the rest of us.
I do absolutely agree that a lot of human drivers are guilty of distracted driving, and the computers are likely to be much better than we are in that respect. My personal view mirrors what was said in the article: in some situations the AVs are probably safer than human-drivers, but that may not be true for all situations.
I stand by it. It was not bait. Absolutely no one has to agree with me, but I chose to be open and honest, and that is what I will believe until proven wrong. Now, I sure as fuck hope no one bothers to argue with me because it’s utterly pointless. However, understand that my sureness comes from being aware of the sensor suite on Waymo vehicles, and that something as simple as a person or object moving into its path would have registered, been processed, and actually actuated the brakes faster than any human possibly could. Automatic Emergency Braking systems improve safety for much the same reasons.
Finally, I absolutely, positively could, as ever and always, be completely full of shit and just as wrong as anyone has been or might ever be. I accept that and hope you will too. Until we some day know for sure, I’m not going to worry in the slightest about it. I genuinely wish the same for you.
I’m of a mind to agree with Crank Shaft here.
If the automatic braking is functioning properly, it *should* be a lot faster to react than a human. In this situation, the vehicle driven by a human would have traveled at LEAST 20 feet further before the brake pedal is depressed.
The signal needs to travel from eyes to your brain, then your brain sends the signals to your muscles to initiate the braking event. That’s about a second of delay.
Assuming (yes, I know) that the autopilot system is only inhibited by how quickly electrons travel over the CAN bus, the Waymo has that much more distance to decelerate.
While we have all seen the Waymos driving in circles and confused by stupid stuff on the road, I honestly believe that this driverless car vs. pedestrian incident was reduced in severity as a result of tech.
If what I presume above is true, it’s the difference between bouncing off a bumper at 6 mph vs. getting clobbered at 15 mph.
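A quick back-of-the-envelope check on the distances in this thread (assumed values: the roughly 1-second human reaction delay estimated above, and a braking deceleration of about 0.8 g, a typical figure for hard braking on dry pavement):

```python
MPH_TO_FPS = 5280 / 3600   # 1 mph ≈ 1.47 ft/s
G = 32.2                   # gravitational acceleration, ft/s^2

def reaction_distance_ft(speed_mph: float, delay_s: float) -> float:
    """Distance covered before the brakes are even touched."""
    return speed_mph * MPH_TO_FPS * delay_s

def speed_after_braking_mph(speed_mph: float, distance_ft: float,
                            decel_g: float = 0.8) -> float:
    """Speed left after braking over distance_ft, via v^2 = v0^2 - 2*a*d."""
    v0 = speed_mph * MPH_TO_FPS
    v_sq = v0 ** 2 - 2 * decel_g * G * distance_ft
    return max(v_sq, 0.0) ** 0.5 / MPH_TO_FPS

# At 17 mph, a 1-second reaction delay alone costs about 25 feet:
print(f"{reaction_distance_ft(17, 1.0):.0f} ft of travel before braking starts")
```

So at 17 mph, a one-second human delay covers roughly 25 feet before braking even begins, consistent with the 20-plus feet suggested above; a system that starts braking a few tenths of a second sooner gets those feet back as stopping distance.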
I’m not sure about your math. A baseball player has 4/10 of a second to see a pitch and decide whether it’s going to be a strike and whether or not to swing. That’s reacting to a projectile going at least 80 mph. I randomly watched some TV special about the physics of baseball and it stuck in my mind.
So one full second to go from sight to brain to muscle to brake seems long to me. I don’t doubt the automated driving system can react faster than a human, but I’m not sure the degree of difference is quite as you’ve calculated it. But I could be wrong.
Though in the case of the baseball player, they have trained extensively to get their reaction time as low as possible for a very specific and repeatable scenario. I very much doubt that the average driver in America has done the same kind of practice and preparation to be able to respond as fast as a baseball player, especially for a scenario that is not the expected norm (though we really should treat it as one)
Exactly. Muscle memory and tens of thousands of practice attempts vs. a sudden and unexpected situation has a lot to do with the timing difference here.
Two things:
+1 for saying that this is like, your opinion man, and that you aren’t looking for a fight because fighting an opinion is one of the stupidest things that happens on the internet.
-1 for framing your opinion like a known fact to start with, because that is also a stupid thing that happens far too often on the internet.
EDIT: third thing – I agree with you that in this scenario, it’s far more likely that the Waymo braked faster than a human could.
Well, starting a sentence with “I’m positive” seems more like an opinion than a statement of fact, but I’m certainly not going to argue with you about your opinion on the matter. 😀
Tell that to my wife when she asks me if I left the toilet seat up; “I’m positive I didn’t” is much different than “I think I didn’t” lol. One is insisting I am right, the other is saying I may be wrong so please don’t throw something at me.
For you, I’d ask… how are you positive? And if you are positive, then why not say you know for certain? And if you know for certain, why not say how you know and prove it?
From a linguistic perspective you are technically correct (the best kind!), but in reality it doesn’t really make sense to say that you are positive of something if you just think you are right.
I told her the very same thing last night when she left my bed.
Totally kidding!
But seriously, I’m positive because of my understanding of the situation. That’s it. No other valid reason. I could be wrong, but I am currently positive I am not. Yes, that’s a pretty oxymoronic statement, but it’s the contemporaneous truth.
Maybe we’ll see evidence that I’m wrong and I will readily admit so. But absent such, I’m positive until then. 🙂
Matt jumped all over me for saying the same thing about the CDK Global ransom situation, but I was 100% correct. This is a similar situation. I know for sure I’m right, but I also know for sure that I could be wrong.
You’re entitled to your opinion, and honestly I don’t disagree with it – I thought my comment even said as much.
What bothers me is when opinion is presented as fact. If Waymo releases a video that shows the kid in line-of-sight of the vehicle for 3 seconds before being hit, my knowledge of the facts changes and my opinion would likely change; I am guessing yours probably would too.
I enjoy the comments here and want to keep it civil, thanks for being a part of this community.
Again, I literally started with “I’m positive”, not ‘It is a fact that…’
I maintain that I chose the correct wording. But do know that I fully support your point. I asserted something in a very certain way which could appear as a statement of fact to many.
You have a different interpretation of what “I’m positive” means than I think quite a few of us. I generally take it to mean “I am 100% sure” which sure sounds like indication the statement is fact.
So we agree. 🙂 However, and this is the point you seem stuck on: I’m still positive, and so I’m not sure how else I should phrase it. From both a semantic and pedantic perspective, I still feel like I chose the correct words.
Now if you want to talk about the insurance thing, that’s where I may apparently have my head firmly ensconced in my derrière.
It’s important to keep in mind the average time for a human to notice something and decide to act is 300ms. To start to respond is ~ 750ms and to apply brakes is more like ~ 1.5s.
Those are averages, for an experienced driver, who is actively paying attention.
300ms is an eternity in computer time. There is no doubt a computer responded faster than a person could. Automatic braking systems in regular cars consistently out-brake humans in these circumstances. They’re faster, full stop.
What’s at question is the automated driving component. Would a person, on average, have noticed the kid before they dipped behind the car? Would they have been going that speed? Slower? Faster? Could the accident have been avoided not because the person responded faster but because they were more aware there was likely to be a child there in the first place?
There’s not a 100% way to know for sure, and of course this circumstance is not all circumstances. I don’t even really have an opinion on if automated cars should exist…
Having ridden in a Waymo and watched how it interacts with pedestrians, I’m quite confident in saying that the Waymo was more aware of the school zone situation than the typical human. This is likely why it was going 17 mph in the zone instead of the speed limit: to give itself more time to react.
I visited Phoenix last Thanksgiving and had the chance to ride in some Waymos. After the first ride my wife and I were sold and used Waymo exclusively for the rest of the visit instead of Uber.
Two interactions with pedestrians led me to my conclusion above:
In either of these situations the typical human driver would have just continued on at or above the speed limit and ignored the pedestrian.
Thanks for this post. It’s nice to have input from someone with experience.
It is likely that the Waymo stopped faster than a human driver would, but would the human driver be going even 17 mph through a school zone at let-out time? I try to avoid these situations, partly due to traffic slowdowns (one of my favored routes out of my neighborhood needs to be avoided between about 2:15 and 3:00 pm for this reason). I think 17 mph is reasonably slow for this situation, but if I saw the zone swarmed with rambunctious kids I might be going a little slower. Or maybe not…
Not in NC. If you aren’t at a crossing or intersection, it may very well be the pedestrian’s fault.
Excluding fraud, the car insurance will pay every time. In NC or anywhere else in our currently troubled states. I sincerely hope this is never proven to you or anyone you know. You could fall out of a tree BTTF style and you’ll still get paid.
Why? Insurance companies do not ever want to risk their coverage limit getting pierced by trying to fight a claim and getting hammered for bad faith.
Nope. One of my coworkers nailed a pedestrian who ran out in front of him; the pedestrian was at fault, and my co-worker’s insurance paid nothing.
Do you have any names or paperwork on this? I’m not trying to be offensive – I’m genuinely curious to see the circumstances, as something would be very out of the ordinary there.
This seems to be the normal operating process in several Southern States where I have lived and have friends. It is likely not this way elsewhere – I doubt that anyone would totally absolve the driver who hits a jaywalker in Massachusetts. Not sure about the rest of the country.
Cars are supposed to violate the laws of physics because someone can’t be bothered to look before they cross the street?
It’s just not that simple. Were you going 1 mph over the limit? Did you have a bull bar? Was your car lifted or lowered? Were you on the phone? Were your lights off at dusk? Did you have an air freshener that obscured your view at that crucial moment?
Do you see where I’m going with this?
Nope, sorry.
Please don’t be sorry. IATA in virtually every situation extant, not you.
Also, NC is a contributory negligence state. If you’re any part responsible for the incident, you get nothing. They have been getting more strict about this over the past few years with auto claims – especially when the monetary damages are significant.
Indeed. You may not be able to collect from a defendant, but you will still collect from your insurance company. The functional reality for insurance companies is that they will pay either coming or going so they choose the least costly way.
Now that said, I must apologize and admit that I have zero experience with NC and I now think I’m happy about that.
I now suspect that’s why Uninsured Motorist coverage is mandatory in NC. That’s where from you should collect if you somehow are contributory in your own pedestrian/vehicle collision. Note that I am going to call myself dead fucking wrong and thank you for edifying me. It’s a lot more complicated than black and white, but without question, I am technically wrong. Apologies and thanks.
Again, not correct. Uninsured only covers you if the other person is found to be at fault and doesn’t have insurance or enough insurance. If you’re found partly at fault they won’t cover, and if you can’t identify the person (say hit and run) then they won’t pay because you have to prove the person at fault doesn’t have insurance. Uninsured effectively is insurance for the other person. It’s pretty cheap, too.
Who covers it if you’re partly at fault is your own medical insurance (for bodily injury), medical insurance through your own auto insurance (also for bodily injury, pays regardless of fault, but most people just carry enough to pay their health insurance deductible), and your comprehensive insurance (for vehicle damages.)
The basic truth is insurance coverage varies wildly from state to state, and you can’t really make a blanket statement about them covering anything uniformly across the country.
Understand that say, in a hit and run when the other driver cannot be identified, that such constitutes, in and of itself, an uninsured situation. Not because we have to prove the other driver was driving without insurance, but that because they are unidentified, it is an ‘uninsured’ scenario. UM/UIM are very much insurance on yourself, not the other driver, although it can pay other parties too.
Taking this pedantically further, if another insurance carrier is not liable to pay, then that also constitutes an uninsured situation. That is the reason for the mandatory UM/UIM requirement. If everyone is required to have insurance, then why would you need to require UM? Do you see the logic? By requiring UM, if one policy doesn’t pay, the other will. What the fuck would be the purpose of insurance if every carrier could simply assign some ephemeral amount of fault to each driver and declare they didn’t have to pay a thing? You throw fault around as if it is automatically assigned by some authorized party. It’s not. Only a court (or arbitrator) can assign disputed fault. Trials and such cost insurance companies big time. As such, they pay a lot of claims they might otherwise be able to weasel out of. Then they subrogate against each other. That’s where the sausage really gets made.
Do you not know that medical insurers subrogate against auto carriers all the time?
Here’s another one: both drivers are at equal fault and have the same insurance company. Do you think the carrier just gets a pass because both were at fault? Seriously. I just can’t even.
Note that as an insurance agent, with a kid who was a pedestrian hit and run victim, with indeterminate fault (the hit, not the run part), there was never, ever, ever any question that our UM/UIM coverage would pay in full (which it very much did).
I made an erroneous blanket statement, but then so did you. We are both at fault. 😉
With all due respect, Crank Shaft, you offer insightful commentary in your comments and we want more, even if you may, on occasion, have a rough edge. Thank you for your commentary.
Aww, shucks. Seriously, thank you for saying that. I’m just as insecure as most folks.
I read everything Jason writes about this subject, because he actually “wrote the book” about it (Robot, Take the Wheel).
Even though this book was published several years ago (2019?), I found that it raised many of the issues that still need to be resolved.
Last time I checked, it was available on Amazon. I was able to find the eaudiobook version at my local library.
Eaudiobook? Is that some sort of fancy book-scented cologne?
It’s “e-audiobook,” short for electronic audiobook. Normally not a distinction that needs to be made, but Jason also released this particular audiobook on a scroll for use with a hand-cranked phonograph.
All this talk about car-to-car communication, but in the meantime, with human drivers, this is why tinted windows are evil.
“it’s an overall system that will bring real safety benefits more than technological tricks in individual cars.”
And that is the heart of the matter. The present “system” is not a safety system at all, but rather a system of technological scams designed for the monetary profit of the current system owners.
A human driver would probably have been paying attention to all the children they could see on the other side of the street, and might not have noticed a child stepping out from behind a double-parked SUV. Sometimes not focusing on any one thing is safer, because you have attention to spare.
I was going to say a human would have been speeding through on their phone. I believe that if a human had been driving, this child would be in much worse shape.
I agree on that last point.
I tend to brake for rumors, but I am something of an outlier.
I’m glad it wasn’t me driving.
Yes a massive network of cars that all know exactly where everyone is at all times sounds very comforting…
This. Screw more of that data and AI crap. Who will check and manage that network? Private entities like Stellantis or VW (Electrify America is a fail)? Perhaps local governments, like the Alabama state agencies? Or a Wall St. financial behemoth who buys out the infrastructure and squeezes the life out of us just to get from A to B? Ah wait, the Feds! Yeah, that would all work great.
A lesser writer/thinker would have used this accident as a reason to dump on self driving cars. Thank you for acknowledging it’s messy out there and not falling for “omg it hit a kid get these off the road.” I have no doubt that’s how this is being covered elsewhere.
I would say to go see Gizmodo, but one thing that is always worth remembering is that in journalism, it’s the editors who are in charge of headlines, not the authors.
I was actually involved in an investigation into a collision avoidance RADAR for automobiles in the early 1980s.
Knowing the intent of every vehicle and pedestrian would require a lot of data. Now recognize that there could be a lot of vehicles in radio range. Reliable data communication of that volume, at the rate and latency required, is still not all that easy.
From what I remember, the data latency is one of the biggest issues. We may have the image processing capability to know what the collision threats are, but it still needs to be shared with other vehicles and pedestrians.
And how do you stop a pedestrian or human powered vehicle from ignoring the request to stay out of the trajectory of a motor vehicle which cannot possibly avoid a collision?
Don’t you remember 5G is gonna fix all that?
I detect your sarcasm, but that was never the promise. There is a fair amount of latency in an IEEE 802 network. High data volume, yes. Low latency? Not that simple.
I have no doubt. I just remember during the initial rollout the promise was it would enable instantaneous mesh networks specifically for things like self driving cars.
To the layman a thousandth of a second is instantaneous. In some control systems it is an eternity.
One example I faced was a tool turning a screw at 1000 RPM that reported the torque required 1000 times a second. It was overdriving screws.
And it was connected with gigabit Ethernet. It still took about 4 milliseconds to stop. Unconstrained, that was 4 turns; at 0.75 mm per thread, over 3 mm of travel. That guaranteed that it overdrove the screws.
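The relationship in that anecdote (spindle speed, stop latency, and thread pitch determining overrun travel) can be sketched as a back-of-the-envelope check. This is an illustrative formula with assumed numbers, not the actual tool's control code:

```python
# Back-of-the-envelope: how far a screw advances while a stop command
# is still in flight. Illustrative only -- not the tool from the anecdote.

def overrun_travel_mm(rpm: float, latency_s: float, pitch_mm: float) -> float:
    """Extra linear screw travel if the spindle keeps turning for latency_s."""
    turns = (rpm / 60.0) * latency_s  # revolutions completed during the delay
    return turns * pitch_mm           # linear advance along the thread

# A 0.75 mm pitch screw and 4 ms of stop latency, at assumed spindle speeds:
for rpm in (1_000, 10_000, 60_000):
    mm = overrun_travel_mm(rpm, 0.004, 0.75)
    print(f"{rpm:>6} RPM -> {mm:.3f} mm of overrun")
```

The point of the sketch is how directly latency converts into overdriven fasteners: the overrun scales linearly with both spindle speed and delay, so a few milliseconds that sound harmless can mean millimetres of travel at high speeds.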
Holy screwdrivers Batman – that is one fast screwing machine.
A mesh network as imagined / described (I remember that period) increases latency as each participant in the momentary mesh of n cars in range of yours needs to multicast, and you get O(n^2) connection negotiations even before you transmit any data. Those negotiations are lengthened by any security protocol overhead (e.g. TLS 1.3 does a lot of work even before any data is exchanged).
Even if each car’s wireless NIC (or Bluetooth radio) is nominally capable of transmitting data quickly, the added overhead of multicasting and encryption, even if it’s 10ms per node, can make a difference. An extra 1 second at 65mph before you engage the brakes is several car lengths. If you’re following or being followed too closely, that latency can mean a crash vs a close call.
Some protocols let you prioritise transmission speed over accuracy, which can help, but may not, if some of the data gets mangled between another car and yours, or doesn’t arrive at all.
There’s no free lunch. A mesh network may work under ideal conditions (relatively dense traffic at low speeds, like city driving), but those are conditions where unassisted human reaction times are already good enough.
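The arithmetic behind the "extra second at 65 mph" point above is easy to make concrete. A minimal sketch, assuming a rough 4.5 m average car length (my assumption, not a figure from the comment):

```python
# How much extra distance a network delay costs before braking even starts.
# Hypothetical figures: the ~4.5 m car length is an assumption.

MPH_TO_MPS = 0.44704   # metres per second in one mile per hour (exact)
CAR_LENGTH_M = 4.5     # rough average passenger-car length (assumed)

def latency_distance_m(speed_mph: float, latency_s: float) -> float:
    """Distance covered while a message is still in flight."""
    return speed_mph * MPH_TO_MPS * latency_s

# 10 ms per node up to a full second of accumulated mesh overhead, at 65 mph:
for latency_s in (0.010, 0.100, 1.0):
    d = latency_distance_m(65, latency_s)
    print(f"{latency_s * 1000:>6.0f} ms -> {d:6.2f} m ({d / CAR_LENGTH_M:.1f} car lengths)")
```

At 65 mph a full second of accumulated latency is roughly 29 m, i.e. about six and a half car lengths under this assumption, which is the difference between a close call and a crash when following distances are short.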
It’s always the network. And when it’s not the network, it’s DNS, which is the network.
It’s always the network.
Considering how GPS on navigation apps (Waze, Google Maps, and so on) is delayed, self-driving technology is still rightfully suspect. How often have you been told to make a turn just as you pass that road? This week I was told “You are approaching a railroad crossing!” a couple of seconds after crossing it.
I feel like the main difference, safety-wise, between humans and automata, is anticipation. A computer may react faster, but maybe a human would have seen the kid running on the sidewalk, and so they’d be on their guard and sort of ready for it, when the kid suddenly appeared in front of them. All a self-driving car can do is react to a situation that’s already happening.
In other words, “I bet that guy is going to want to change lanes real suddenly, when he realizes that he’s behind a school bus and approaching train tracks,” is the sort of helpful thought that a self-driving car can’t have.
What percent of human drivers will have that thought though?
It doesn’t have to be a conscious thought. People anticipate all the time, whether or not they notice it.
AI is actually good at that; the problem is you get a lot of false positives. If you tune out the false positives, because otherwise you’re braking for hallucinations, then you start running over kids.
Yes, anticipation critically informs both action and reaction when driving. What I’ve heard about automated driving and driver assists is that they’re (too) shortsighted, literally and figuratively.
About 25 years ago I was on a rural divided 4-lane, probably going 65 or so. A kid on a skateboard launched from behind a mailbox, maybe 50 feet in front of me. Full panic brake, haul the wheel left into the median. Pretty sure the kid crossed my back bumper as I was fully sideways. I got it sorted out and stopped in the median, pointed in the right direction!! The guy who stopped behind me was amazed I didn’t roll the car.