The cost we bear
Autonomous car makers are trying to transform an industry built on death, but is that the part they want to transform?
A cyclist got killed here a few weeks back. He was 57, I assume a commuter, riding in a wealthy suburb. He was hit by a truck turning left as he rode through an intersection. I was talking to my spouse about the crash, and about what might have stopped it. The truth is, I think, that this kind of thing is close to inevitable if we want cars to be practical for urban transportation. We've accepted this cost—thousands and thousands of lives, every year—as a society. Accepting it, allowing ourselves to live with this level of violent death, inures us to it, and primes us to allow more and more carnage at the margins. It's a grim progression, and it's also very much what the less ethical players in the autonomous vehicle space—I'm thinking particularly, if not solely, of Tesla—are relying on to smooth social acceptance of their deadly technology.
Left turns across traffic have been a long-standing bugaboo in the autonomous car world, and they are still something that autonomous cars can't handle very well. Keen-eyed observers have noted that Cruise's vehicles in San Francisco continue to avoid them where possible. The reason is that there is almost never a large enough gap in oncoming traffic for the predictive systems in an autonomous car to be confident of making the turn before an oncoming car arrives. That is because the prediction systems in autonomous cars rely in large part—in many cases, solely—on extrapolating future positions from the physics of the current situation. Each car has a position and a velocity, and the system extrapolates from those to estimate where the car will be N seconds in the future. The autonomous car—the "ego vehicle", as the term of art goes—can then check whether a driving plan that has it passing through the opposing traffic lane at time T+N is likely to result in it hitting the oncoming car. This system can be, and often is, usefully elaborated—you can predict multiple possible paths for a vehicle based on historical data or a priori guesses about what people might do in a certain intersection, and you can add some fuzziness to your estimates to account for vehicles accelerating or braking—but that's the gist of it. A system based fundamentally on the physics of the moving objects in the environment turns out, in actual traffic conditions, almost never to conclude that the ego vehicle can make the turn safely.
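To make that mechanism concrete, here is a minimal sketch, in Python, of the constant-velocity extrapolation and gap check described above. The class names, thresholds, and traffic numbers are illustrative assumptions of mine, not any company's actual planner.

```python
import math
from dataclasses import dataclass

# Sketch of a physics-only prediction: assume each oncoming car keeps its
# current speed, and ask whether it reaches the conflict zone before the
# ego vehicle has cleared it. All names and numbers are hypothetical.

@dataclass
class OncomingCar:
    distance_to_conflict_m: float  # distance from the car to the conflict zone
    speed_mps: float               # current speed, assumed constant

def time_to_conflict(car: OncomingCar) -> float:
    """Constant-velocity extrapolation: time = distance / speed."""
    if car.speed_mps <= 0:
        return math.inf
    return car.distance_to_conflict_m / car.speed_mps

def turn_is_predicted_safe(oncoming: list[OncomingCar],
                           ego_time_to_clear_s: float,
                           margin_s: float = 2.0) -> bool:
    """The ego vehicle's plan occupies the opposing lane until
    ego_time_to_clear_s; require every oncoming car to arrive at least
    margin_s later than that, otherwise refuse the turn."""
    return all(time_to_conflict(car) > ego_time_to_clear_s + margin_s
               for car in oncoming)

# Example: the ego vehicle needs ~4 s to clear the lane, so with a 2 s
# margin it wants a 6 s gap. Traffic at 13 m/s (about 30 mph), spaced
# every 60 m, never leaves one, so the check keeps rejecting the turn.
traffic = [OncomingCar(distance_to_conflict_m=d, speed_mps=13.0)
           for d in (40.0, 100.0, 160.0)]
print(turn_is_predicted_safe(traffic, ego_time_to_clear_s=4.0))  # False
```

Under those assumed numbers the nearest oncoming car arrives in about three seconds, well inside the required gap, which is why a planner built only on this kind of extrapolation ends up waiting indefinitely.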
That raises the natural question of how people, who make left turns across traffic all the time, manage it. There are two parts to the answer. The first, the one that I talked about a lot at my company—and have mentioned quite a bit here—is that human driving is a series of social interactions. When we're trying to make a left turn in a car we're thinking about how likely it is that the driver of the oncoming car is going to be willing to let us go. This judgment is partly about how far away they are, and partly about how fast they're going, but it's really a holistic judgment about what's in their head. We might come to different conclusions for different kinds of cars. We might come to different conclusions at different times of day. We might come to different conclusions based on how aggressive we think the other driver will be, or based on how aggressive we think the other driver thinks we will be. Human drivers will use their car's position to test the willingness of oncoming drivers to yield, pulling far enough into the oncoming lane to trigger uncertainty in the oncoming driver about whether the turning car is just going to go, no matter the risk. A complex social negotiation happens every time a human-driven car makes an unprotected left in front of another human-driven car.
That's one part of the story. The other part of the story is quite simply that human drivers have a much greater willingness to accept risk than autonomous vehicles do. The truth is, most of the time we make a left turn when driving a car we're doing so with inadequate information. We're making a leap of faith that oncoming vehicles will notice us and will, if necessary, take the actions that they have to in order to avoid a collision. We take the adage of our driving lessons and the law—that we should make a left turn when it is safe to do so—and implicitly append "or, at least, when it seems as safe as possible, based on past experience". We are not, as human drivers, necessarily aware that we're doing this, in part because it almost always IS safe, and in part because the ability of human individuals to act confidently even in the face of poor or uncertain estimates of risk is generally necessary to get us through the day.
The level of risk-taking that human drivers accept is something that most autonomous car makers do not believe they will ever be able to accept in their vehicles. One of the reasons the rollout of autonomous cars from reasonably reputable players like Waymo has been so slow is that it is fantastically difficult to figure out how to make a motor vehicle that is practical for urban transportation but which does not take undue risks. This is perhaps the primary reason that it is not clear to me that autonomous cars will ever be a desirable form of urban transportation. If you turn that construction around, though, the corollary is that there is a level of risk that we will not accept from autonomous cars but that we accept every day from human-driven cars and trucks on our roads, and the cost of that acceptance is measured in blood.
The truth is that an intersection where motor vehicles are making unprotected turns is inherently dangerous. The risk-taking that humans do is an inevitable consequence of trying to solve a problem—how to make the left without risk of an accident—that cannot be solved. We have, as a society, come to accept this without even necessarily realizing that we're doing it. It's an inevitable consequence of believing that cars and trucks—large, fast, heavy, deadly in collisions—are necessary to urban transportation and that we should design our urban landscape to make them not only welcome, but efficient.
This acceptance produces a sort of ratchet effect; the crash that I started this essay with, where a cyclist was killed by a turning vehicle, is made vastly more likely by the prevalence of personal vehicles that are too large for a cyclist to be seen around, or to see around. The speed and mass of the modern American passenger fleet, which is ever more dominated by large SUVs and pickup trucks, has also made every aspect of navigating the roads—but particularly navigating around these vehicles as a vulnerable road user like a pedestrian or cyclist—more dangerous. Having long since accepted that some thousands of deaths per year are an unavoidable consequence of living in a car-centric world, we are unwilling—or insufficiently willing—to even stop it from getting dramatically worse.
It’s this ratchet effect that Tesla is implicitly relying on in its rollout of autonomous driving. Tesla’s strategy is conditioned on people simply not doing anything about newly and increasingly dangerous vehicles until it is too late, and that is why I developed a deep dislike for Elon Musk well before he took on the task of transforming a major social media platform into one that explicitly supports antisemitism and eliminationist transphobia. Most of the other autonomous car makers are aware that this strategy is a risky one; the Uber crash showed that even one incident where an autonomous vehicle causes a road death can be existentially damaging to a brand’s rollout of autonomy. That does not mean, however, that they want Musk’s gambit to fail. Some very likely do, but the truth is that if people came to accept a road full of self-driving Teslas incapable of behaving safely—in the same way that they have come to accept oversized SUVs and pickup trucks that render formerly commonplace activities like riding a bike or having your children walk to school increasingly unthinkable in even the remaining parts of the country where they’re viable—that would make life a lot easier for the Cruises and Waymos of the world.
When I worked in autonomous cars my goal was, in a sense, to hold the line on what level of risk we would accept from robots on the streets. I spoke on panels and podcasts about the existential risk revealed by the Uber ATG-caused death of Elaine Herzberg, and I worked to offer companies in the space a solution that would allow them to move at least some of the way towards deploying practical vehicles that did not have to accept the unconscionable levels of risk shouldered by every Tesla driver who turns on Full Self-Driving. I failed, and in some sense perhaps it was inevitable that interest in my point of view would be limited: if you are looking to the past of the automotive industry for guidance, a bet that the public will eventually become inured to the catastrophic ongoing death toll of your product is, ironically, one of the safest bets there is.