In 2016, when we originally raised money for our self-driving car company, we were part of an enormous boom. There were dozens or possibly hundreds of startups with an idea for getting a piece of the notional trillion-dollar autonomous car market. The most straightforward way to get involved was to start a "full stack" company. These companies were building the entire software stack for a self-driving car (or delivery robot, or truck, or shuttle bus): the software that controlled everything between the sensors and the wheels. Among the full stack companies there were some that were small, or fly-by-night, or obviously outmatched by the task. Then there were the heavyweights. Tesla occupied their own chaotic space, but besides them the big players included Waymo (recently spun out of Google), Aurora (founded by a Google vet), Uber ATG (another Google vet), Argo (likewise), Zoox, and Cruise. Each of these companies was immensely well funded, serious, and nominally sophisticated. Several of them were acquired for enormous sums—Zoox by Amazon, Argo (sort of) by Ford, and Cruise by GM for a cool billion dollars. The investors in these companies were betting on which one would win what they very much understood to be a race to commercial deployment of urban robotaxis, the brass ring of self-driving vehicles.
In 2023, that race, such as it was, seemed to be largely down to two. Uber ATG flamed out in the wake of their killing of Elaine Herzberg; Argo was sold for parts; Aurora is struggling to secure enough capital after an ill-considered (but maybe unavoidable) public offering via SPAC; and Zoox is working quietly within Amazon with what seem likely to be different and more modest goals than its original plan to deploy fleets of robotaxis. Cruise and Waymo, on the other hand, have large public deployments, and from the outside it would be easy to understand them as in similar positions, neck-and-neck in the race they'd been competing in for close to a decade. The truth of the matter, a truth very well understood by people who know self-driving, is that their situations are vastly different.
Waymo is in its second decade of a deliberate and unhurried testing process. Starting with Google's hiring of key members of the teams behind the successful 2007 DARPA Urban Challenge, the vast tech monolith's plan has been to outlast the rest of the field, iterating on the fantastically difficult problems that comprise the last mile of autonomous vehicles. Their shifts to fully driverless operation—first in Phoenix, then in San Francisco, then Los Angeles—are part of that testing regime, as are their forays into accepting real passenger rides. They are collecting data on what works and what doesn't about a robotaxi service; the revenue they are making right now is largely incidental to their longer-term project.
Cruise is in a much thornier spot. The fifteen billion dollars they've raised seems like a lot. I mean, it is a lot, it's a phenomenally huge amount of money. The capital needs of robotaxis, though—whether or not people ever actually want them—are almost unimaginably huge. Fifteen billion in invested capital is, most people in the industry would agree, well in the range of trying to get it done on the cheap.
From the external evidence, this reality hit home for Cruise a few years ago. They had already exhausted the most credulous source of external funding, SoftBank, and the news filtering up from the development team through management seemed to be that they were still years away from revenue service. The company fired the CEO amidst a broader management shakeup and brought back Kyle Vogt, the company's brash—in the "move fast and break things" way beloved of zero-interest-rate tech investing—founder. In all likelihood, they brought him in because he said that he could get Cruise to revenue service: out of testing and on the road, proving that robotaxi services could make real money.
He did that, after a fashion. Cruise massively ramped up their fleet in San Francisco. They got permission from California regulators first to run their vehicles without safety drivers, then to accept paying fares, then to do that all over the city, day and night. I think the key thing to understand about their ramp-up—something that is, to me, clearly evident from the way it happened—is that it wasn't about testing. Cruise was trying to earn revenue. Every time they expanded their service—more hours, more parts of San Francisco, into another city full of eager tech first adopters, Austin—they were trying to get to the next point on a revenue line that went up and to the right. They were marking the graph that would be shown to the nervous investors, from GM and elsewhere, to convince them that this $15B investment was on track and would pay off presently. They had run out of time, or believed they had, for further testing and iteration. They had to show growth.
As this rollout happened, reports of incidents started piling up. Cruise wasn't particularly in the business of affirmatively acknowledging or announcing these, but with so many vehicles on the road being used by the public, there wasn't really any way to hide them. Some of these incidents were silly as much as anything: a human driver pinned by a herd of Cruise cars that had gotten stuck in some kind of mapping or teleoperation gap. Some, if you understand the technology, were more troubling.
People—like me—have been saying for years that one of the big issues with self-driving cars is that they won't fail in the same ways that human-driven cars fail. They would stop short for no reason, responding to a sensor glitch, or would fail to see objects directly in front of them that a human would have no problem identifying. The fact that they see the world dramatically differently than humans do would make sure of it. Solving the problem of how to make these vehicles behave in a human-like way, even though they operate so differently from a human driver, is the central challenge of deploying a self-driving car.
Many of the incidents that came to light with Cruise cars were precisely failures of the vehicles to behave in a coherently human way. Cruise cars stopping in the middle of intersections, or in the middle of a road maneuver. Cruise cars passing much too close to pedestrians, like they didn't see them or didn't understand where they were going. Cruise cars failing to understand the clear—and quite angry!—body language of first responders trying to get them out of the way. These kinds of issues are predictable, related to each other, and indicative of a self-driving software stack that has NOT solved the thorniest technical problems for self-driving cars. Rather than being outliers in an otherwise perfectly functioning system, they were clues, indicators that the work of designing and testing these cars was not finished, that the hardest effort was still to come.
Combine that with an effort to move out of the testing phase, an effort to focus on growing revenue, and you end up with Cruise's problems of today. Building, deploying, and scaling a revenue taxi service is a challenging and consuming task, one that is not compatible with the kind of careful testing and iteration necessary to even attempt to solve the substantial (possibly even unsolvable) technical challenges between today's state of the art and a self-driving car that works how people expect. By abandoning focus on the technical challenges, Cruise, with Kyle Vogt at the helm, made it inevitable that their vehicles would fail, and, because these are cars we're talking about, they made it inevitable that their cars would cause unnecessary injury.
When a Cruise car stopped on top of a pedestrian who had been hit first by a human-driven car, that was quite possibly a situation that was out of Cruise's control. What the car did next, though, was execute a "pull over" maneuver. It drove twenty feet to get to the curb with the seriously injured pedestrian under its wheels. That is, as a matter of ethics and optics, quite bad. In fact, though, the situation is even worse than that. As Cruise cars had gotten deployed more widely, one of the problems that turned up was that their vehicles were unresponsive or insufficiently responsive to emergency vehicles. The behavioral question of what to do, as a vehicle, in the context of an emergency vehicle or emergency situation, is a difficult one. For humans it resolves to attempting to discern the intentions of first responders based on the speed of their vehicles, the presence of lights and sirens, evidence of vehicles parked to respond to an emergency, visual or vocal signals from first responders, and a million other cues that are instantly understandable by the human brain. Per Moravec's paradox, that kind of understanding of intention is fantastically difficult for machines. For Cruise, trying to ramp up revenue deployments, solving that problem is simply not within the realm of technical possibility. So if that is true, and Cruise was getting intense pressure from city and state regulators to improve their performance around emergency vehicles, the most expedient response would be to simply hard-code their vehicles to pull over whenever it seems like it might be important. Err, nominally, on the side of safety.
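To make the shape of that failure concrete, here is a purely hypothetical sketch in Python. None of these names or structures come from Cruise's actual stack; it only illustrates the difference between a hard-coded trigger that fires regardless of context and the much harder, context-aware behavior that would be needed to pull over safely.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    """Hypothetical, simplified summary of what a perception stack might report."""
    emergency_cues: bool        # sirens, flashing lights, a collision just occurred, etc.
    clear_path_to_curb: bool    # is the space to the right actually free?
    object_under_vehicle: bool  # anything (or anyone) detected beneath the car?

def choose_maneuver_hardcoded(scene: Scene) -> str:
    # The expedient rule: any emergency-like cue means "get out of the way."
    # It never asks whether the pull-over itself is safe to execute, so it
    # will happily drive to the curb with an obstacle under the wheels.
    if scene.emergency_cues:
        return "PULL_OVER"
    return "CONTINUE"

def choose_maneuver_context_aware(scene: Scene) -> str:
    # The far harder alternative: the same trigger, gated on whether the
    # maneuver can be performed safely in the current scene.
    if scene.emergency_cues:
        if scene.object_under_vehicle or not scene.clear_path_to_curb:
            return "STOP_AND_REQUEST_HELP"
        return "PULL_OVER"
    return "CONTINUE"
```

The first function is trivial to ship on a deadline; the second depends on perception and judgment that, per everything above, does not reliably exist yet.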
Do I know that the Cruise vehicle which dragged the pedestrian was executing a hard-coded pull-over maneuver that was insufficiently attuned to scene context? Not for certain. But if I were a technically informed regulator, seeing that behavior happen in a situation where, in context, it was affirmatively harmful would set off all kinds of alarm bells. From that perspective, Cruise's cover-up, where they hid the final seven seconds of the accident video from regulators and journalists, starts to make more sense. Did they risk angering regulators with their lack of transparency? They did, as it turns out. But they very likely knew that the regulators would understand that their vehicle's behavior in that situation is close to dispositive proof that they have not surmounted the greatest and most important technical challenges involved in running their service safely.
I don't really know what happens now, with Cruise. Whether Kyle Vogt keeps his job or not is, I think, a secondary question. The big question is whether Cruise, trapped as it is between the inexorable demands of the capital it has taken on and the impenetrable difficulty of the remaining technical challenges of deploying their fleet commercially, has a path forward at all. If I had to bet—and I'm sure glad I don't—I don't think I'd bet on them existing in anything like their current form a year from now. The curtain has been pulled aside, and the revealed road ahead for Cruise's robotaxis is dark, uncertain, and quite possibly impassable.