title: Self-Driving Cars are Self-Driving Bullets
url: https://themobilist.medium.com/self-driving-cars-self-driving-bullets-955dfd2c5150
hash_url: df1a769924

Why even Elon Musk has been forced to admit that navigating streets is hard

A car is just a slow-moving bullet with a stereo system.

When we drive a car, we typically think our main task is navigating from point A to point B. But mostly what we’re doing is trying to keep from killing someone. That is Job One. Everything else is secondary. If you were to get into a car and fail to get from point A to point B, that would suck. But if you were to kill someone, that would be orders of magnitude worse.

So 99% of what you’re doing when you’re behind the wheel of a car is attempting to not commit homicide.

This is a useful point to keep in mind whenever you read about the imminent arrival of “self-driving cars”. Because when tech folks tell you they’re building a self-driving car, what they’re really promising is to make a self-driving bullet that can weave through city streets without hitting anyone.

Kind of clarifies the stakes, doesn’t it?

Indeed, this is why tech executives have been so chastened by the challenge. They don’t like to admit defeat. But cars and roads are an environment where they cannot bluster and PowerPoint their way out of mistakes — because this time their errors quite directly injure people, with the unforgiving physics of hurtling two-ton chunks of steel.

I thought of this when I read Elon Musk’s tweet last weekend about Tesla’s FSD, its “full self-driving” software …

“Didn’t expect it to be so hard”: There’s an epitaph you could chisel on the tombstone of self-driving car hype.

The buzz started in 2005 when a Stanford team won the DARPA Grand Challenge, creating a vehicle that drove itself over 132 miles of desert. Over the next decade, companies from Google/Waymo to Uber to Tesla and old-school automakers ploughed boatloads of R&D into the cause.

The hype ran hot, because upending global transportation would offer obscene profits. Uber execs dreamed of robot taxis (with no drivers to pay!); automakers imagined commuters chilling with Netflix while their car whisked them to work. “If you’re well versed in robotics and you’re not working on self-driving cars,” as the head of a major university robotics lab told me in 2015, “you’re either an idiot or you have a passion for something else, because self-driving cars are a multi-billion-dollar industry.” Breathless articles proclaimed self-driving cars Would Change Everything; I wrote some of them.

Image: “Waymo self-driving car”, by zombieite

Musk expertly surfed this wave of hype. Like many automakers, Tesla had begun offering driver-assistance tech — an “autopilot” that helped keep you in your highway lane, say — in 2015. But Musk went much further, moistly promising that self-driving cars would arrive any day now. Back in 2016 he claimed that within two years a Tesla would be able to drive itself from NYC to LA, ye gods.

But self-driving dreams soon crashed to earth. Why? Well, the software folks began to discover that the physical world is painfully complex. They were accustomed to working on the Internet, where bits generally do what they’re told. Now they had to worry about atoms: Cameras and sensors getting clogged up by rain and snow and dirt, pedestrians behaving unpredictably. And while self-driving engineers made some genuinely remarkable breakthroughs in AI image-recognition, they still don’t know how to give their AI the “common sense knowledge” that humans use to navigate the world — our generalized know-how about, say, the way ice and dogs and bicycles and skateboarders and floating plastic bags behave. Deep-learning AI can do pattern-recognition at light-speed, but human cognition is more than mere pattern recognition. To navigate the messy reality of city streets, we also reason about things, using our Extensive Knowledge About Stuff. That’s how we deal with the unexpected. Self-driving cars can’t yet do that. No AI can.

So that’s part of why autonomous systems have injured and killed people — from the Uber car that slammed into a pedestrian walking her bicycle across the street to the many times that Tesla’s “autopilot” AI has failed to steer a car away from a brutal accident, as a recent New York Times investigation revealed. In one case, a Tesla driving on autopilot failed to stop before ploughing into a truck ahead of it, causing the truck to roll over, killing a 15-year-old boy riding inside.

These days, the makers of self-driving cars have become a slightly more cautious crew. The head of Waymo has said that fully autonomous cars might be “decades” away. Uber unloaded its self-driving car division.

It’s taken Musk longer to admit reality. Back in January he was still breathlessly proclaiming that Tesla would release self-driving software “at least 100% better than a human” by the end of this year. His own engineers were trying to tamp down these delusions; in a memo to California authorities, they admitted this wasn’t remotely possible. To get a sense of how far away Tesla is from this goal, behold this March 2021 video of Tesla’s “fully self driving” software attempting to navigate Oakland, in which the car makes far too many inexplicable and terrifying moves, like swerving into the wrong lane during a turn …

Yowsa. So I felt a sliver of almost-hope when I saw Musk’s tweet this July 4th weekend, in which he admitted that wow bruh, fully self-driving cars are super hard, who knew? Maybe he’s finally realizing that he’s not running a company that makes cars, but running a company that makes slow-moving bullets.

Frankly, it’d be good if he dialed back the futuristic hype even further. Tesla should focus instead on improving its actually-existing AI, its autopilot system. After all, the general idea of using computers to help humans avoid collisions is very good; we kill over 36,000 people a year with cars, so anything that drives that number down is worthwhile. One study found simple crash-avoidance tech — like rear-end collision-warning — reduces accident frequency by about 3.5%. Tesla’s autopilot seems to offer a similar improvement, though as this analysis by Brad Templeton notes, it’s hard to know for sure.

Alas, because Musk has been for years bombastically touting Tesla’s fully-self-driving goals, too many users are already treating the autopilot as if it were a self-driving “solution” — and pulling stunts, occasionally ruinous, like riding in the backseat while convincing the software to keep going. Officially Tesla doesn’t approve such idiotic uses of its technology, but it could do much more to thwart and discourage them. This is an area where, from top to bottom, we could use more defensive driving.