You’ll hop into the back seat and off you go, leaving the driving to the computer.
Not so fast.
Driverless cars are indeed coming. Automakers are already road-testing them in select US cities, with standby drivers ready to take control of the steering wheel if anything goes haywire.
But let’s face it: even though automotive engineers are developing some amazing navigational technologies meant to function flawlessly in a driverless vehicle, it could take a while to persuade passengers to take that leap of faith and turn the wheel over to a robot.
In addition to the daunting task of designing smart, driverless cars, there’s another potential pitfall that hasn’t been much talked about. And that’s the considerable risk that the software being designed to autopilot the future fleet of autonomous vehicles could be hacked.
“Driverless cars have all of the problems of regular car security, and then you add in a bunch more computers and sensors and take the human out of the front seat altogether, so it’s a difficult problem,” says automotive security researcher Charlie Miller.
It’s not a problem that’s insurmountable, according to Miller, but one that “we’re going to have to solve if we’re going to rely on these vehicles.”
Miller knows what he’s talking about. Two years ago he and another cutting-edge security researcher remotely hacked into a Jeep Cherokee via its internet connection.
It was a controlled experiment, says Miller. “It was only with a test subject who had agreed to be part of the experiment, so we didn’t do it to any random car.”
Here’s how he did it (don’t try this at home!): “The radio, the head unit of the car, was talking to the internet to get things like traffic updates. The computer in the car had the equivalent of a cellular phone connection to the internet, and I was able to connect to it. There was a vulnerability in their code that I took advantage of, and then I was able to run code on their car. I could then send messages to the other components, like the brakes and steering, and tell the steering, ‘Hey, you should turn the steering wheel right now!’ and the car would answer and do that.”
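Those messages to the brakes and steering travel over the car’s internal CAN bus. As a rough illustration only (not Miller’s actual exploit, and the message ID and payload below are invented for the example), here is how a classic CAN frame is laid out in the 16-byte wire format used by Linux’s SocketCAN interface:

```python
import struct

# Linux SocketCAN packs a classic CAN frame into 16 bytes:
# a 32-bit arbitration ID, a 1-byte data length code (DLC),
# 3 bytes of padding, then up to 8 data bytes. In-car components
# such as the brakes and steering act on frames with specific IDs.
CAN_FRAME_FMT = "<IB3x8s"

def pack_can_frame(arbitration_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame in SocketCAN's struct can_frame layout."""
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, arbitration_id,
                       len(data), data.ljust(8, b"\x00"))

# Purely illustrative values; real steering/brake message IDs and
# payloads are specific to each vehicle and are not public here.
frame = pack_can_frame(0x123, b"\x01\x02")
print(len(frame))  # 16
```

The key security point is visible in the format itself: CAN frames carry no sender authentication, so once an attacker can run code on any module attached to the bus, the other components have no way to tell a forged frame from a legitimate one.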
Since shocking the automotive industry with that demonstration (“It was pretty scary stuff.”), Miller has been working, first at Uber and now at a Chinese startup called Didi, on making experimental self-driving cars resilient to hacking.
“I was one of the few people who hacked a car, and so I think it's perfect for me to work on a car to make it unhackable. I know the way that bad guys get in, and I can help try to figure out ways to make it to where they can’t.”
So are automakers taking this issue of security seriously? “I think so,” says Miller. “I mean, they need to get people to trust the vehicles, not just that they're going to get them to the right place and safely, but that they don't have to worry about the cybersecurity threat, as well. So it's pretty important to them for their users to trust, and a part of that is to make sure the security is done right.”
Miller says automotive researchers already understand the problem pretty well and know how to make cars more secure. The task ahead, he says, is mostly a matter of doing the hard work of engineering and design and following through. And then testing, testing and more testing. This will require time. That’s why Miller says it’s important to get going on the problem in order to get ahead of the curve. In other words, don’t wait until "we have 1 million driverless cars driving around, and then say, 'Oh, maybe we should have thought about security,'" says Miller.
Remember that futuristic driverless taxi that pulls up to the curb? Would Miller jump in the back seat and nonchalantly read his newspaper while the robot drives?
“Absolutely, I would get in. I would much rather have a computer driving a car than a person because I've seen a lot of really bad human drivers out there.”