This is a fascinating article.
http://www.newyorker.com/reporting/2013/11/25/131125fa_fact_bilger?currentPage=all
It's cool to hear about some of the specific challenges the engineers face, like those subtle "nonverbal" cues we as drivers give each other. For example:
"
Four-way stops were a good example. Most drivers don’t just sit and wait their turn. They nose into the intersection, nudging ahead while the previous car is still passing through. The Google car didn’t do that. Being a law-abiding robot, it waited until the crossing was completely clear—and promptly lost its place in line. “The nudging is a kind of communication,” Thrun told me. “It tells people that it’s your turn. The same thing with lane changes: if you start to pull into a gap and the driver in that lane moves forward, he’s giving you a clear no. If he pulls back, it’s a yes. The car has to learn that language.”
I'm really looking forward to seeing this; it could be a huge, huge shift in transportation.
One of the really interesting things they talk about is risk:
"
Still, sooner or later, a driverless car will kill someone. A circuit will fail, a firewall collapse, and that one defect in three hundred thousand will send a car plunging across a lane or into a tree. “There will be crashes and lawsuits,” Dean Pomerleau said. “And because the car companies have deep pockets they will be targets, regardless of whether they’re at fault or not. It doesn’t take many fifty- or hundred-million-dollar jury decisions to put a big damper on this technology.” Even an invention as benign as the air bag took decades to make it into American cars, Pomerleau points out. “I used to say that autonomous vehicles are fifteen or twenty years out. That was twenty years ago. We still don’t have them, and I still think they’re ten years out.”
"
When you think about it, what's better:
a) You driving a car where you have a 1 in 100,000 chance of killing yourself, but at least you are controlling it, and maybe in that final moment you know you f*(&ed up and it's your fault, or
b) You riding in a self-driving car that has a 1 in 500,000 chance of killing you, but when it happens it's like ... holy crap, a software glitch.
I would hazard a guess that society would feel that (b) is worse than (a) just because it's totally out of your control, even though on the whole you would be much safer.
So many interesting questions around this concept.