Intuition: It's what self-driving cars lack

By Phil Berg
Special to The Detroit News

I spent several days in 1996 riding on Swedish highways with Christer Gustafsson, longtime chief of Volvo's safety research department. One day he told me he had never had an accident.

“I guess I have spent so much time studying crash scenes I think I just avoid putting myself into situations where they happen,” he said. “I don’t know, I get this sense.”

Driving a car is complicated, and some experts are questioning when, or whether, artificial intelligence will be fully up to the task of taking over.

(Photo caption: Delphi engineers test autonomous cars at the University of Michigan Transportation Research Institute’s Mcity, Sept. 11, 2017, in Ann Arbor, Michigan.)

"There’s growing concern among AI experts that it may be years, if not decades, before self-driving systems can reliably avoid accidents,” New York University artificial intelligence expert Gary Marcus told The Verge technology website.

One open question in the development of self-driving cars is whether to try to make the vehicles think like humans, or to train them to navigate roads by methods that don't attempt to model human intelligence.

Some experts believe that people act primarily on feelings and intuition, not by thinking through every move. That includes driving.

Self-driving cars being tested today are “taught” millions of potential traffic situations, covering how traffic works and what can go wrong, and their systems can search those possibilities very quickly. That is how IBM's Deep Blue chess computer won games against chess champion Garry Kasparov in 1997.
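For a rough feel of what that kind of brute-force lookahead means, here is a minimal sketch in Python of exhaustive search applied to a toy take-away game. It is purely illustrative; none of the names, rules or numbers come from Deep Blue or from any carmaker's software.

```python
# Minimal sketch of brute-force lookahead, the kind of exhaustive search
# Deep Blue-style programs rely on. Toy game: players alternate taking
# 1 or 2 items from a pile; whoever takes the last item wins.

def minimax(pile, maximizing):
    """Score a position by searching every possible continuation."""
    if pile == 0:
        # The previous player took the last item, so the player to move has lost.
        return -1 if maximizing else 1
    scores = []
    for take in (1, 2):
        if take <= pile:
            scores.append(minimax(pile - take, not maximizing))
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Pick the move whose resulting position scores best for us."""
    return max((t for t in (1, 2) if t <= pile),
               key=lambda t: minimax(pile - t, maximizing=False))

if __name__ == "__main__":
    print(best_move(7))  # exhaustive search says: take 1, leaving the opponent 6
```

The program never "understands" the game; it simply checks every branch to the end and keeps whatever scores best, which is the contrast the AI critics quoted here are drawing.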

Computer scientist Ray Kurzweil says Deep Blue's process is not thinking, at least not thinking as a human does it. Kurzweil explains that Deep Blue's chess-playing ability did not rest on “graceful” moves. Instead, it won by relying on quick access to an enormous memory. Humans, he explains, predict future possibilities through pattern recognition, and current AI development strives to replicate that.
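By contrast, a crude way to illustrate pattern recognition is to judge a new situation by its similarity to remembered examples rather than by enumerating outcomes. The Python sketch below does that with a tiny nearest-neighbor lookup; the features, labels and numbers are invented for illustration and do not describe how any production driving system is built.

```python
# Minimal sketch of pattern recognition by similarity: a new situation is
# labeled by finding the most similar previously seen example.
# All values below are made up for illustration only.

import math

# (closing speed m/s, gap in meters, lateral offset in meters) -> response label
EXAMPLES = [
    ((12.0,  8.0, 0.2), "brake hard"),
    (( 3.0, 25.0, 0.1), "maintain speed"),
    (( 9.0, 15.0, 1.5), "steer and slow"),
    (( 1.0, 40.0, 0.0), "maintain speed"),
]

def nearest_label(situation):
    """Return the label of the stored example most similar to the new situation."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    features, label = min(EXAMPLES, key=lambda ex: distance(ex[0], situation))
    return label

if __name__ == "__main__":
    print(nearest_label((11.0, 9.0, 0.3)))  # closest to the first example: "brake hard"
```

The point of the contrast is that this approach generalizes from resemblance to past experience, which is closer in spirit to the human "sense" described above than an exhaustive search is.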

Last October, Douglas Hofstadter, a professor of cognitive science at Indiana University, told Quartz.com that Deep Blue and similar AI successes came from making computers perform highly specific tasks, drawing on large stores of data to compare possible outcomes, which is how AI is approached today.

“I don’t think that what we have today is 'intelligence,'” said Hofstadter, who recalled a recent traffic jam in which cars were driving through a median strip to leave a freeway. “This is part, for me, of what 'driving' is. It’s showing that the real world impinges in many facets on what the nature of driving is.

"If you look at situations in the world, they don’t come framed like a chess game or a Go game or something like that. A situation in the world is something that has no boundaries at all, you don’t know what’s in the situation, what’s out of the situation. I don’t think at this point we’re doing what brains do. We’re simulating the surface level of it, and many people are falling for the illusion.”

I've ridden recently in self-driving cars on test tracks — notably in Volkswagen Passats specially outfitted by supplier Continental — and in cars with advanced accident-avoidance systems.

The cars perform remarkably when braking, steering and pulling onto the shoulder when they get confused. But so far, they don't have the sense Volvo's Gustafsson describes, the one that allows him to drive accident-free.