Wall Street Journal has an interesting article today about self-driving cars:
www.wsj.com
Self-Driving Cars Could Be Decades Away, No Matter What Elon Musk Said
Experts aren’t sure when, if ever, we’ll have truly autonomous vehicles that can drive anywhere without help. First, AI will need to get a lot smarter.
Basically, some AI experts argue that AI is not yet good enough for true full self-driving. They point out that even the best self-driving cars still need help, like HD maps and remote operators. So they expect we will keep seeing limited self-driving, as with Waymo and others today, but that true full self-driving cars, which can drive anywhere with no human assistance, are still decades away. They explain that current AI is good at recognizing patterns but bad at extrapolation:
"Problems with driverless cars really materialize at that third level. Today’s deep-learning algorithms, the elite of the machine-learning variety, aren’t able to achieve knowledge-based representation of the world, says Dr. Cummings. And human engineers’ attempts to make up for this shortcoming—such as creating ultra-detailed maps to fill in blanks in sensor data—tend not to be updated frequently enough to guide a vehicle in every possible situation, such as encountering an unmapped construction site.
Machine-learning systems, which are excellent at pattern-matching, are terrible at extrapolation—transferring what they have learned from one domain into another. For example, they can identify a snowman on the side of the road as a potential pedestrian, but can’t tell that it’s actually an inanimate object that’s highly unlikely to cross the road."
Other experts, including at Waymo and Aurora, argue that you don't need to "solve AI" in order to have true full self-driving:
A growing number of experts suggest that the path to full autonomy isn’t primarily AI-based after all. Engineers have solved countless other complicated problems—including landing spacecraft on Mars—by dividing the problem into small chunks, so that clever humans can craft systems to handle each part. Raj Rajkumar, a professor of engineering at Carnegie Mellon University with a long history of working on self-driving cars, is optimistic about this path. “It’s not going to happen overnight, but I can see the light at the end of the tunnel,” he says.
This is the primary strategy Waymo has pursued to get its autonomous shuttles on the road, and as a result, “we don’t think that you need full AI to solve the driving problem,” says Mr. Fairfield.
-----------
My take: I think the article is right about the current challenges with AI and that we probably won't see true L5 in the short term. However, I think it is probably wrong that "solving FSD" will take decades. We tend to underestimate the speed of technological progress; just look at how quickly computers have evolved. It is very possible that big AI breakthroughs in, say, 5 years could get us to better FSD much sooner than "decades". We might also find clever engineering ways to "solve FSD" without solving AI, as Rajkumar suggests. After all, we've already solved a lot of tough engineering problems without "solving AI". So I think self-driving tech will keep getting better and we will see more self-driving cars on the roads in the years to come. I am optimistic that it won't take decades to "solve FSD".
I would also argue that limited L4 self-driving may be good enough, at least for the short term. Sure, true generalized L5 self-driving, with human-like intelligence, would be the holy grail, but I don't think it is necessary. After all, the goal is self-driving cars that are safe, reliable, and useful for applications like ride-hailing. Does it really matter how we achieve that goal, as long as we achieve it? If it takes some geofencing, HD maps, etc. to get there, so what?