The objects that Tesla's cameras see and identify have confidence percentages tagged to them. Objects on the driving surface seem to be tagged at 100%.
Where are the percentages shown? I don't see any percentages in Pranav Kodall's videos or in verygreen's. Is it only for certain classes of object?
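Assuming the tagged percentages come from a standard object detector, here is a minimal sketch of how per-detection confidence might be used to decide what gets displayed. All names, fields, and the 0.5 threshold are hypothetical, not anything confirmed about Tesla's actual pipeline:

```python
# Hypothetical sketch: filtering object-detector outputs by confidence.
# Field names and the threshold are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str             # e.g. "car", "pedestrian"
    confidence: float      # 0.0 .. 1.0
    on_drivable_surface: bool

def visible_detections(detections, threshold=0.5):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d.confidence >= threshold]

dets = [
    Detection("car", 1.00, True),
    Detection("pedestrian", 0.87, False),
    Detection("debris", 0.31, True),
]
kept = visible_detections(dets)
print([d.label for d in kept])  # the low-confidence "debris" is dropped
```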
It's really quite depressing that they are still struggling with the basics. The really hard stuff that Google/Waymo struggled with for so long is well beyond anything seen here, e.g. complex junctions or areas with huge numbers of moving objects.
I suspect that every company, including Waymo, is still struggling with the basics. Until very recently, Waymo was a highly secretive project, and by and large the only public information released about it has been commercials produced by Waymo and other material Waymo has chosen to disclose.
In the media and the general public, I think there has been a huge gap between what people assume about Waymo's capabilities, and what we actually know about Waymo's capabilities. Timothy B. Lee, a journalist who covers autonomy for Ars Technica,
tweeted this:
“Until recently my mental model of Waymo was that their technology was basically ready to go in late 2017 and they were doing a last few months of testing out of an abundance of caution, and to give time to build out non-technical stuff like customer service and maintenance.
This week has forced me to totally re-evaluate that. It now looks to me like Waymo is nowhere close to ready for fully driverless operation in its initial <100 square mile service area, to say nothing of the rest of the Phoenix metro or other cities.
...This means I have no idea how long it will take for Waymo (or anyone else) to reach full autonomy. It could take six months or it could take six years. Maybe Waymo will be forced to throw out big chunks of what they've built so far and start over.”
I think this shows how much of a gap there is between assumption and knowledge.
The public has been shown a few things, none of which lets us actually assess capability:
1) demo videos
2) miles between disengagements
3) future timelines / statements by executives
#3 is almost as fallible as any random person's guess. Just because you work closely with a technology doesn't mean you can predict the future. Experts are often
as bad at making predictions as laypeople.
Demo videos are easy to make, and companies have been producing them since at least 2012, but they convey little about a system's real capabilities. For that you need a large sample of driving, not a few cherry-picked minutes.
So, disengagements then, right? Apparently not. Amir Efrati at The Information reported that companies are not obliged to report the vast majority of disengagements that occur. When a safety driver or engineer takes a vehicle out for routine daily testing, those disengagements don't have to be counted. This makes disengagement numbers essentially useless for telling us how often disengagements actually occur.
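To make the reporting loophole concrete, here is a toy calculation (all numbers invented, reflecting no real company's data) showing how excluding "routine testing" disengagements inflates miles per disengagement:

```python
# Toy illustration with invented numbers: selective reporting can make
# miles-per-disengagement look far better than it actually is.
total_miles = 100_000
all_disengagements = 500   # everything that actually happened
reportable = 25            # only the subset that must be reported

true_rate = total_miles / all_disengagements
reported_rate = total_miles / reportable

print(f"true miles/disengagement:     {true_rate:,.0f}")      # 200
print(f"reported miles/disengagement: {reported_rate:,.0f}")  # 4,000
```

With the same underlying driving, the reported figure looks 20x better than the true one, which is why these numbers tell us so little.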
It's possible that full autonomy is just impractical to solve with current technology. It's also possible that it will be solved quickly by applying existing technologies like
imitation learning and reinforcement learning at the scale of hundreds of thousands or millions of vehicles. I don't know which one it is, or if it's neither.
We are in a double bind:
1) Companies are highly secretive about what they're actually doing, and what the current capabilities of their prototype systems are. Anything released to the public is essentially just an ad.
2) Even engineers and executives at these companies can't predict the future. Experts make wrong predictions all the time. Even with inside information and subject matter expertise, they may be unable to assess how far along progress is relative to the end goal of full autonomy.
That doesn't mean autonomy is overrated or all hype. It just means it's highly uncertain. It could be underhyped for all we know.