I don't expect any automated driving system to actually reach full level 5.

It is not dumb at all. Brad is using the analogy of birds to make the point that technology does not always mimic nature. Airplanes don't fly the same way birds do. Likewise, autonomous driving does not have to be vision-only just because that is how humans drive.
Your statement, that a technology solution does not always follow the natural solution, is perfectly reasonable, though I would argue rather needlessly obvious to anyone who doesn't live in a nest, use fur for warmth or clean their junk using their face.

There's a silly canard about how because humans can drive with just their eyes, surely a computer can as well, but that computer doesn't remotely have the power of a human brain. And just because birds fly with flapping wings doesn't mean that planes are designed that way.
In context, though, the OP is saying that driving based on vision is a 'silly' idea because planes don't fly like birds and therefore cars won't be able to use vision? Or something?

I think there's a pretty strong case to argue the other way. Cars can *almost certainly* drive on vision alone, because humans have proven that the information available via that medium is sufficient (obviously that's not by accident; the activity of driving is built around the capabilities that humans have). What is not proven is that cars can drive without vision; there's plenty of information in the physical world that cannot be gleaned purely by lidar/radar mapping of physical shapes. The additional sensors may be useful, and may even prove essential for full autonomy, due to the redundancy and the capability to cope with emergency situations. But in an emergency your priorities dwindle to 'get to safety', and detection of objects and their motion is really all you need for that. I don't think you're ever* going to see an autonomous vehicle continuing 'normal' operation in (e.g.) dense fog, purely because visual information is essential.
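To put the 'get to safety' point in concrete terms, here's a toy sketch of what a degraded-mode decision might look like. Every name and threshold here is made up for illustration; no real stack is this simple:

```python
# Purely illustrative sketch of the 'priorities dwindle to get to safety'
# idea: a hypothetical planner runs normal driving only while the vision
# stack is healthy, and falls back to a minimum-risk manoeuvre using
# whatever object/motion tracks remain (radar/lidar) otherwise.
# All names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class SensorHealth:
    vision_confidence: float   # 0.0 (blind, e.g. dense fog) .. 1.0 (clear)
    tracks_available: bool     # do we still have object/motion tracks?

def choose_mode(health: SensorHealth) -> str:
    if health.vision_confidence >= 0.8:
        return "NORMAL_DRIVING"          # full route-following behaviour
    if health.tracks_available:
        return "MINIMUM_RISK_MANEUVER"   # pull over using tracks alone
    return "IMMEDIATE_STOP"              # nothing left to perceive with

print(choose_mode(SensorHealth(0.2, True)))   # MINIMUM_RISK_MANEUVER
```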

Whether HW3/4/n have the right amount of the right kind of processing on board to be able to drive on cameras alone is a secondary question that I wouldn't claim to be able to answer. They're clearly getting pretty close at the pure perception layer; it just depends whether the 'last 10% takes 90% of the effort' rule proves to be true.

* = unless the world/society fundamentally changes to accommodate AVs to the extent that visual information is no longer necessary.
 
In context, though, the OP is saying that driving based on vision is a 'silly' idea because planes don't fly like birds and therefore cars won't be able to use vision? Or something?
There are a lot of people here who say that you need to use a camera-only system because that's the way humans do it. That's the silly idea he was referencing. Tesla has been working on camera-based computer vision for eight years now and FSD still hits curbs. Why not add sensors to solve the problem today? Who knows when camera-based computer vision will have acceptable performance.
 
I think there's a pretty strong case to argue the other way. Cars can *almost certainly* drive on vision alone, because humans have proven that the information available via that medium is sufficient (obviously that's not by accident; the activity of driving is built around the capabilities that humans have).
Yes, we know that given the power of the human brain, it is possible to drive at human level with just vision (and sound and acceleration). "Human level" isn't that great, but it's acceptable at present.

Given the power of the human brain. Not simply its perceptive power but its reasoning power.

This doesn't tell us much about what is possible, or more to the point practical, using vastly less powerful compute. Maybe it can be done, maybe it can't. Most people think that's a foolish bet, particularly when superhuman sensors are available, along with types of compute that are superhuman in some ways while still very subhuman in others, and particularly when electronic technologies tend to plummet in price once made at scale.

As I said, planes don't flap their wings. Attempting to mimic biological approaches has rarely been the best way to replicate things living systems do. Oh, it is a useful tool to try, but on its own? Fortunately there is no need to do it with just vision. There is also no need to do it without vision. On the quest to make the first self-driving cars, most researchers have decided to use the full toolset at their disposal, and there's not a strong argument why not, unless you've already sold a few million cars with a more minimal sensor suite and want to try to make those work.

But the fact that human brains can drive with mostly vision tells us close to nothing about how to get machines to do it. It might work, and some day it probably even will work. But why cripple yourself early on hoping for that?
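Just to illustrate what the 'full toolset' buys you: here's a toy late-fusion check where a detection is only trusted when independent sensors agree. The function, numbers and tolerance are all invented for the example; real fusion is vastly more sophisticated:

```python
# Toy late-fusion sketch: a range estimate is only trusted if at least
# two independent sensors agree within a tolerance. Names, numbers and
# the fusion rule are invented purely for illustration.

def fuse_range_estimates(camera_m, lidar_m, radar_m, tol_m=2.0):
    """Return a fused range if at least two sensors agree, else None."""
    estimates = [e for e in (camera_m, lidar_m, radar_m) if e is not None]
    agreeing = [e for e in estimates
                if sum(abs(e - other) <= tol_m for other in estimates) >= 2]
    if len(agreeing) >= 2:
        return sum(agreeing) / len(agreeing)  # simple average of agreers
    return None  # sensors disagree: treat the detection as unconfirmed

# Camera misjudges range in fog; lidar and radar agree and win out:
print(fuse_range_estimates(camera_m=55.0, lidar_m=40.1, radar_m=39.5))  # 39.8
```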
 
Fortunately there is no need to do it with just vision. There is also no need to do it without vision. On the quest to make the first self-driving cars, most researchers have decided to use the full toolset at their disposal, and there's not a strong argument why not, unless you've already sold a few million cars with a more minimal sensor suite and want to try to make those work.
As is often the case, this argument completely denies history and commercial reality.

As I understand it, Tesla defined the major bones of its current hardware package back with HW2 in late 2016. They now have a massive fleet of vehicles with a vision-only hardware package for which FSD has to eventually be functional. It is all very well to say that the cars would be better equipped with lidar/radar/sonar and deely boppers, but that won't help the existing fleet. It is also true, despite the whining on this forum, that Tesla are still making progress towards their goal with the current hardware, and are, with the possible exception of Waymo (who are playing a different game really), still one of the leaders in their field. They have no compelling reason to take the hit that would be involved in changing tack at this point; it goes well beyond merely 'want'ing to try to make it work.

So why did Tesla not do the seemingly obvious thing in 2016 and stick all the sensors on the cars then? Look at the facts behind the decision: Tesla's net income for 2016 was minus $675M and the most expensive vehicle they were selling was the Model X at about $115,000. Their entire deployed base was 186,000 vehicles, the Model 3 had yet to make it big, and the company was still in an extremely delicate position. EVs were not freakishly rare but still pretty niche.

So, against that background: the hardware on a 2016 Waymo is estimated to have cost $150,000. Where would you expect Tesla to get the funding to have put that on their vehicles? Who would pay for it? They wouldn't have sold many Model 3s if the car had debuted at a price point of $180k, would they? Add to that the fact that Waymos look 'fine' as taxis, but with all those sensor clusters they're butt ugly by the standards of private vehicles, and that matters if you want people to actually buy them.

You can say 'perhaps you shouldn't sell cars that can drive themselves before you can build cars that drive themselves' and that's a very fair point, but that's not how Musk works, and despite the fact that he increasingly appears to be an awful human being, it's hard to imagine who else would have taken Tesla on the growth journey it's been on.

And as a final mic drop, Waymo have been driving around carefully curated areas for the 8 years since then WITH a $150,000 fridge freezer on the roof (yeah, I know it's cheaper and smaller now) and they're still running into other cars and construction zones, so it's not like the extra sensors are a magic bullet.
 
That Tesla feels they must make it work on the 2016 hardware is their curse, not their blessing. All the other teams followed a fairly sane development process: let the team use not just the best sensors of the day, but plan around newer and better ones which are not yet available, or which are very expensive, making the safe bet that any gear becomes cheap if you make it by the millions. First make it work, then make it cheaper.

That strange philosophy is often referred to as the "Tesla Master Plan." Listen to Elon talk about it back when he could still remember it: begin with a high-end, expensive roadster; learn, improve, then make a high-end but cheaper sedan; then a more affordable everyday car; and eventually an entry-level car. Don't tell your engineers it has to cost $30,000 on day one. That was the Tesla Master Plan.

You don't know what hardware and compute will come in the future. You can guess, but the one thing you knew for sure in 2016 is that the future robocars you make won't be using what you could buy cheaply in 2016. And of course, Tesla did know some of that, and made my car with a field-replaceable computer. They can also replace the cameras, though it's harder. Putting a lidar on will be challenging, because they made the stupid mistake of assuming they knew everything in 2016, knew lidar would not be needed, on the strength of Musk's intuition about it. That's not how you do it. You presume you don't know, and you design for a future where some of your assumptions will be wrong.
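To sketch what 'design for a future where your assumptions are wrong' can look like in software, here's a hypothetical example where the perception consumer depends only on an abstract sensor interface, so a lidar can be slotted in later the same way a field-replaceable computer can. None of this is Tesla's actual architecture; every name here is invented:

```python
# Hypothetical illustration of designing for uncertain future hardware:
# downstream code depends on an abstract sensor interface, so a lidar
# (or anything else) can be added later without rewriting the callers.
# Entirely illustrative; not any real vendor's architecture.

from abc import ABC, abstractmethod

class RangingSensor(ABC):
    @abstractmethod
    def ranges_m(self) -> list[float]:
        """Distances to detected objects, in metres."""

class CameraDepthEstimator(RangingSensor):
    def ranges_m(self) -> list[float]:
        return [42.0, 120.0]  # stand-in for learned monocular depth

class Lidar(RangingSensor):          # added years later; no callers change
    def ranges_m(self) -> list[float]:
        return [41.7, 118.9]

def nearest_obstacle_m(sensors: list[RangingSensor]) -> float:
    # Consumers never care which physical sensors are installed.
    return min(r for s in sensors for r in s.ranges_m())

print(nearest_obstacle_m([CameraDepthEstimator(), Lidar()]))  # 41.7
```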