
Tesla.com - "Transitioning to Tesla Vision"

No. Radar is not expensive at all. And the cost of lidar has also come way down. Several automakers are including a front lidar in their luxury brands for L2+ features. Cost is not the issue.

Camera vision provides rich information, including shapes, sizes and colors. In theory, camera vision can give you all the information you need to do full self-driving, assuming you have accurate and reliable camera vision of course. The information you get from lidar and radar can also be obtained from camera vision. There is reason to believe that eventually (we don't know when) camera vision will be good enough to do FSD without lidar or radar. So I think the main reason Elon dislikes lidar/radar so much is simply that he does not believe they add enough benefit in the long term. He feels that you can do FSD without them as soon as camera vision is good enough, which it should be eventually, so why bother with extra sensors that just give you information you already have? On the surface, that makes sense. There is a certain logic to it. But Elon is putting all his FSD eggs in the vision-only basket. If it works, it will be great: Tesla will have FSD that is dirt cheap and super easy to scale. If it does not work, Tesla will be forced to add sensors back to our cars.

The reason AV companies use lidar and radar is that there are conditions where camera vision will fail but lidar and radar will not. For example, lidar works perfectly in total darkness, where camera vision will deteriorate. And radar works great in dense fog, heavy rain and snow, where camera vision will deteriorate. So having lidar and radar provides extra reliability in different conditions. Lidar and radar work by bouncing a laser or radio signal off an object and measuring the time it takes to come back; we know the speed of light, so we can calculate the distance very precisely, with very little computing power. Camera vision can also get distance measurements, but you have to extract the information by analyzing the image, which requires more computing power. Lidar can give you a very high resolution 3D "map", so with lidar you can also classify objects, terrain, etc. Radar can give you very accurate velocity of moving objects even in low-visibility conditions. IMO, there is a benefit in having sensors that can give you valuable information like distance and velocity in more diverse conditions without requiring a lot of computing power.
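The time-of-flight math itself really is cheap, which is part of the appeal. A minimal sketch of the idea (illustrative only; real lidar/radar signal processing is far more involved):

```python
# Illustrative time-of-flight range calculation: distance from round-trip time.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds):
    """Distance to a target given the round-trip time of a laser/radar pulse."""
    return C * t_seconds / 2.0  # halve it: the pulse travels out and back

# A pulse returning after ~1.67 microseconds indicates a target ~250 m away:
print(range_from_round_trip(1.67e-6))  # ~250.3 m
```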

Here is a video of Argo's new lidar. It basically gives the car incredibly accurate perception, even in total darkness.


Anyway, I tried to be fair to both sides. I hope that makes sense.
Very fair. While it's true that there are more capable sensors available today, like the example you provided, we need to think about the capabilities of the current radar sensor, which does not provide that level of detail. In fog/rain/total darkness where the cameras cannot see lane lines and the road, the current radar does not provide enough data to drive on. In that case, having both sensors isn't "better" because the end result is the same. It's actually worse: you are spending development time writing code to resolve cases of sensor disagreement, and spending CPU time processing radar data that is at best redundant and at worst must be ignored.

I wish that the timelines matched up better as far as parts availability and software readiness are concerned. It certainly looks bad to have a period of reduced functionality, which calls their motives into question, but I think they will reach (or have reached) a point of parity between the new vision-only approach and the current multi-sensor solution. Note: I'm saying parity, and the current system is not perfect; I expect issues from the new system as well. Once they reach parity, however, I suspect they will remove radar from the S/X, and there will be an update that disables/ignores radar input on existing cars to simplify the code base, while engineers continue to optimize the vision-only path.

I, personally, would still go ahead with a purchase if I had one planned. The cars are incredibly safe, and it would be very difficult to measure how much the risk temporarily increases. It's one thing to say the car lacks a safety feature so it must be more dangerous, but that is just a generalization, and it is meaningless to actual safety unless you are in an accident whose specific circumstances would have been affected by the feature.
 
Despite people's impression that higher res is always better, that's not really the case for NNs.
I generally agree; however, Tesla claims their main forward camera can do 250 meters.

A 35° camera at 250 meters is 175 meters wide. The 1.2 MP cameras are 1280×960 resolution, so at 250 meters, each pixel is ~0.2 meters wide, and thus a 2 meter wide car is only 10 pixels wide. A human is about 2 pixels wide. That car needs to get about 15 meters closer to be 11 pixels wide. There is very minimal data at this distance for the system to process, particularly on the closure rate of the threat, which can only be interpreted from changes in the image over time. This system also suffers from not having a local IMU for the camera, so vibrations in the car cause motions in the image that are not real (although maybe they can actually take advantage of this, like the human eye does). The narrow field of view also comes into play on highways with any kind of curve: 35° is only ~18° to each side, and an 18° bend over 250 m is very minimal.

This all matters because 250 meters only takes about 7 seconds at 80 MPH, and they are claiming superhuman capabilities. Humans can see other vehicles and threats on the road much farther out than 250 meters. While humans do not see in pixels, some estimates put our effective resolution, given our various processing techniques, at tens to hundreds of megapixels in the center of our field of view.
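The 7 second figure checks out (a quick unit conversion, nothing more):

```python
MPH_TO_MS = 1609.344 / 3600.0   # miles per hour -> meters per second

v_80 = 80 * MPH_TO_MS           # ~35.8 m/s
print(250.0 / v_80)             # ~7.0 s to cover the camera's full 250 m range
```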

There's some simple physics here that makes a 1.2MP, 35° FOV camera seem somewhat insufficient for driving a vehicle at 90 MPH. It will be interesting to see how quickly they raise the speed from 75 MPH and how well it performs on distant objects at high closure rates.
 
This all matters because 250 meters only takes about 7 seconds at 80 MPH, and they are claiming superhuman capabilities.
You're not just reading the scene once, though. What's the frame rate? You're analyzing those objects multiple times during those 7 seconds, and each time the objects are increasing in size as you approach.

edit - thanks for the refresh rate info.
 
What's the frame rate? You're analyzing those objects thousands of times during those 7 seconds and each time, the objects are increasing in size as you approach.
Which is why I pointed out that the car has to go from 250 meters to 235 meters to increase even 1 pixel in width, and said "There is very minimal data at this distance for the system to process, particularly on the closure rate of the threat, which can only be interpreted from changes in the image over time." You aren't getting much size-change information at long distances, which makes velocity estimation very difficult. And this is for a 2 meter wide car; it will be a lot worse for a motorcycle, human, or debris in the road.
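A minimal sketch of that 1/distance scaling, using the corrected ~157 m field width from the math-check further down the thread (treat the exact pixel counts as approximate):

```python
M_PER_PX_AT_250 = 157.0 / 1280.0  # ~0.123 m per pixel at 250 m (corrected figure)

def pixel_width(obj_width_m, dist_m):
    """Apparent width in pixels; falls off as 1/distance."""
    return obj_width_m / (M_PER_PX_AT_250 * dist_m / 250.0)

px_now = pixel_width(2.0, 250.0)              # 2 m car: ~16.3 px at 250 m
d_plus_one = 250.0 * px_now / (px_now + 1.0)  # ~235.6 m: one pixel wider
print(px_now, d_plus_one)
print(pixel_width(0.8, 250.0))                # 0.8 m motorcycle: only ~6.5 px
```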

Karpathy says they run at about 36 Hz, FYI, not kHz. At 80 MPH, it takes about 100 meters to stop in a full ABS panic stop, so you'd better have already made your decision by about the 125 meter mark. At 80 MPH, they get about one frame per meter of distance traveled.
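A back-of-envelope check on those numbers (assuming a ~0.65 g full ABS stop on dry pavement; actual stopping distances vary):

```python
MPH_TO_MS = 1609.344 / 3600.0  # miles per hour -> meters per second
G = 9.81                       # gravitational acceleration, m/s^2

v = 80 * MPH_TO_MS              # ~35.8 m/s
stop_m = v**2 / (2 * 0.65 * G)  # ~100 m to stop from 80 MPH at 0.65 g
frames_per_m = 36.0 / v         # ~1.0 frames per meter traveled at 36 Hz
print(stop_m, frames_per_m)
```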
 
I generally agree; however, Tesla claims their main forward camera can do 250 meters.

A 35° camera at 250 meters is 175 meters wide. The 1.2 MP cameras are 1280×960 resolution, so at 250 meters, each pixel is ~0.2 meters wide, and thus a 2 meter wide car is only 10 pixels wide. A human is about 2 pixels wide. That car needs to get about 15 meters closer to be 11 pixels wide. There is very minimal data at this distance for the system to process, particularly on the closure rate of the threat, which can only be interpreted from changes in the image over time. This system also suffers from not having a local IMU for the camera, so vibrations in the car cause motions in the image that are not real (although maybe they can actually take advantage of this, like the human eye does). The narrow field of view also comes into play on highways with any kind of curve: 35° is only ~18° to each side, and an 18° bend over 250 m is very minimal.

This all matters because 250 meters only takes about 7 seconds at 80 MPH, and they are claiming superhuman capabilities. Humans can see other vehicles and threats on the road much farther out than 250 meters. While humans do not see in pixels, some estimates put our effective resolution, given our various processing techniques, at tens to hundreds of megapixels in the center of our field of view.

There's some simple physics here that makes a 1.2MP, 35° FOV camera seem somewhat insufficient for driving a vehicle at 90 MPH. It will be interesting to see how quickly they raise the speed from 75 MPH and how well it performs on distant objects at high closure rates.
I don't think this is an issue in this case, as the radar had only a 160 m range, so vision at 250 m already boosts things a further ~1.5x (I didn't redo the calculation myself since I couldn't find whether 35 degrees refers to the diagonal FOV or the horizontal, so I didn't want to put up numbers). As you approach the object, this will also improve.
 
You're not just reading the scene once, though. What's the frame rate? You're analyzing those objects thousands of times during those 7 seconds and each time, the objects are increasing in size as you approach.
Exactly, the latest from Karpathy is 1.2 MP @ 36 Hz.



 
A 35° camera at 250 meters is 175 meters wide. The 1.2 MP cameras are 1280×960 resolution, so at 250 meters, each pixel is ~0.2 meters wide, and thus a 2 meter wide car is only 10 pixels wide.

The math is a little off here, by about 40%:

A 35° FOV at 250 meters is 157 meters wide.

157 meters / 1280 pixels = 0.122 meters per pixel.

So a 2 m car will be around 16 pixels wide instead of 10.
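The trig checks out (assuming, as this post does, that 35° is the horizontal FOV rather than the diagonal):

```python
import math

fov_deg, dist_m, h_pixels = 35.0, 250.0, 1280

width_m = 2 * dist_m * math.tan(math.radians(fov_deg / 2))  # ~157.6 m
m_per_px = width_m / h_pixels                               # ~0.123 m/px
print(width_m, m_per_px, 2.0 / m_per_px)                    # 2 m car: ~16 px
```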

 
I don't think this is an issue in this case, as the radar had only a 160 m range, so vision at 250 m already boosts things a further ~1.5x.
I also agree the existing radar probably wasn't enough for FSD either. The question is whether the currently installed cameras are sufficient for FSD at highway speed limits. A 1.2 MP, 35° FOV camera is pretty limited for this.
 
This is why it's such a ridiculous thing for Tesla to do to owners over a <$100 part.

The history here is not good. When I bought my first Tesla, I test drove an AP1 car and ordered one. During the order wait, Tesla switched to AP2. They didn't even indicate that the features would not be working when I got the car; when it arrived, it didn't even have basic cruise control. They said "two weeks for AP to be working." Well, it took a year for them to even give us auto wipers, and 2 years to hit feature parity with AP1.

The auto wipers thing feels very, very similar. They left out a $5 part because vision could theoretically replace it. Well, it could, and Elon was right, but it took about 25 times as long as he estimated.

I don't envy your decision.
I'm not sure of your exact timing, but I got my MS in November 2016 and it didn't even have auto headlights! Not auto high beams, just plain old "turn on the headlights when it's dark." There was a button (!) to turn the lights on.

To me the worst part was that the salespeople told me in no uncertain terms that my car would be AP1-comparable by the end of 2016. Not even close. At first the lane lines danced all over the place and lane keeping was a nightmare, and I think that was spring of 2017. Now for all the "it was an abrupt change" and whatnot, the reality was that they (the salespeople, the literature, the video of a car driving itself, Elon's tweets) made it sound like "we've been working on this and it's almost done." Those were lies. Not misinterpretations or differences in expectations or nuances in definitions, but straight-up lies. The truth was "we were optimistic about having a great Autopilot system, but we had a falling out with Mobileye, so now things are delayed for possibly quite some time while we design our own system. We're confident we'll solve this problem, but the reality is that this is a very complex problem that we need to get right." If they couldn't even automatically turn the headlights on in the dark, there is no way they could have believed they were close to having an AP1-comparable system by the end of 2016.

My car constantly aborts lane changes when a car two lanes over in the blind spot gets mistaken for being in the adjacent lane. This is a problem. It shows they can't estimate distance appropriately. @verygreen has videos of this on Twitter. If the 4D all-vision, multi-whatever super system is so good, why not employ it to determine the distance of cars in adjacent lanes? It certainly can't be any worse than what we have now. But then to go and take away radar? You're going from a system with a proven, industry-accepted technique for identifying upcoming collisions to a vision system designed by a company that can't accurately estimate the distance to a truck 25 feet away. "But this is totally new! It's V9! You don't understand neural networks!" I guess.

I hope The Button doesn't return in the form of one needed to turn on the headlights.
 
Just wait until they start editing the old family pics, removing Yung Radar and photoshopping in someone else!

We joke, but I assume the decision to remove all references to radar probably comes straight from Elon. He was probably like, "I want all references to radar GONE! I want everybody 100% on board with Tesla Vision! I don't want anyone seeing that I used to think radar would help solve FSD. Vision-only will solve FSD! So Say We All!"