Tesla FSD Beta Release 10.5 2021.36.8.8 - 10.5.1?

Yes, but which cameras are they using to build that 360-degree view? Green's implication is that they don't use the narrow view camera. From what I've found so far, the narrow view camera was implemented in HW 2.5.
What the hackers can figure out is just the tip of the iceberg. I wouldn't take what they say as the absolute truth.

We have all been on projects where a lot of people don't get the whole picture even after several explanations, architecture diagrams, design docs, etc. What are the chances a hacker can figure out everything with nothing but compiled code?

PS: BTW, was green looking at FSD code or the public build? I'm not even sure Tesla was doing the 360-degree views by then.
 
That's just speculation on his part. Why would they not use the higher-res camera data?
My theory would be that it's more costly (processing power + time), and not useful for near-range depth mapping. It's probably also harder to create binocular vision using the narrow camera since it looks so far out. As humans, we don't really perceive much depth at a distance.
 
My theory would be that it's more costly (processing power + time), and not useful for near-range depth mapping. It's probably also harder to create binocular vision using the narrow camera since it looks so far out. As humans, we don't really perceive much depth at a distance.
In their description of the autopilot hardware, Tesla specifically says the narrow view camera is good for high speeds.
 
In their description of the autopilot hardware, Tesla specifically says the narrow view camera is good for high speeds.
It can see objects further at high speeds, but it cannot easily determine depth. I'd guess this is where radar really helped... there were 2 sources of truth, and the car used these two sources to decide whether to brake or not. Stuck with long range monovision, you don't have that advantage.
 
It can see objects further at high speeds, but it cannot easily determine depth. I'd guess this is where radar really helped... there were 2 sources of truth, and the car used these two sources to decide whether to brake or not. Stuck with long range monovision, you don't have that advantage.
They are determining speed and distance by the change in the images between subsequent frames. The more detail there is the better, and the more accurate it is at distance. They say that vision is more accurate than the radar. Radar only added noise.

The accuracy of the narrow view camera is certainly not less than the main camera at a given distance.
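
As a rough illustration of that frame-to-frame idea (a toy sketch only, not Tesla's actual pipeline; the focal length, car width, and pixel widths below are all made up):

```python
# Toy monocular range/speed estimate from apparent size change between frames.
# All constants are hypothetical; a real system uses learned depth, not this.

FOCAL_PX = 2000.0    # assumed focal length in pixels
CAR_WIDTH_M = 1.8    # assumed real-world width of the lead car

def distance_m(apparent_width_px: float) -> float:
    """Pinhole model: range = focal_px * real_width / apparent_width_px."""
    return FOCAL_PX * CAR_WIDTH_M / apparent_width_px

# The lead car's image grows from 40 px to 44 px wide over one second:
d_then, d_now = distance_m(40.0), distance_m(44.0)  # ~90.0 m -> ~81.8 m
closing_speed_mps = d_then - d_now                  # ~8.2 m/s over that second

# Why detail matters: at 40 px wide, a 1 px measurement error moves the range
# estimate by ~2.2 m, so a narrow (telephoto) camera that puts more pixels on
# the same distant target gives a proportionally smaller error.
print(f"{d_then:.1f} m -> {d_now:.1f} m, closing at ~{closing_speed_mps:.1f} m/s")
```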
 
They are determining speed and distance by the change in the images between subsequent frames. The more detail there is the better, and the more accurate it is at distance. They say that vision is more accurate than the radar. Radar only added noise.

The accuracy of the narrow view camera is certainly not less than the main camera at a given distance.
Radar adds noise, but is very useful as another data point. It's not an accident that radar + vision has faced less overall phantom braking.

All I'm saying is that at a distance, with Tesla Vision, there is one source of truth: a single eye, which cannot judge depth well far away but can at closer ranges thanks to binocular vision and similar depths of field. You can do depth estimation with one eye, but it's a lot more work and more prone to error. This is a human thing, but it carries over to computing as well.
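
For intuition on why binocular depth falls apart at range, here is the textbook stereo-triangulation relation as a sketch; the baseline, focal length, and disparity error are hypothetical (roughly human-eye numbers), not Tesla's camera geometry:

```python
# Stereo depth and its error growth with distance (textbook relations only).
# Baseline, focal length, and disparity error are hypothetical.

FOCAL_PX = 2000.0    # assumed focal length in pixels
BASELINE_M = 0.065   # ~human interpupillary distance
DISP_ERR_PX = 0.5    # assumed disparity measurement error

def depth_m(disparity_px: float) -> float:
    """Triangulation: Z = f * B / disparity."""
    return FOCAL_PX * BASELINE_M / disparity_px

def depth_err_m(z_m: float) -> float:
    """Error grows quadratically: dZ ~= Z^2 / (f * B) * disparity_error."""
    return (z_m ** 2) / (FOCAL_PX * BASELINE_M) * DISP_ERR_PX

for z in (2.0, 20.0, 100.0):
    disparity = FOCAL_PX * BASELINE_M / z
    print(f"at {z:5.1f} m: disparity {disparity:6.2f} px, error ±{depth_err_m(z):6.2f} m")
# Output: ±0.02 m at 2 m, ±1.54 m at 20 m, ±38.46 m at 100 m --
# centimeters up close, tens of meters far away.
```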
 
Radar adds noise, but is very useful as another data point. It's not an accident that radar + vision has faced less overall phantom braking.

All I'm saying is that at a distance, with Tesla Vision, there is one source of truth: a single eye, which cannot judge depth well far away but can at closer ranges thanks to binocular vision and similar depths of field. You can do depth estimation with one eye, but it's a lot more work and more prone to error. This is a human thing, but it carries over to computing as well.
You are at odds with Tesla on the benefits of radar.

Yes, the accuracy of the distance measurement is better the closer the object is. They said vision was about 4 cm up close where it's needed most. The farther away something is the less critical the accuracy becomes. The relative accuracy remains about the same. Radar is also specified as a percentage of distance. I think the resolution of the radar is 0.1 meter.

Whatever the limitations of vision are, they are less with the narrow view camera than with the main camera.
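
To make the relative-accuracy point concrete, here is a quick comparison of the two error models; the ~4 cm vision figure and 0.1 m radar resolution are from the posts above, while the 1%-of-range radar spec is purely my assumption:

```python
# Fixed close-range error vs. percentage-of-range error.
# The 1%-of-distance radar accuracy is an assumed spec for illustration.

VISION_ERR_NEAR_M = 0.04    # ~4 cm vision accuracy up close, per the claim above
RADAR_PCT_OF_RANGE = 0.01   # hypothetical 1%-of-distance radar accuracy
RADAR_RESOLUTION_M = 0.1    # quoted radar range resolution

for dist_m in (5, 50, 150):
    radar_err_m = max(dist_m * RADAR_PCT_OF_RANGE, RADAR_RESOLUTION_M)
    print(f"{dist_m:4d} m: radar ±{radar_err_m:.2f} m")
# 5 m -> ±0.10 m, 50 m -> ±0.50 m, 150 m -> ±1.50 m: the absolute error grows
# with range while the relative error stays roughly constant.
```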
 
Radar adds noise, but is very useful as another data point. It's not an accident that radar + vision has faced less overall phantom braking.
Wasn’t radar basically just forward-looking? FSD needs to figure out the distance and speed of objects to the side, esp. for unprotected turns...

BTW, I do think radar + vision being better is an accident of history. If the situation were reversed, i.e. they added radar later, vision would have been better than radar + vision at this point.
 
You are at odds with Tesla on the benefits of radar.
Oddly convenient that the removal of radar and the push for pure vision came at a time when there was a radar shortage. Meanwhile at that time S/X cars continued to get radar installed.

Tesla is choosing a novel approach, but the benefits of radar are well known, and in service today in the form of Waymo and Cruise. Yes, geofenced, limited use cases, etc. but they have a real robotaxi - Tesla does not.

I do think there's merit to Tesla's vision-only approach. Others, like Light, have also started looking at this, using multiple cameras to create highly accurate depth maps. Tesla will get much better here very fast, but having had radar taken away from me for FSD, I can definitely say we have regressed.
 
Oddly convenient that the removal of radar and the push for pure vision came at a time when there was a radar shortage. Meanwhile at that time S/X cars continued to get radar installed.

Tesla is choosing a novel approach, but the benefits of radar are well known, and in service today in the form of Waymo and Cruise. Yes, geofenced, limited use cases, etc. but they have a real robotaxi - Tesla does not.

I do think there's merit to Tesla's vision-only approach. Others, like Light, have also started looking at this, using multiple cameras to create highly accurate depth maps. Tesla will get much better here very fast, but having had radar taken away from me for FSD, I can definitely say we have regressed.
I don't know that we can say with certainty that the lack of radar is the cause of phantom braking in VO. FSD with radar has its own issues with false targets causing phantom braking, and there could be other changes between the VO and V+radar builds. I thought 10.4 was better than either 10.3 or 10.5 in this respect, so there is something they can tune to mitigate phantom braking.

Elon was talking about VO long before the chip shortage.
 
Tesla is choosing a novel approach, but the benefits of radar are well known, and in service today in the form of Waymo and Cruise. Yes, geofenced, limited use cases, etc. but they have a real robotaxi - Tesla does not.
But they are extremely geofenced, expensive non-consumer vehicles. Tesla is not. We have been talking about this for years, nothing new.

We just don’t know whether Waymo and rest of the industry approach will turn out to be the correct approach or just dogma and herding.

After all, the auto industry was convinced you can’t make money selling EVs. Turns out the entire auto industry was dogmatic and self serving.
 
Wasn’t radar basically just forward-looking? FSD needs to figure out the distance and speed of objects to the side, esp. for unprotected turns...

BTW, I do think radar + vision being better is an accident of history. If the situation were reversed, i.e. they added radar later, vision would have been better than radar + vision at this point.
The more points of truth that exist for the vehicle to make a decision, the more confident it will be.

Radar is not inherently more useful. However, it provides a completely different perspective on the world from a completely separate data source, which is extremely useful. This is also why there are redundant systems internally in HW3: to seek some level of agreement, or to fail over to whatever the tiebreaker needs to be.

I equate that real-world validation to something like this:

* Vision: Hey I think I see something in the road
* Radar: I don't see anything ahead
* Vision: I dunno man, it looks like a giant black object, I want to brake
* Radar: Trust me, nothing is there, I would have heard something by now
* Vision: How confident are you? I'm at a 51%
* Radar: I'm at 90%
* Vision: Ok, I feel better knowing that. Let's go
* Radar: Ok let's go

If you pull radar out of that conversation, it looks like:

* Vision: Hey I think I see something in the road
* Vision: It looks like a giant black object
* Vision: I'm only at 51% confidence it's safe
* Vision: Let's slow down and be certain

For Vision to succeed, Tesla needs to turn that "Hey I think I see something in the road" into "I see no obstructions, clear to go".
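
That exchange is essentially confidence-weighted sensor fusion. A minimal sketch of the idea, with the weights and brake threshold entirely made up (this is not Tesla's logic):

```python
# Naive two-sensor agreement check mirroring the dialogue above.
# Confidences, weights, and the threshold are all hypothetical.

def should_brake(vision_conf_clear: float,
                 radar_conf_clear: float | None) -> bool:
    """Brake unless the fused 'path is clear' confidence is high enough."""
    if radar_conf_clear is None:       # vision-only: no second opinion
        fused = vision_conf_clear
    else:                              # simple equal-weight average
        fused = 0.5 * vision_conf_clear + 0.5 * radar_conf_clear
    return fused < 0.70                # made-up comfort threshold

# With radar: vision 51% clear + radar 90% clear -> fused 70.5%, keep going.
print(should_brake(0.51, 0.90))  # False
# Vision alone at 51% clear falls below the threshold -> slow down.
print(should_brake(0.51, None))  # True
```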

Radar did phantom brake, but for me it was more prevalent with a highway going under an overpass, which at a distance apparently could also look like a truck crossing the road to both vision and radar. Vision seems to have a very itchy trigger for shadows.
 
But they are extremely geofenced, expensive non-consumer vehicles. Tesla is not. We have been talking about this for years, nothing new.

We just don’t know whether Waymo and rest of the industry approach will turn out to be the correct approach or just dogma and herding.
There is no singular correct approach. But we do know that what they're doing works quite well. I can hail a Waymo today with no driver and enjoy a true driverless ride, albeit with huge geographic caveats.

Tesla is trying to solve a much broader problem, so yes... vision is needed to solve for what lidar and mapped territories cannot solve. But other sensors are very useful. If they weren't, Tesla would ditch ultrasonics too. They obviously still need them for now... maybe they will yank them in the future as well.
 
* Vision: Hey I think I see something in the road
* Radar: I don't see anything ahead
* Vision: I dunno man, it looks like a giant black object, I want to brake
* Radar: Trust me, nothing is there, I would have heard something by now
* Vision: How confident are you? I'm at a 51%
* Radar: I'm at 90%
* Vision: Ok, I feel better knowing that. Let's go
* Radar: Ok let's go
Looks like what happened in those notorious cases of Teslas crashing into objects.

Sensor fusion is not easy.

BTW, I do think Tesla jumped the gun when it removed radar before VO was at parity with radar. But their hands were probably forced by the parts shortage.
 
There is no singular correct approach. But we do know that what they're doing works quite well. I can hail a Waymo today with no driver and enjoy a true driverless ride, albeit with huge geographic caveats.
No, you can’t hail a Waymo in NYC. In fact, you can’t hail a Waymo in 99.99% of the US. But you can drive with FSD beta everywhere.

These arguments are never-ending: we are comparing two separate dimensions, geography and features. The two can’t be compared to figure out who is ahead or better right now.
 
Looks like what happened in those notorious cases of Teslas crashing into objects.

Sensor fusion is not easy.
I never said infallible. We're still beholden to what was coded (or not coded).

As it stands today, Vision is behind the curve. I suspect we're going to get there in another year; the transition away from Mobileye was fairly quick as well. For now, Tesla isn't ready to do a follow distance of 1 or speeds greater than 80 mph with Vision. That tells you everything about where they are right now.
 
No, you can’t hail a Waymo in NYC. In fact, you can’t hail a Waymo in 99.99% of the US. But you can drive with FSD beta everywhere.

These arguments are never-ending: we are comparing two separate dimensions, geography and features. The two can’t be compared to figure out who is ahead or better right now.
I did Waymo in AZ a few months back. It uses radar and lidar. It works. It doesn't solve Tesla's problem, but it does solve a real-world autonomy problem very well, without a driver.

You can drive a Tesla everywhere in the US with FSD beta, but always with a driver.