
How does the new end-to-end FSD work? Need a block diagram of the data flow from the fleet to Dojo to an individual's car.

Lidar is 100% accurate?
It doesn’t have to be 100% accurate/precise. Just accurate enough to distinguish a trailer from an overhead sign. That doesn’t require much accuracy, for a lidar.
And this may be the basis of your problem. Because this is simply not true. Just finished a 10-mile lunch run with no disengagements.
Truly you have a dizzying grasp of statistics. I’ve also occasionally had 10-mile drives with no disengagements. And 1-mile drives with multiple disengagements. In my experience (which may be different from your experience, because we drive different roads), it’s averaged out to a disengagement every 2-5 miles or so, on city streets. (Fewer on highways, but that’s the v11 stack and a different ODD.)
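To put a number on it: if disengagements were independent events arriving at a constant average rate (a Poisson-process assumption, purely for illustration), the odds of a clean run of any length fall out directly. A minimal sketch, using a hypothetical 1-per-3-miles rate from the middle of my range:

Code:
import math

def p_zero_disengagements(miles: float, miles_per_disengagement: float) -> float:
    """P(no disengagements over `miles`), assuming disengagements arrive
    as a Poisson process with the given mean miles between events."""
    return math.exp(-miles / miles_per_disengagement)

# With one disengagement every 3 miles on average (hypothetical figure),
# a clean 10-mile drive is unlikely but far from impossible:
print(f"{p_zero_disengagements(10, 3):.1%}")  # ~3.6%

So even at a poor average rate, the occasional clean lunch run is exactly what you'd expect; it tells you almost nothing by itself.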
 
Lidar doesn't distinguish a trailer from an overhead sign from a bridge.

Odds are that you are disengaging when the car is completely capable of performing the maneuver. It's really easy for FSD to seem untrustworthy when you don't trust it.

Yes, we drive different roads. I've only driven roads east of the Mississippi from the Great Lakes to the Florida Keys.
 
You seem to be confusing lidar with radar. See for example here: You don’t need to go autonomous to make trucking safer
When the car is e.g. attempting to drive straight from a left-turn-only lane, or blowing through a stop sign, that’s a valid disengagement, whether the car would have crashed or not. I only disengage when the car is already doing obviously bad or illegal things. Most would probably not result in actual crashes, but that’s not the right standard to use for disengagement. I do also distinguish this from interventions, such as requesting lane changes or applying acceleration.
 

Not at all confused.

Your example, aside from being a press release with a stupidly big font, does nothing to change my mind.

Lidar, like radar, doesn't detect anything. It only provides a 2-dimensional array of distances.
Radar is actually more of a single dimension, or a single point, but newer mechanisms have added the additional dimensions as well.
Actually, they are both point-based mechanisms. LIDAR implementations just rotate the sensors to get the extra dimensions. And yes, it is a 2-dimensional representation of a 360-degree view. (But not 3-D.)

The basic reality: radar uses radio waves, lidar uses light. Both then depend on the echo delay (time of flight) to determine distance.

But neither of these detects anything. The closest they come is "something is closer than 5 ft."
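A minimal sketch of the principle both sensors share; the numbers are illustrative only:

Code:
# Time-of-flight ranging, common to radar and lidar. Both radio waves
# and light travel at c; only the wavelength differs.
C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface from the echo delay."""
    return C * round_trip_seconds / 2.0

# A rotating lidar then builds the "2-dimensional array of distances"
# described above: one range per (elevation, azimuth) beam direction.
# scan[row][col] = range_from_echo(measured_delay)  -- no labels, no
# "trailer" or "bridge"; any classification happens downstream in software.
print(range_from_echo(667e-9))  # an echo from ~100 m returns in ~667 ns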
 
Now ask AI if they've solved many/most of those problems - for example, using multibounce specular LIDAR. :)
  • Increased Complexity: The technology and algorithms required to process multiple reflections are more complex than those for single-bounce LiDAR systems.
  • Data Processing: Handling and analyzing the additional data from multiple bounces require more computational power and sophisticated data processing techniques.
  • Cost and Power Consumption: The increased complexity and data processing needs can lead to higher costs and greater power consumption, which are important considerations in practical applications.
 
Excellent! Nothing we can't do if we put our minds to it.
 
To radar, perhaps. Lidar, no. Lidar has much higher resolution than radar, and could trivially distinguish between these cases.
Lidar doesn't distinguish between those cases; the software does. That's a subtle but critical difference, which explains, for example, why a Waymo crashed into a telephone pole despite having multiple lidar sensors on the vehicle.
 
That’s splitting hairs, but I’ll rephrase. The radar signature of a trailer and overhead sign may be identical. The Lidar signature is not. Thus, Lidar is far more useful than radar for a system that needs to be able to tell the difference, and act accordingly.
 
That's false.

You are making assumptions that are just not right. To both sensors, a trailer and an overhead sign look like a solid rectangle of similar distance points above a bunch of no-returns (infinite distance). You seem to be using resolution as the differentiator, but the two aren't necessarily that different, and that alone doesn't necessarily make one look different from the other.

And as @stopcrazypp mentioned, neither lidar nor radar makes the determination. It's the recognition software outside of the lidar/radar box.
Lidar and radar give pretty much the same kind of data: a two-dimensional array of distances.
It's the software that then has to classify that data.
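To make that concrete, here's a toy sketch of the kind of judgment software layers on top of the raw distances. Nothing here is anyone's production code; the clearance rule and the thresholds are illustrative assumptions:

Code:
import numpy as np

def classify_overhead_return(points_xyz: np.ndarray,
                             ego_height_m: float = 1.8) -> str:
    """Toy classifier over a patch of returns (N x 3: x forward, z up,
    meters). The sensor only delivered these coordinates; everything
    below is software judgment with made-up thresholds."""
    lowest_edge = points_xyz[:, 2].min()   # bottom of the flat surface
    if lowest_edge > ego_height_m + 0.5:   # comfortable clearance margin
        return "overhead structure (sign/bridge) - drive under"
    return "obstacle at vehicle height (e.g. trailer) - do not drive under"

# Same point cloud in, label out: the distinction lives in this code,
# not in the lidar unit.
patch = np.array([[100.0, 0.0, 4.2], [100.0, 0.5, 4.3], [100.0, -0.5, 4.4]])
print(classify_overhead_return(patch))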
 

I took a 70-mile drive last night, through city and through country.
It had some curvy segments that FSD drove at 20 mph, and 65 mph segments where it went 77.
It had some straight rural roads that were marked at 45 mph, which FSD drove at 60+ (just like on all the other drives).
I went through a couple of roundabouts.
I had two-lane left turns and two-lane right turns.

It basically drove pretty much like I would have driven.

I did have two interventions.
The first, which the car probably would have handled, was a crew setting up for night paving. There were a lot of workers, officers, and confused drivers around, so I just took control for about 5 seconds.
The second was as it pulled into Culver's and said route complete, allowing me to get the Blackberry Cheesecake mixer. Afterwards, while at the drive-through window, I reengaged.

70 miles of city, rural, and construction traffic. 1 safety disengagement.

It was a quite enjoyable, relaxing drive on a beautiful evening.
 
Resolution is a huge differentiator; the resolution of Lidar is about 100x more precise than radar. At 100m distance, Lidar’s accuracy is a few centimeters; radar’s is several meters. That means Lidar (+ software) could trivially distinguish a bridge (4m clearance) from a trailer (1m clearance) at that distance, while radar (+ software) could not.

Full Gemini knowledge summary, FWIW:

“Light Detection and Ranging (LiDAR) and radar have different resolutions due to their different wavelengths and operating frequencies:
  • LiDAR
    Uses shorter wavelengths in the near-infrared, visible, or ultraviolet regions of the electromagnetic spectrum, which allows it to detect and map smaller features with higher spatial resolution. For example, Yellowscan LiDAR systems can have a resolution of a few centimeters at a distance of 100 meters. LiDAR's laser-based approach to capturing data also allows it to measure distance with greater precision than radar. LiDAR is often used for mapping and navigation applications that require high accuracy, such as laser altimetry, contour mapping, infrastructure analysis, and overhead wire detection.
  • Radar
    Uses longer wavelengths in the form of microwave frequencies, which results in lower spatial resolution. For example, standard radar can have a resolution of several meters at a distance of 100 meters. However, radar's longer wavelength allows it to detect objects at long distances and through fog or clouds. Radar is often used for aircraft anti-collision systems, air traffic control, and radar astronomy.”
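As a rough sanity check of the resolution claim above, a back-of-the-envelope sketch; the angular resolutions are illustrative ballpark figures, not any particular sensor's spec:

Code:
import math

def spacing_at_range(angular_res_deg: float, range_m: float) -> float:
    """Spacing between adjacent measurement points at a given range."""
    return range_m * math.tan(math.radians(angular_res_deg))

# Illustrative figures (actual sensors vary widely):
print(spacing_at_range(0.1, 100))  # lidar ~0.1 deg -> ~0.17 m at 100 m
print(spacing_at_range(4.0, 100))  # radar ~4 deg   -> ~7 m at 100 m
# Sampling every ~0.17 m vertically can separate a 1 m trailer clearance
# from a 4 m bridge clearance; a beam ~7 m across cannot.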
 
A good reminder of why Tesla bets on vision only... skip to 80 seconds


 

And what's the resolution of the cameras?
 
In adverse conditions (rain, sun glare, very low light, object color matching the sky), much worse than Lidar. In ideal conditions, similar to Lidar (several centimeters per pixel at 100m range), though the camera of course gives only color, not depth. Lidar’s usefulness is mostly for the adverse conditions, but also adds a reality check (literally) for the occupancy network in good conditions, which can reduce error.
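For a ballpark answer, a camera's per-pixel footprint at range follows from its field of view and pixel count. A quick sketch with hypothetical numbers (not Tesla's published camera spec):

Code:
import math

def cm_per_pixel(fov_deg: float, width_px: int, range_m: float) -> float:
    """Approximate size of one pixel's footprint at the given range."""
    deg_per_px = fov_deg / width_px
    return range_m * math.tan(math.radians(deg_per_px)) * 100.0

# Hypothetical numbers in the ballpark of an automotive main camera
# (50 deg horizontal FOV, 1280 px wide):
print(f"{cm_per_pixel(50.0, 1280, 100.0):.1f} cm/px at 100 m")  # ~6.8 cm

That is where the "several centimeters per pixel at 100m" figure above comes from; it holds only when the pixel actually carries usable contrast.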
 
name me one species of the animal kingdom that has a redundant sensory system to help itself guide when it is walking, swimming, flying etc
I’m not sure if that was a rhetorical question, but… Bats: vision plus ultrasound. Whales and dolphins: vision + sonar. Pigeons use magnetism as effective GPS. Salmon use smell to find their home spawning streams. Bees use polarized light. Sharks use smell. Electric eels… dang, I forget what electric eels do. (All in addition to ordinary sight.)

Humans too, in the sense [ha] that we use our inner ear for balance, and tons of proprioceptive touch feedback to fine-tune locomotion. (Ever try to walk when you’re extremely dizzy?) Also we use plenty of audio cues to direct our attention within the environment to guide our actions; e.g. hearing a rustling snake in the grass. (Or in the context of driving, hearing an ambulance or a collision before we see it, for example.) We have multiple senses for a reason.
 
Thank you, and as rightly pointed out, each of the additional senses is for a very specific purpose, not general-purpose mobility. That is the point I was trying to get across.

It's not that Elon hates LiDAR. SpaceX uses LiDAR, which they built themselves, on the Dragon capsule, because that is what works in that situation.
 
I was talking about comparing to lidar.

But complain as you may; say FSD will never work all you want.

A Tesla uses only cameras, and Tesla has pretty well validated their use. I made a 70-mile drive last night with absolutely NO vision issues.

Lidar companies have come and gone. Literally, most of them have closed down.