How does the new end-to-end FSD work? I need a block diagram of the data flow from the fleet to Dojo to an individual's car.

So a truck jackknifed across the road would not be picked up by radar. It would need LiDAR, which also uses light (like a camera) to identify objects. So in poor weather visibility, having LiDAR and RADAR is an awesome technology to have. Or not?
The truck may be picked up by RADAR, but LIDAR would see it more clearly. RADAR can judge distance, but its primary use is velocity. LIDAR is for creating distance point maps. RADAR can see through fog, as its energy waves easily penetrate suspended water vapor. LIDAR can't see as far, because some of its energy is reflected/refracted by the water.

In poor weather, the combination of all three (RADAR, LIDAR, cameras) would be an awesome technology to have.
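For a rough sense of why the two sensors specialize that way, here's a minimal sketch of the basic physics (the numbers are illustrative, not from any particular automotive sensor):

```python
# Illustrative only: radar is naturally a velocity sensor, lidar a range sensor.
C = 3.0e8  # speed of light, m/s

def radar_doppler_velocity(f_carrier_hz: float, doppler_shift_hz: float) -> float:
    """Radial closing speed from a radar Doppler shift: v = c * df / (2 * f)."""
    return C * doppler_shift_hz / (2.0 * f_carrier_hz)

def lidar_tof_distance(round_trip_s: float) -> float:
    """Range from a lidar time-of-flight pulse: d = c * t / 2."""
    return C * round_trip_s / 2.0

# A 77 GHz automotive radar seeing a ~5.1 kHz Doppler shift -> ~10 m/s closing speed.
print(radar_doppler_velocity(77e9, 5.13e3))  # ~10 m/s
# A lidar return arriving 400 ns after the pulse left -> ~60 m to the target.
print(lidar_tof_distance(400e-9))            # 60 m
```

Radar gets per-return velocity essentially for free from the Doppler shift, while lidar gets dense, precise ranges and has to infer velocity by differencing successive frames.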
 
I really think this is like asking for a flying crab
 
NN/training is the limiting factor as of today, with MTBF in the tens of miles. As the NN/training improves, hardware will become the limiting factor. For Robotaxi to be a success as Tesla envisions it, the ODD will have to be pretty wide. I don't think they can get there with pure vision.
I predict the Robotaxi ODD will be limited to the local bus routes, basically giving you on-demand service.
 
So I assume that you work for Tesla and Tesla is about to announce that they are going to enable the external radar and add lidar.
I don't work for Tesla, obviously. And I don't think they are about to announce this, though I think they have handicapped themselves tremendously by locking themselves into vision-only.
But since I doubt that is true, all I can assume is that you are armchair quarterbacking and have very little knowledge of what is actually occurring.
I work in machine learning, so I have some pretty in-depth knowledge of all this. I can infer a lot from what I'm seeing from outside the company.
Tesla didn't remove the radar by accident. They didn't remove the USS by accident. And they have basically shown that vision without lidar works.
They removed radar due to (1) a short-term parts shortage, and (2) the fact that their FSD approach at the time (manual C++ coding) wasn't able to properly do the sensor fusion between vision and radar to make full use of radar's capabilities. Neither of these limitations still applies.

They've shown that vision without radar/lidar is sufficient for an L2 system with limited ODD and tens of miles between failures. They have emphatically not yet shown that it will work for an L4 or even L3 system, even with a limited ODD.
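For what "sensor fusion" means in the simplest case, here's a minimal sketch that fuses a vision depth estimate with a radar range by inverse-variance weighting (the noise figures are made up for illustration; a real stack would track this over time with a Kalman filter or learn the fusion inside the network):

```python
# Minimal illustration: fuse two noisy range estimates of the same object.
def fuse(est_a: float, var_a: float, est_b: float, var_b: float) -> tuple[float, float]:
    """Inverse-variance weighted average of two independent estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Hypothetical numbers: vision says 52 m with sigma 5 m, radar says 48.5 m with sigma 0.5 m.
fused_range, fused_var = fuse(52.0, 5.0**2, 48.5, 0.5**2)
print(fused_range, fused_var**0.5)  # ~48.5 m, sigma ~0.5 m: radar dominates the range estimate
```

The weighted average is the easy part; the hard part is association (deciding that the two measurements refer to the same object) and rejecting radar ghost returns, which is where hand-written fusion code tends to struggle.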
Visualization really isn't a problem at this point. (unless you have evidence otherwise)
Pure vision is not the primary cause of disengagements or failures at this point, correct. In my experience it fails due to software every 2-5 miles (on city streets), and due to sensor limitations perhaps every 100 miles. But both of these need to be a million miles before it's ready for L4. Software is a lot easier to improve or replace than hardware, and I expect that in the next year or two they will reach the point where the majority of disengagements are due to hardware / sensor limitations, still well short of the needed million-mile reliability.
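To put those two failure rates together: if software issues and sensor-limitation issues are roughly independent, the rates add, so the combined miles-between-failures is dominated by the more frequent source. A quick sketch with the numbers above:

```python
# Independent failure sources: rates add, so 1/MTBF_total = sum(1/MTBF_i).
def combined_mtbf(*mtbf_miles: float) -> float:
    return 1.0 / sum(1.0 / m for m in mtbf_miles)

print(combined_mtbf(3.5, 100.0))        # ~3.4 miles: today, software dominates
print(combined_mtbf(1_000_000, 100.0))  # ~100 miles: perfect software still hits the sensor ceiling
```

Which is the argument above: fixing software moves you from a few miles toward the ~100-mile hardware ceiling, still several orders of magnitude short of a million.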
 
They removed radar due to (1) short-term parts shortage, and (2) the fact that their FSD approach at the time (manual C++ coding) wasn't able to properly do the sensor fusion between vision and radar to make full use of radar's capabilities. And (3) short-term parts cost. None of these limitations still apply.
This is one of those scenarios where the champion's lack of something turns out to be beneficial to the underdog, in this case the vision-based NN.
 
I don't think anyone doubted (in the past 3-4 years anyway) that pure vision could achieve good-weather L2 with ~10 mile MTBF, which is where we are now. The skepticism was that it could scale to L4 with million-mile MTBF, which was Elon's promise from the beginning; so far Tesla has not achieved or proven that, and there's no guarantee that they will, or that they couldn't achieve it much sooner and more easily by adding back radar+lidar. It will be very interesting to see the progress they make with 12.4/12.5/12.6, and I'm open to changing my mind if they somehow prove me wrong.
 
I, for one, don't want to drive around in a car that looks like this:

[attached image]


But if that's what it takes for everyone to chill the F out about L4+ in a personally owned vehicle, then so be it...
 
Of course not. But realize that THIS car has Lidar and multiple radars too (the Lidar is in the little rectangle just below the logo):

[attached image]

Obviously Lucid's software to drive this sensor suite is not ready yet, but the point is that the hardware can be concealed; it doesn't have to be ugly.
 
OMG - that's amazing. I had no idea it could look this good. What the heck is Waymo thinking with their cars?!? They should be using Lucids. /s
 
Sigh. The point is not that Waymo should change what they're doing, or that Lucids look better than Teslas. It's that Tesla could incorporate these sensors without compromising their aesthetics, which is what you seemed to be worried about. (Unless you were being sarcastic about that as well, which is possible.)
 
Lidar has every way to resolve this. It generates a 3D point cloud, which would look very different in the shadow case than in the obstacle case, whereas to the camera (or even to the human eye) the difference is far more subtle.

The radar “saw” the side of the trailer, but had no way to distinguish it from an overhead sign, and the (human-written) software decided that it was an overhead sign. Lidar would have been able to disambiguate.
Lidar is 100% accurate?
RADAR is best for showing moving objects, and relative speeds between objects. LIDAR is what is needed here - LIDAR would have shown the pole.
When your car is moving, everything else is moving.
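As an illustration of the point-cloud argument quoted above (shadow vs. obstacle), here's a minimal sketch under a flat-road assumption with made-up points: a dark patch painted on the road produces no elevated lidar returns, while a physical object does.

```python
import numpy as np

def looks_like_obstacle(points_xyz: np.ndarray, road_z: float = 0.0,
                        height_thresh_m: float = 0.3, min_points: int = 5) -> bool:
    """Crude check: enough returns well above the road plane => physical object.
    A shadow or dark paint stays on the road surface and produces no elevated returns."""
    heights = points_xyz[:, 2] - road_z
    return int(np.sum(heights > height_thresh_m)) >= min_points

# Hypothetical returns from a shadow: everything sits on the road surface (z ~ 0).
shadow = np.array([[10.0, 0.5, 0.02], [10.2, 0.1, 0.01], [10.4, -0.3, 0.03],
                   [10.6, 0.2, 0.02], [10.8, -0.1, 0.01], [11.0, 0.4, 0.02]])
# Hypothetical returns from a trailer side: a wall of points 0.8-2.8 m above the road.
trailer = np.array([[30.0, 0.5, 0.9], [30.0, 0.0, 1.5], [30.0, -0.5, 2.2],
                    [30.1, 0.3, 2.8], [30.1, -0.2, 1.1], [30.1, 0.1, 0.8]])

print(looks_like_obstacle(shadow))   # False
print(looks_like_obstacle(trailer))  # True
```

A camera sees both as dark regions and has to reason its way to the difference; to lidar the difference is purely geometric, which is the point being made above.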
 
So a truck jackknifed across the road would not be picked up by radar. It would need LiDAR, which also uses light (like a camera) to identify objects. So in poor weather visibility, having LiDAR and RADAR is an awesome technology to have. Or not?

You want to know what a valid interpretation of a jack-knifed truck also looks like to lidar?

A bridge.
 
  • High Reflectivity: Highly reflective surfaces, such as shiny metals or glass, can reflect the LiDAR beams in unexpected ways, causing false readings or multiple returns.
  • Specular Reflection: Smooth surfaces can cause specular reflections, where the LiDAR beam is reflected away from the sensor, leading to missing or inaccurate data points.
  • Rain, Fog, and Snow: Adverse weather conditions can scatter or absorb the LiDAR beams, resulting in noisy or inaccurate data. Heavy rain, fog, or snow can significantly degrade the performance of LiDAR sensors.
  • Dust and Smoke: Particles in the air, such as dust or smoke, can scatter the LiDAR beams and create noise in the readings, leading to ambiguity in the detected environment.
  • Dirty or Obstructed Lenses: Dirt, dust, or debris on the LiDAR sensor’s lenses can obstruct the beams and result in inaccurate or ambiguous readings.

Some issues identified by AI regarding LiDAR....
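Most of those failure modes show up as sparse, isolated returns, and a standard first line of defense is statistical outlier removal plus intensity/multi-echo filtering. A minimal sketch of the outlier-removal idea (brute force, made-up data; real pipelines use a KD-tree implementation such as Open3D's remove_statistical_outlier):

```python
import numpy as np

def remove_statistical_outliers(points: np.ndarray, k: int = 8, std_ratio: float = 1.0) -> np.ndarray:
    """Drop points whose mean distance to their k nearest neighbours is unusually large."""
    # Brute-force pairwise distances: fine for a toy cloud, not for a full scan.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

# Hypothetical scan: a dense cluster (a real surface) plus scattered rain/dust returns.
rng = np.random.default_rng(0)
surface = rng.normal(loc=[20.0, 0.0, 1.0], scale=0.05, size=(200, 3))
noise = rng.uniform(low=[5.0, -10.0, 0.0], high=[40.0, 10.0, 5.0], size=(20, 3))
cloud = np.vstack([surface, noise])
print(len(cloud), "->", len(remove_statistical_outliers(cloud)))  # most scattered returns are dropped
```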
 
Now ask AI if they've solved many/most of those problems - for example, using multibounce specular LIDAR. :)
 
Pure vision is not the primary cause of disengagements or failures at this point, correct. In my experience it fails due to software every 2-5 miles (on city streets), and due to sensor limitations perhaps every 100 miles.

And this may be the basis of your problem, because this is simply not true. I just finished a 10-mile lunch run with no disengagements.
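For what it's worth, a single clean drive doesn't discriminate much between those two claims. A quick sketch of the chance of a disengagement-free 10-mile run under different assumed MTBFs (treating disengagements as a Poisson process, which is a big simplification):

```python
import math

def p_clean_run(miles: float, mtbf_miles: float) -> float:
    """Probability of zero disengagements over `miles` if they arrive as a Poisson process."""
    return math.exp(-miles / mtbf_miles)

for mtbf in (2, 5, 25, 100):
    print(f"MTBF {mtbf:>3} mi -> P(clean 10-mile drive) = {p_clean_run(10, mtbf):.0%}")
# MTBF   2 mi -> 1%
# MTBF   5 mi -> 14%
# MTBF  25 mi -> 67%
# MTBF 100 mi -> 90%
```

One clean run is weak evidence either way; it takes a lot of tracked miles (or many drivers) to pin the actual rate down.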