Interesting. Can you prove that had a driver not disengaged, there would have been an accident? No, you cannot. The act of disengaging altered the scenario. The opposite is also true: you cannot prove that the system prevented an accident that a human driver would not have been able to avoid, as the human was not in command at the time.

Can the same logic be applied to AVs? If an AV causes an accident, can we prove that a human in command would not have caused the same accident?
If you’re interested in road safety and statistics, perhaps check out the work Swiss Re did with Waymo?

It’s presented here: (jump to 06:00 if impatient)
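To give a feel for the kind of per-million-mile rate comparison that study makes, here's a toy Python sketch. Every number below is a made-up placeholder, not a figure from Swiss Re or Waymo:

```python
# Toy comparison of event rates per million miles, with rough Poisson CIs.
# All counts and mileages below are hypothetical placeholders.
from math import sqrt

def rate_per_million(events: int, miles: float) -> float:
    """Events per million miles driven."""
    return events / (miles / 1_000_000)

def poisson_rate_ci(events: int, miles: float, z: float = 1.96) -> tuple[float, float]:
    """Rough 95% normal-approximation confidence interval for a Poisson rate."""
    exposure = miles / 1_000_000
    rate = events / exposure
    se = sqrt(events) / exposure if events else 0.0
    return rate - z * se, rate + z * se

# Hypothetical exposure data (NOT from the study):
human_events, human_miles = 110, 100_000_000  # human-driven baseline
av_events, av_miles = 12, 25_000_000          # AV fleet

print("human:", rate_per_million(human_events, human_miles),
      poisson_rate_ci(human_events, human_miles))
print("AV:   ", rate_per_million(av_events, av_miles),
      poisson_rate_ci(av_events, av_miles))
```

If the AV interval sits entirely below the human one, you can argue a real rate difference; overlapping intervals mean more miles are needed before concluding anything.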
 
You missed my point: being blindfolded is not the same as the activities allowed in L3 cars (reading a book or watching a video). It's more like sleeping, which is NOT allowed. I remember reading studies showing it takes around 7 seconds to reorient from those allowed activities (which don't involve readjusting to the brightness; you still have peripheral vision, and you don't have to take off a blindfold). Adding the time to do that in a blindfold situation can easily add 3 seconds (likely way more).
If you feel that watching a movie is completely different from wearing a blindfold in terms of performing the OEDR, you can replace the blindfold with a book or movie. To me it’s about the same.
 
If you feel that watching a movie is completely different from wearing a blindfold in terms of performing the OEDR, you can replace the blindfold with a book or movie. To me it’s about the same.
If you replace it with a book or movie, I'm pretty sure some people have already done the equivalent of that on AP many times (heck, I saw in another thread recently that someone apparently frequently texts while driving and is now mad that the new update nags him for doing it). I have not seen anyone claim to have tried a blindfold, however, which is completely different (you have to take off the blindfold, which adds time to respond, and you have zero peripheral vision).
 
The smart way to do it (and the way I suspect they’re actually doing it) is to send the recording through voice recognition, then look for keywords. For example, “speed bump”.

Then, suddenly, you have 50,000 immediate examples of the car failing to slow down for a speed bump. Then run those clips through the auto-labeler (or, early in the process, label manually as needed). Have a human review the auto-labeled results and adjust as necessary. It would be a great way to get a lot of pertinent clips quickly.
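To make that concrete, a keyword pass over the voice-note transcripts might look roughly like this toy sketch (Tesla's actual pipeline isn't public; every name and tag here is invented):

```python
# Toy sketch: bucket voice-report transcripts by keyword so matching clips
# can be pulled for the auto-labeler. All names and tags are hypothetical.
KEYWORDS = {
    "speed bump": "failed_to_slow_for_speed_bump",
    "stop sign": "stop_sign_handling",
    "roundabout": "roundabout_handling",
}

def tag_report(transcript: str) -> list[str]:
    """Return the issue tags whose keyword appears in a transcript."""
    text = transcript.lower()
    return [tag for phrase, tag in KEYWORDS.items() if phrase in text]

# Each report carries a clip ID so the matching video can be fetched later.
reports = [
    ("clip_001", "It didn't slow down for the speed bump again"),
    ("clip_002", "Ran the stop sign on Main Street"),
]
buckets: dict[str, list[str]] = {}
for clip_id, transcript in reports:
    for tag in tag_report(transcript):
        buckets.setdefault(tag, []).append(clip_id)

print(buckets)  # {'failed_to_slow_for_speed_bump': ['clip_001'], ...}
```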
When you report the same set of 3 speed bumps in the center of my town a dozen times, with specific Google Maps locations, and nothing happens, it's difficult to keep reporting the same item over and over. Same for running a stop sign repeatedly. It just seems so haphazard. I sure wish Tesla would provide some guidance on how they want customers to report certain types of problems so we could help them more.
 
If you replace it with a book or movie, I'm pretty sure some people have already done the equivalent of that on AP many times (heck, I saw in another thread recently that someone apparently frequently texts while driving and is now mad that the new update nags him for doing it). I have not seen anyone claim to have tried a blindfold, however, which is completely different (you have to take off the blindfold, which adds time to respond, and you have zero peripheral vision).

The safe way to do this would be:

1. Define an ODD
2. When inside the ODD, let FSD Beta complete the entire driving task without interruption. Count how many times a disengagement is strictly required to avoid damage to life or property.

This would include letting it make incorrect lane changes, hesitate, brake unnecessarily, etc. But nobody currently treats FSD Beta like that, except maybe Omar from Whole Mars, and folks here call him a "shill" for doing it.
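For concreteness, here's a toy sketch of how that protocol could be scored; the record layout and the per-1,000-mile metric are my own invention, not any official methodology:

```python
# Toy scoring for the protocol above: only safety-critical disengagements
# inside the ODD count; cosmetic interventions are recorded but ignored.
from dataclasses import dataclass

@dataclass
class Drive:
    miles: float
    in_odd: bool
    critical_disengagements: int  # strictly required to avoid damage/injury
    cosmetic_interventions: int   # bad lane choice, hesitation, etc.

def critical_rate(drives: list[Drive]) -> float:
    """Critical disengagements per 1,000 miles, counting only ODD miles."""
    miles = sum(d.miles for d in drives if d.in_odd)
    events = sum(d.critical_disengagements for d in drives if d.in_odd)
    return 1000 * events / miles if miles else float("nan")

drives = [
    Drive(miles=42.0, in_odd=True, critical_disengagements=1, cosmetic_interventions=5),
    Drive(miles=10.0, in_odd=False, critical_disengagements=0, cosmetic_interventions=2),
]
print(critical_rate(drives))  # ~23.8 per 1,000 ODD miles in this toy data
```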
 
There’s a ‘minimal lane changes‘ option, but the issue isn’t lane changes, rather how FSD handles them. If I come up behind someone driving 5 MPH slower, then I want to pass them, and afterwards it should be courteous (and legally required in some states) and get back over into the right lane.
I find that with the "average" setting FSD will move over a lane, but not with the "assertive" setting. It works reasonably well, but when a car is fast approaching from the rear, FSD is too slow to react and the approaching car will sometimes pass on the right. That messes with FSD. FSD just needs to decide to move out of the passing lane a little quicker.
 
I find that with the "average" setting FSD will move over a lane, but not with the "assertive" setting. It works reasonably well, but when a car is fast approaching from the rear, FSD is too slow to react and the approaching car will sometimes pass on the right. That messes with FSD. FSD just needs to decide to move out of the passing lane a little quicker.
This matches my experience, except that on secondary roads, even multilane ones, FSDb rarely, if ever, moves over for approaching traffic.
 
When I drive on Autopilot/FSD I’m sitting in the driver’s seat with my hands on my knees, an inch away from the steering wheel, looking out the front or the side and monitoring traffic. It would be nice if the gaze-based attention monitoring system could just see that I’m watching the road and take that as good enough.
My guess is that because we don't have an infrared light and camera, we need the steering wheel nag as well. It would be nice if Tesla made this an available retrofit, even if they charged for it. Another guess: it isn't an available retrofit because FSD will be L3/L5 in two weeks, and making a retrofit available would contradict that narrative.
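Something like this toy logic is what I'd picture for gaze-gated nags; the thresholds and names are pure invention on my part, not Tesla's actual monitoring code:

```python
# Hypothetical: suppress the wheel nag while the cabin camera confidently
# sees eyes on the road; fall back to torque nags when it can't.
def needs_wheel_nag(gaze_on_road_prob: float,
                    seconds_since_eyes_on_road: float,
                    camera_usable: bool) -> bool:
    GAZE_CONFIDENCE_MIN = 0.9  # below this, trust the camera less
    EYES_OFF_GRACE_S = 2.0     # brief mirror/shoulder checks are fine
    if not camera_usable:      # e.g., dark cabin with no IR illumination
        return True
    if gaze_on_road_prob >= GAZE_CONFIDENCE_MIN:
        return False
    return seconds_since_eyes_on_road > EYES_OFF_GRACE_S

print(needs_wheel_nag(0.95, 0.0, camera_usable=True))   # False: gaze suffices
print(needs_wheel_nag(0.30, 5.0, camera_usable=False))  # True: camera unusable
```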
 
That was what the original demo to Elon was
Is this from Walter Isaacson describing the neural network planner or someone from Tesla saying that?

It does seem reasonable to start from existing networks to show whether this can work for control, but presumably, in moving from the prototype, Tesla made architectural changes to many existing parts of the network to better exploit the potential of end-to-end, e.g., adjusting the sizes of various layers as you suggested.

Have you looked at what Tesla has talked about for their foundation world model? Potentially it's an even further evolution of the architecture, with self-supervised pre-training, but it's unclear whether Tesla is talking about this for the current or a future single end-to-end network.
 
My guess is that because we don't have an infrared light and camera, we need the steering wheel nag as well. It would be nice if Tesla made this an available retrofit, even if they charged for it. Another guess: it isn't an available retrofit because FSD will be L3/L5 in two weeks, and making a retrofit available would contradict that narrative.
Newer builds do have a camera and interior illumination.
However, hands on the wheel will always be better, reaction-time-wise, than hands off the wheel.
Model Y: [screenshot attached]
 
This matches my experience, except that on secondary roads, even multilane ones, FSDb rarely, if ever, moves over for approaching traffic.
My fault for not clarifying that I was only referring to multi-lane highways. Sorry about that. What drives me crazy is that when you click the scroll wheel to the right, it will automatically move your setting in the direction you clicked without your knowing it. I have done that, which disables FSD moving back to the slower lane. Really poor UI behavior.
 
The safe way to do this would be:

1. Define an ODD
2. When inside the ODD, let FSD Beta complete the entire driving task without interruption. Count how many times a disengagement is strictly required to avoid damage to life or property.

This would include letting it make incorrect lane changes, hesitate, brake unnecessarily, etc. But nobody currently treats FSD Beta like that, except maybe Omar from Whole Mars, and folks here call him a "shill" for doing it.
The issue is, sometimes we don’t know what FSD might do, and we brake to prevent an accident. It happens to me a lot at roundabouts: FSD will happily try to enter the roundabout even when a car is coming from the left. Will it stop in time, or will it not? And what about freaking out the other driver?
 
My guess is that because we don't have an infrared light and camera, we need the steering wheel nag as well. It would be nice if Tesla made this an available retrofit, even if they charged for it. Another guess: it isn't an available retrofit because FSD will be L3/L5 in two weeks, and making a retrofit available would contradict that narrative.
Perhaps, but that doesn’t explain why they can’t use the regular camera in daylight.
 
Operational Design Domain. It's the term for the environment and conditions an autonomous vehicle is designed to operate within.

So it could be a geofence, a time of day, weather conditions, traffic conditions, etc.
I can add that for L4 it’s typically a geographical area and a speed limit. For L3 it’s typically a lot more limited. An example could be “presence of a lead car, no lane changes, dry roads, daytime, limited-access highway only, and max 60 km/h”.

The ODD for Autopilot is limited-access highways, btw, according to the Owner’s Manual. If I remember correctly there is no defined ODD for FSD on city streets.
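To make that L3 example concrete, an ODD is essentially a checkable config; here's a toy sketch (all field names invented):

```python
# Toy ODD check for the hypothetical L3 example above: engagement is only
# allowed while every condition holds; otherwise hand control back.
from dataclasses import dataclass

@dataclass(frozen=True)
class L3Odd:
    max_speed_kmh: float = 60.0
    requires_lead_car: bool = True
    dry_road_only: bool = True
    daytime_only: bool = True
    highway_only: bool = True  # limited-access highway

def inside_odd(odd: L3Odd, speed_kmh: float, has_lead_car: bool,
               road_dry: bool, is_daytime: bool, on_highway: bool) -> bool:
    """True only if every ODD condition is satisfied."""
    return (speed_kmh <= odd.max_speed_kmh
            and (has_lead_car or not odd.requires_lead_car)
            and (road_dry or not odd.dry_road_only)
            and (is_daytime or not odd.daytime_only)
            and (on_highway or not odd.highway_only))

odd = L3Odd()
print(inside_odd(odd, 55, True, True, True, True))   # True: may engage
print(inside_odd(odd, 55, True, False, True, True))  # False: wet road
```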
 
I can add that for L4 it’s typically a geographical area and a speed limit.

That is what we are seeing with robotaxis. I think for consumer L4, we are likely to see ODDs based on road types rather than geographical areas. Shashua argues that consumer "eyes off" needs to be "everywhere" since consumers likely want to go everywhere in their personal car. So Mobileye has defined some standard ODD for consumer "eyes off" based on road types, rather than geographical locations:

[Image: Mobileye's table of standard consumer eyes-off ODDs by road type]

Obviously, this list of ODDs is not exhaustive but I think it gives us a nice starting point. I do think that weather and speed limits will also be ODD limits. So I think we will see consumer L4 that is limited to certain road types, weather and speed limits.
 