FSD cannot make safe unprotected turns onto high speed roads

@Tesla's #FSD cameras cannot detect a fast-approaching car from the sides at an intersection. A detection range of only 80 m is not enough to make a safe unprotected turn onto a high-speed road.
A car approaching at 60 miles per hour covers 80 m in about 3 seconds; that is not enough time to complete the turn.

This is a fundamental limitation of the current camera range.

They will have to change these cameras.
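For anyone checking the 3-second figure, the arithmetic is simple. A quick sketch (80 m is the camera range quoted above; the speeds are just examples):

```python
# How long a car at the edge of the 80 m camera range takes to reach you.
MPH_TO_MS = 0.44704  # metres per second per mile per hour

def time_to_arrive_s(range_m: float, speed_mph: float) -> float:
    return range_m / (speed_mph * MPH_TO_MS)

print(f"{time_to_arrive_s(80, 60):.1f} s")  # ~3.0 s at 60 mph
print(f"{time_to_arrive_s(80, 45):.1f} s")  # ~4.0 s at 45 mph
```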
 
Teslas accelerate pretty fast though. I'm thinking turning right only, then making a U-turn…

Also, if their website had claimed the cameras get 200 m of range, would your mind change? #marketwank
Yes, my mind would change. If they could see 200 m to the sides, they should be able to make safe turns.
 
I'm eagerly awaiting the time when independent agencies perform rigorous testing of these scenarios. All the YouTube videos showing "yes, it can drive 60 minutes without disengagement" and "no, it can't drive worth a damn" are interesting but ultimately inconclusive.

The IIHS may end up doing these tests (we can hope). The NHTSA has so far not bothered, and Consumer Reports isn't very scientific and has perceptions of bias. The NTSB is excellent, but only after a major crash, and it usually investigates only that specific crash anyway. Maybe another agency or country will step up and conduct tests at real speeds - not the pointless 30-50 mph tests that tell us nothing about the really dangerous situations, the kind that actually matter.
 
How many left turns would a vehicle need to make to prove that it works?
I suppose after you figure out an acceptable failure rate, you'd have to somehow get a random selection of unprotected left turns. Then you'd have to make sure the manufacturers never found out which left-turn scenarios you were testing, so they couldn't design for the test.
Needless to say, I think on-road testing over hundreds of millions of miles is the only way to prove safety.
 
Actually, I'm more expecting they will prove failure at first. But your point stands: how many tests would it take to prove success? They'll have to come up with a number; it just has to be good enough, not perfect.
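For a sense of scale, the number falls out of simple binomial arithmetic. A rough sketch (the 95% confidence level and the 1-in-100,000 target are numbers I picked for illustration, not anyone's official threshold):

```python
import math

def turns_needed(p_max: float, confidence: float = 0.95) -> int:
    # Smallest n of consecutive failure-free turns such that a true
    # per-turn failure rate of p_max would likely have shown up:
    # require (1 - p_max)**n <= 1 - confidence.
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))

# To claim fewer than 1 failure per 100,000 turns at 95% confidence:
print(turns_needed(1e-5))  # ~299,572 clean turns (the "rule of three": ~3/p)
```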
 
At this point I'm more concerned with B-pillar occlusion than distance. A camera will always suffer from its fixed mounting, and sitting behind the driver's position makes it worse. Also, fixed-mount cameras can't duck, bob, lean, pivot, peek around, twist, or rotate the way those weird human heads do automatically at every intersection.🤣:rolleyes:🙃
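To put a number on the "can't lean" point, here's a toy similar-triangles sketch; every distance in it is invented for illustration:

```python
# Sight line past an occluding corner (hedge, parked truck) to the
# cross-traffic lane. All geometry is made up for illustration.

def visible_down_lane(setback_m, corner_ahead_m, corner_side_m, lane_ahead_m):
    # The sight line from the viewpoint through the occluding corner
    # reaches the cross-traffic lane at this distance; anything farther
    # down the lane is hidden.
    # setback_m: how far the viewpoint sits behind a leaning driver's eyes.
    x_corner = corner_ahead_m + setback_m
    x_lane = lane_ahead_m + setback_m
    return corner_side_m * x_lane / x_corner

# Leaning driver vs. a B-pillar camera fixed ~1 m farther back:
print(visible_down_lane(0.0, 2.0, 1.5, 6.0))  # eyes:   4.5 m of lane visible
print(visible_down_lane(1.0, 2.0, 1.5, 6.0))  # camera: 3.5 m of lane visible
```

The fixed camera has to creep the whole car forward to buy back the view a driver gets just by leaning.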
 
And if it's true that B-pillar occlusion means FSD cannot perform certain turns safely, then I'd like to see that officially proven. If it is indeed proven, Tesla will have to admit it and fix the sensor deficiency. In the long run it would be worth it, rather than continuing to hobble along pretending they have 360° coverage.
 
The unprotected left may be the holy grail of surface-road algorithms. Not only do the cameras have to see traffic from the left and right, they also need to interpret traffic on the opposite side of the road that might be yielding left or right onto the same higher-speed road. What's more, there is often a shared center turn lane that both directions can enter and exit at any time.

As for success vs. failure, I think human driving statistics are a good benchmark. If the aggregated numbers across all scenarios get close to how often humans fail, that's acceptable to me.
 
But what is the human failure rate? Also, it would be absolutely essential not to tell the manufacturers which scenarios would be tested; otherwise they could pass by targeting their internal testing at those scenarios, and then, when presented with the infinite variety of real-world scenarios, they would have a horrific failure rate.
I still think brute-force real-world testing is the only way to go.
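The same rule-of-three arithmetic as above shows why "hundreds of millions" is the right order of magnitude. The baseline below, roughly one fatal crash per 100 million US vehicle miles, is my ballpark assumption:

```python
import math

human_fatal_rate = 1.0 / 100e6  # assumed: ~1 fatal crash per 100M miles

# Failure-free miles needed to show, at 95% confidence, a rate at least
# this good (rule of three again: roughly 3 / rate).
miles = math.log(1 - 0.95) / math.log(1 - human_fatal_rate)
print(f"{miles / 1e6:.0f} million miles")  # ~300 million miles
```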
 
We have to question whether releasing FSD Beta to untrained civilians is ethical.

If the sensors and software are incapable of determining safety in occluded situations - and if Tesla knows this - then they should not be letting people "try it and see".

Tesla's FSD guidance to drivers doesn't say what the system can't do, there are no detailed "testers' instructions", and there is every reason to expect Tesla already knows what it can and can't do. Saying "it might do the worst thing" is not a responsible disclaimer.
 
Telling people what the system can and can't do would be even worse. There is literally nothing that Tesla will guarantee the system can do reliably.
I do agree that there is zero point in having customers test unprotected lefts right now. Though I'm not convinced it will get safer once it only fails 1 out of 1,000 times; I think that might just catch people more off guard.
 
The fail rate is low for both humans and FSD, because a fail means getting hit by a car. As humans, we can risk it, knowing we won't get hit. FSD will just wait forever and upset everyone behind you. FSD may go at the wrong time and fail like any human would, but FSD is safer because it will never make that turn until it has a huge gap, and that only comes along once in a while.
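The "wait forever" part matches the textbook gap-acceptance model: with random (Poisson) cross traffic, the expected wait for a big-enough gap blows up exponentially with the gap you demand. A sketch with made-up traffic numbers:

```python
import math

def mean_wait_s(q_veh_per_s: float, critical_gap_s: float) -> float:
    # Expected wait until a headway of at least critical_gap_s appears in
    # Poisson cross traffic (Adams' delay): E[W] = (e^(q*tau) - q*tau - 1) / q.
    qt = q_veh_per_s * critical_gap_s
    return (math.exp(qt) - qt - 1.0) / q_veh_per_s

q = 900 / 3600.0  # hypothetical rush hour: 900 vehicles/hour of cross traffic
print(f"{mean_wait_s(q, 6.0):.0f} s")   # human-ish 6 s gap:       ~8 s wait
print(f"{mean_wait_s(q, 12.0):.0f} s")  # ultra-cautious 12 s gap: ~64 s wait
```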
 
Uhmmm... I suggest you watch some videos from Chuck Cook's YouTube channel if you think that's true:
 
I used to watch them, but I have FSD Beta myself. His car usually makes a right instead, or he has to press the accelerator pedal himself, or he gives up because a car is behind him. It's good that he is doing this, but where he lives there isn't much traffic.

I can tell you my car will not make a "screw you, you have brakes" left turn when cars are coming at you at 45-50 mph during rush hour. Humans will make that turn and make the other car brake, or we'd never make the turn at all, since there are no stop lights nearby to slow oncoming traffic.

I would be interested to see Chuck take left turns in heavy traffic. FSD can't even take an unprotected left when cars are going 0-3 mph, because it doesn't bully its way in like a human would.
 
I still think brute-force real-world testing is the only way to go.
No arguments here; there is no other way. You can only hope that Tesla has an algorithm that can handle the sugarload of data and evolve based on it. I feel like a lot of the FSD improvements across releases are policy-type improvements - the rule framework that evaluates the benefit of a particular action - made by developers, i.e. humans, not machine learning. But of course, that's a hunch with no evidence to support it.
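As a caricature of what I mean by policy-type tuning, something like the sketch below: developers nudge weights and hard-coded rules between releases, with no learning involved. Every name, weight, and threshold here is invented; none of this is from Tesla's actual stack:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    gap_s: float           # time gap to the nearest cross-traffic vehicle
    occluded: bool         # part of the sight line is blocked
    blocking_others: bool  # we are holding up traffic behind us

def score(c: Candidate) -> float:
    s = 2.0 * c.gap_s      # prefer bigger gaps
    if c.occluded:
        s -= 50.0          # hand-tuned penalty a developer might adjust
    if c.blocking_others:
        s += 1.5           # nudge toward eventually committing
    return s

def should_go(c: Candidate, threshold: float = 12.0) -> bool:
    return score(c) >= threshold

print(should_go(Candidate(gap_s=7.0, occluded=False, blocking_others=True)))
```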
 
Telling people what the system can and can't do would be even worse. There is literally nothing that Tesla will guarantee the system can do reliably.
I do agree that there is zero point in having customers test unprotected lefts right now. Though I'm not convinced it will get safer once it only fails 1 out of 1,000 times; I think that might just catch people more off guard.
No, I don't agree. They should explicitly say what the system can't do. People are guessing, and they are making wrong guesses. Recently drivers have asked "Does FSD handle snow?"; some say it's great, and now they're risking themselves and the others who listen to them. The answer is that FSD does not have a snow-aware system.

There was a post recently where someone said "FSD responded to hand signals", whereas that does not seem likely to be true - it was probably a fluke, and an assumption by the driver. Still, now some people believe FSD responds to hand signals.

Turning across occluded or fast-moving traffic does not seem to be safely possible at this time. Still, some people continue to attempt it, and when they see a success they say "see, it does it just fine".

By saying nothing, Tesla is making the implicit statement that FSD "CAN" do everything, just that it "MAY" sometimes do the wrong thing. What they should be saying is: FSD "CAN" do this (list), "CANNOT" do this (list), and then give their disclaimer that it "MAY" do the wrong thing. Otherwise people are left guessing about the capabilities, and guessing wrongly.

If Tesla says FSD "CAN" do a certain thing, that does not mean the driver can be complacent, because it still "MAY" do the wrong thing. But absolutely, Tesla should be saying what FSD "CANNOT" do.