FSD cannot make safe unprotected turns onto high speed roads

Ok, but you realize the only thing that has changed in my two scenarios is their marketing material?

My point is you are drawing conclusions from assumptions that might be inaccurate.
Actually, I have investigated this because of my experience with the 10.8 beta.
It turns in front of fast-approaching cars; several YouTube videos show the same behavior. It does not detect fast-approaching cars. That is why I investigated: their material lists the camera range as 80 m, and I assumed that must be the cause.
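For a sense of scale, here is a rough back-of-the-envelope check. The 80 m figure is the camera range quoted above from Tesla's material; the approach speed and the time needed to clear the turn are my own guesses, not measured values:

```python
# Time-to-arrival check for an unprotected turn across fast traffic.
# Assumptions: 80 m detection range (the figure quoted from Tesla's
# material above); approach speed and turn-clearance time are guesses.

MPH_TO_MS = 0.44704  # miles per hour -> meters per second

detection_range_m = 80.0    # claimed camera detection range
approach_speed_mph = 55.0   # assumed speed of cross traffic
turn_clearance_s = 5.0      # assumed time to complete the turn

approach_speed_ms = approach_speed_mph * MPH_TO_MS
time_to_arrival_s = detection_range_m / approach_speed_ms

print(f"Car first seen at 80 m arrives in {time_to_arrival_s:.1f} s")
print(f"Time needed to clear the turn:    {turn_clearance_s:.1f} s")
print(f"Margin: {time_to_arrival_s - turn_clearance_s:+.1f} s")
# ~3.3 s to arrival vs ~5 s to clear: negative margin, so the car
# can reach the intersection before the turn is finished.
```

If those guesses are even roughly right, a car that first becomes visible at 80 m can arrive before the turn is complete, which matches the behavior I saw.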
 
And if it's true that B-pillar occlusion means that FSD cannot perform certain turns safely, then I'd like to see that officially proven. If it is proven, Tesla will have to admit it and fix the sensor deficiency. In the long run it would be worth it, rather than continuing to hobble along pretending they have 360° coverage.
It is proven that it turns unsafely in front of fast-approaching cars. It did it for me, and it did it in many YouTube videos.
They may improve the detection range and fix it.
 
No, I don't agree. They should explicitly say what the system can't do. People are guessing, and they are guessing wrong. Recently drivers have asked "Does FSD handle snow?" Some say it's great, and so they're now risking themselves and others who listen to them. The answer is that FSD does not have a snow-aware system.

There was a post recently where someone said "FSD responded to hand signals," but that does not seem likely to be true - it was probably a fluke, and an assumption by the driver. Still, now some people believe FSD responds to hand signals.

Turning across occluded or fast-moving traffic does not seem to be safely possible at this time. Still, some people continue to attempt it, and when they see a success they say "see, it does it just fine."

By saying nothing, Tesla is making an implicit statement that FSD "CAN" do everything, just that it "MAY" sometimes do the wrong thing. What they should be saying is: FSD "CAN" do this (list), FSD "CANNOT" do this (list), followed by the disclaimer that it "MAY" still do the wrong thing. Otherwise people are guessing about its capabilities, and guessing wrongly.

If Tesla says FSD "CAN" do a certain thing that does not mean the driver can be complacent because it still "MAY" do the wrong thing. But absolutely, Tesla should be saying what FSD "CANNOT" do.
Yeah. I'm just not convinced it would improve the safety of testing FSD Beta. They do already include a small list of things to watch out for. "Use Full Self-Driving in limited Beta only if you will pay constant attention to the road, and be prepared to act immediately, especially around blind corners, crossing intersections, and in narrow driving situations."

I'm in agreement that assuming FSD can do things is a huge mistake. I just think having a list of what it cannot do would only increase people's trust in the system when doing things not on that list.
 
Not sure I follow your logic there. Humans over-estimate their driving ability (as with many other things), so the car needs to be... what, exactly?
Say FSD has a severe-collision rate of about 1 per 5 million miles (about twice the human Tesla-driver average); people aren't going to want it, because they think they're at least twice as good as the average driver.
Note that I don't think people actually care that much about safety and would be happy to risk it.
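Spelled out with rough numbers (the 1-per-5-million figure is the one above; the implied human average and the "twice as good" self-assessment are assumptions for illustration):

```python
# Perceived-risk gap. Assumptions: FSD severe-collision rate of
# 1 per 5 million miles, stated above to be about twice the human
# Tesla-driver average; and a driver who believes they are twice
# as safe as that average.

fsd_rate = 1 / 5e6                         # collisions per mile
human_avg_rate = fsd_rate / 2              # implied: 1 per 10M miles
perceived_self_rate = human_avg_rate / 2   # "I'm twice as good"

print(f"FSD:            1 per {1 / fsd_rate:,.0f} miles")
print(f"Human average:  1 per {1 / human_avg_rate:,.0f} miles")
print(f"Perceived self: 1 per {1 / perceived_self_rate:,.0f} miles")
print(f"FSD looks {fsd_rate / perceived_self_rate:.0f}x riskier than 'me'")
# To a driver who thinks they're twice as good as average, an FSD
# that crashes at twice the average rate looks 4x worse than
# driving themselves.
```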
 
Yeah. I'm just not convinced it would improve the safety of testing FSD Beta. They do already include a small list of things to watch out for. "Use Full Self-Driving in limited Beta only if you will pay constant attention to the road, and be prepared to act immediately, especially around blind corners, crossing intersections, and in narrow driving situations."

I'm in agreement that assuming FSD can do things is a huge mistake. I just think having a list of what it cannot do would only increase people's trust in the system when doing things not on that list.
If nothing else, having a list of what FSD cannot do would stop people from trying those things, would show where Tesla stands in the AV game, and could be updated as capabilities improve.
 
They probably can’t do anything 100 percent of the time, and can do everything at least some of the time. Having a list of what it can do seems risky.
"According to Elon, with FSD version 9 Teslas will be able to understand turn signals, hazards, ambulance/police lights & even hand gestures." Can it?

Can FSD detect concrete pillars? It doesn't show them in the visualization.
Can FSD read signs? Which ones? How do we know which ones?
Can FSD detect speed bumps? Sometimes, always, rarely?
Can FSD detect and avoid deer? It did once; is it reliable?

Yes, I agree that a list of what it CAN do is risky. How about they start with a list of what it CANNOT do?

What they've done is sell and release (to many untrained people) one of the most dangerous devices possible: a two-tonne car that can go 80 mph, completely hands-free (but don't do that, wink), apparently anywhere, anytime, in any weather condition. They have basically said: "Have at it. This is autonomous, but it's not autonomous. Try not to use it when it would be dangerous to use it. We're not saying exactly what it does, we won't give clear directions on its capabilities, and we won't give clear indications of its progress. Things change from version to version to some degree. It's all very technical; you figure it out."
 
How does it handle School zones? Does it only slow down when the School zone lights are flashing?
[Image: school zone sign - "SCHOOL SPEED LIMIT 25 WHEN FLASHING" with a "YOUR SPEED" digital display]
 
Seems like the main discussion here is whether the car would or would not initiate the unprotected turns. My question is, if FSD Beta does accidentally turn into oncoming traffic, would it recognize its error and perform corrective maneuvers to avoid the impact?
 
How does it handle School zones? Does it only slow down when the School zone lights are flashing?
It doesn't "handle" or recognize School zones/or school buses as of now. It only reads standard speed limit signs. It may read that sign as a standard 25MPH speed limit all the time. If flashing it may see it as a traffic signal and could think it is a flashing yellow or about to turn red. The words School, When Flashing, Your Speed and the digital display will all be ignored.
 
I don’t know if comparing FSD to average accident stats will lead down a good path when considering all the nuance involved here.

For example, most accidents are likely caused by a minority of people who get into accidents often, while others rarely or never get into accidents.

Accidents involving human drivers often stem from bad or illegal behavior: intoxication, fatigue, speeding, inclement weather, distraction, or age and lack of driving experience.

Unless the system is designed specifically to work only in those scenarios, I don't think regulators will compare AVs to accident stats in this way. AVs will need to drive far better than a group of 100 sober, attentive, experienced, good drivers, not better than the average of a group of 90 sober drivers who never get into accidents and 10 drunk drivers who can't make a left turn without plowing into a static object. Otherwise you may be subjecting the sober, accident-free group to accident rates driven by the minority who cause most accidents.

Now if the car can detect a drunk driver and employ an AV system that is shown to produce far fewer accidents than the pool of drunk drivers, that would make a lot of sense.

If the car can detect distraction and do the same, that would make a lot of sense. But other methods of curbing distracted driving are likely coming.


But if you have one sober driver who drives 1,000 miles without an accident and one drunk driver who gets into one accident in 1,000 miles, you have an average of one accident per 2,000 miles, right? Now say you have a system that gets into accidents a third as often as that average, so it has one accident in 6,000 miles; but a sober driver might do 20,000 miles before getting into a minor fender-bender. A sober driver might go their entire driving life without ever getting into a bad accident.

Long story short, I'm not sure this angle will make sense to regulators, and it definitely doesn't make a whole lot of sense to me. It makes sense if we imagine that all drivers get into accidents at the same rate, but that's not how it works in reality.
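Here is that arithmetic spelled out; every number is a made-up one from the example above, not real accident data:

```python
# Pooled accident rate vs. subgroup rates, with the made-up
# numbers from the example above (not real accident data).

sober_miles, sober_accidents = 1_000, 0
drunk_miles, drunk_accidents = 1_000, 1

pooled = (sober_accidents + drunk_accidents) / (sober_miles + drunk_miles)
print(f"Pooled average: 1 accident per {1 / pooled:,.0f} miles")   # 2,000

av = pooled / 3  # an AV that crashes a third as often as the pool
print(f"AV:             1 accident per {1 / av:,.0f} miles")       # 6,000

sober_benchmark = 1 / 20_000  # one fender-bender per 20,000 miles
print(f"Sober driver:   1 accident per {1 / sober_benchmark:,.0f} miles")
# The AV beats the pooled average 3x over, yet is still more than
# 3x worse than the sober driver it would be replacing.
```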
 
But if you have one sober driver who drives 1,000 miles without an accident and one drunk driver who gets into one accident in 1,000 miles, you have an average of one accident per 2,000 miles, right? Now say you have a system that gets into accidents a third as often as that average, so it has one accident in 6,000 miles; but a sober driver might do 20,000 miles before getting into a minor fender-bender. A sober driver might go their entire driving life without ever getting into a bad accident.
Half the time the drunk driver hits a sober driver though (half of collisions involve multiple parties). That makes the collision rate of even "good" drivers much closer to the average.
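A quick sketch of that effect. The only input taken from this thread is that half of collisions involve a second party; the population mix and at-fault rates are invented for illustration:

```python
# Even a driver who never causes a collision still gets hit.
# Input from the thread: half of collisions involve a second party.
# The 10% / 10x population mix is invented for illustration.

bad_share = 0.10    # fraction of high-risk drivers
good_fault = 0.0    # "perfect" driver: never at fault
bad_fault = 10.0    # at-fault collisions per million miles
two_party = 0.5     # half of collisions involve two cars

avg_fault = (1 - bad_share) * good_fault + bad_share * bad_fault  # 1.0

# Experienced rate = own at-fault collisions, plus being the second
# party in someone else's (spread evenly across miles driven):
good_experienced = good_fault + two_party * avg_fault   # 0.5
avg_experienced = avg_fault + two_party * avg_fault     # 1.5

print(f"Average driver experiences   {avg_experienced:.1f} per million miles")
print(f"'Perfect' driver experiences {good_experienced:.1f} per million miles")
print(f"Ratio: {good_experienced / avg_experienced:.0%} of average, despite")
print("causing zero collisions.")
```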
 
Half the time the drunk driver hits a sober driver though (half of collisions involve multiple parties). That makes the collision rate of even "good" drivers much closer to the average.
That thought came to me after hitting Post haha, but you could build in a system that detects intoxicated drivers and cuts out that contributor to accidents while still letting the sober driver do their thing.

But I don't see them taking stats without any appreciation for the details; there would be so many implications. As in the example above, using a broad accident statistic as the gauge could essentially shift accidents from the minority who cause most accidents through their own actions onto drivers who, left to their own devices, might never get into a bad accident because they don't engage in risky behaviour.

Now in the context of deploying a Level 2 ADAS with a backup driver always ready to take over, using broad accident stats could make sense for wide deployment. But I highly doubt a Robotaxi-type service would be approved on the basis of producing fewer accidents on average than a large sample that combines really good drivers with really bad ones.

Robotaxis will need to be so much better than the average driver that they're also far better than the best drivers.
 
That thought came to me after hitting Post haha, but you could build in a system that detects intoxicated drivers and cuts out that contributor to accidents while still letting the sober driver do their thing.

But I don't see them taking stats without any appreciation for the details; there would be so many implications. As in the example above, using a broad accident statistic as the gauge could essentially shift accidents from the minority who cause most accidents through their own actions onto drivers who, left to their own devices, might never get into a bad accident because they don't engage in risky behaviour.

Now in the context of deploying a Level 2 ADAS with a backup driver always ready to take over, using broad accident stats could make sense for wide deployment. But I highly doubt a Robotaxi-type service would be approved on the basis of producing fewer accidents on average than a large sample that combines really good drivers with really bad ones.

Robotaxis will need to be so much better than the average driver that they're also far better than the best drivers.
When I get in a robotaxi, I just want there to be a lower chance of me getting injured or dying than when I drive myself. I don't care whose fault the collision was. My point is that the collision rate for a good driver is closer to the average than you think, because half of all collisions involve two parties.
 
But if you have one sober driver who drives 1,000 miles without an accident and one drunk driver who gets into one accident in 1,000 miles, you have an average of one accident per 2,000 miles, right? Now say you have a system that gets into accidents a third as often as that average, so it has one accident in 6,000 miles; but a sober driver might do 20,000 miles before getting into a minor fender-bender. A sober driver might go their entire driving life without ever getting into a bad accident.

Long story short, I'm not sure this angle will make sense to regulators, and it definitely doesn't make a whole lot of sense to me. It makes sense if we imagine that all drivers get into accidents at the same rate, but that's not how it works in reality.
But it DOES make sense when you remember that the accidents to worry about are those that involve multiple participants. If some buffoon gets drunk and drives his car into a brick wall, I really could not care less. But if he drives it into a bunch of other cars, then that does make a difference. Your logic only works if the AV is replacing the drunk driver. In fact, a decent AV will do better at avoiding or mitigating accidents involving that drunk driver (faster reactions, better and more logical application of the brakes, etc.). You are right that the average accident rate doesn't take into account the distribution curve, but you also don't allow for the fact that your hypothetical "safe" driver who doesn't cause accidents is also pretty bad at handling an accident situation when one does arise.
 
I have written to Tesla multiple times (to [email protected]) asking them to add a "Prefer Right Turns" option to their navigation software. Please note that I did not say "Avoid Left Turns." I drive mainly in the Los Angeles area, and frequently find myself on a minor street facing a stop sign, with Tesla navigation telling me to either turn left onto, or cross, a major street (two or more lanes in each direction). Both of these maneuvers are usually unsafe because of heavy traffic or excessive speed by drivers on the major street! This is why I specifically say that the new option I am asking for is "Prefer Right Turns."