I think there's a bit of a logical fallacy in the discussions of edge cases like snow and fog conditions. It's a familiar scenario that people take risks driving in such bad and low visibility conditions. You kind of make a bet, knowing you might lose, but most of us have done it before and have stories with usually less than tragic consequences.

It's pretty easy to find YouTube videos of traffic sliding around on icy roads, cars in Seattle spontaneously sliding down the street with a domino effect, and so on. Less common would be collisions due to low visibility in fog (or here in the desert, opaque dust storms or flash-flooded street crossings that can hide a huge washed-out ditch).

Human nature is that we often forge ahead even when we understand that it's risky. Places to go, errands to run, need to get to the school or to the house. If we were to cancel the plan and it turned out that everyone else made it fine, it would be embarrassing to explain. If we slide the car off the road or have a fender bender in the fog, it's mostly just life experience, and we're limited in harsh judgment of ourselves or our friends.

But - if Tesla FSD, or a robotaxi, takes this kind of risk and fails, there'll be very little understanding and mountains of hysteria. Nationwide coverage. This is kind of obvious, yet we have repeated forum challenges of how Tesla FSD can't be good unless it does take those risks and attempts to operate in situations that you and I do (but really shouldn't).

In my view, this is yet another aspect of why true L5 is not only an extremely difficult goal but actually impossible, if we demand the machine conquer all these human driving examples that actually involve high risk. It doesn't matter whether humans usually or nearly always get away with it. That standard won't survive the first post-fatality investigation of the robot driver. Actually, it won't even survive the ire of a disgruntled forum user who busted a rim or smacked into a pole.

I do wish everyone would keep this in mind when they spin up edge cases that FSD "won't ever be able to do". If you think about it sensibly, you don't want it even to try those things with you or your family, or your shiny Tesla.

P.S. this point has nothing to do with excusing Tesla from criticism over design engineering or feature release decisions, e.g. camera cleaning or camera placement. Nor is it an attack on sensible discussion about the car's capability in adverse conditions. I'm pointing out the contradiction of seeking human-like risk-taking (including very questionable circumstances) while also demanding superhuman safety results.
I see what you're saying, but I was asking because I actually want to see him do a drive other than the typical San Fran routes he takes.
 
  • Like
Reactions: DrGriz and JHCCAZ
I see what you're saying, but I was asking because I actually want to see him do a drive other than the typical San Fran routes he takes.
My daily drives have very little of the urban challenges that make his videos interesting to watch. Mostly I see all those pedestrian and bicycle scenarios as things that are unlikely to happen to me, but I want them to work when they do!

My biggest problems are more in the annoyance than the dangerous category, particularly v11's maddening insistence on moving out of the necessary lane about a half mile before a turn, just so that it can try to re-merge back into that lane about 1,000 ft out. I can easily override this, but I can't override the false signaling that goes with it. It's some kind of mapping bug, I think. I have no evidence that v12 will solve it, but I sure hope so.
 
particularly v11's maddening insistence on moving out of the necessary lane about a half mile before a turn, just so that it can try to re-merge back into that lane about 1,000 ft out.
LOL! Mine does this every bloody drive! "Changing lanes to follow route," then it changes to the right lane with a left turn about a half mile ahead. Every. Single. Time.

TBH though, I've given up caring about V11.x. V12 is going to be such a complete change that it seems pointless to care or experiment. FSD does an outstanding job on a two-lane highway that I drive all the time. It's about an hour drive and it handles it perfectly. Easy case, I know, but it really does make life pretty easy on that drive. Well worth my cost for FSD. (Which was a small fraction of today's price.)
 
I think there's a bit of a logical fallacy in the discussions of edge cases like snow and fog conditions. It's a familiar scenario that people take risks driving in such bad and low visibility conditions. You kind of make a bet, knowing you might lose, but most of us have done it before and have stories with usually less than tragic consequences.

It's pretty easy to find YouTube videos of traffic sliding around on icy roads, cars in Seattle spontaneously sliding down the street with a domino effect, and so on. Less common would be collisions due to low visibility in fog (or here in the desert, opaque dust storms or flash-flooded street crossings that can hide a huge washed-out ditch).

Human nature is that we often forge ahead even when we understand that it's risky. Places to go, errands to run, need to get to the school or to the house. If we were to cancel the plan and it turned out that everyone else made it fine, it would be embarrassing to explain. If we slide the car off the road or have a fender bender in the fog, it's mostly just life experience, and we're limited in harsh judgment of ourselves or our friends.

But - if Tesla FSD, or a robotaxi, takes this kind of risk and fails, there'll be very little understanding and mountains of hysteria. Nationwide coverage. This is kind of obvious, yet we have repeated forum challenges of how Tesla FSD can't be good unless it does take those risks and attempts to operate in situations that you and I do (but really shouldn't).

In my view, this is yet another aspect of why true L5 is not only an extremely difficult goal but actually impossible, if we demand the machine conquer all these human driving examples that actually involve high risk. It doesn't matter whether humans usually or nearly always get away with it. That standard won't survive the first post-fatality investigation of the robot driver. Actually, it won't even survive the ire of a disgruntled forum user who busted a rim or smacked into a pole.

I do wish everyone would keep this in mind when they spin up edge cases that FSD "won't ever be able to do". If you think about it sensibly, you don't want it even to try those things with you or your family, or your shiny Tesla.

P.S. this point has nothing to do with excusing Tesla from criticism over design engineering or feature release decisions, e.g. camera cleaning or camera placement. Nor is it an attack on sensible discussion about the car's capability in adverse conditions. I'm pointing out the contradiction of seeking human-like risk-taking (including very questionable circumstances) while also demanding superhuman safety results.
For sure that is a very sensible analysis. Autonomy will never happen if we expect it to be perfect. But I do think the problem is a lack of data about how frequently people have to break out of full self-driving, and how risky the situations were in which they did so. In other words, after my experience with 11.x, which is just plain scary bad, it's hard to believe that 12.x is going to be so much better that all the craziness in the version I have will be removed. As other people have commented, what's quite surprising is the gap between Enhanced Autopilot on the highway and FSD around town. Enhanced AP on the highway is a pretty smooth experience and I have confidence in it, but FSD around town and on non-highway roads is frightening. I even prefer regular Autopilot over FSD on the highway. So there's a whole lot of work still to be done.
 
  • Like
Reactions: JHCCAZ
Given the revelations about how much effort went into the first FSD demo drive that started things off, I highly doubt there was zero internal talk about doing a similar highly restricted coast-to-coast drive just so Elon wouldn't get egg on his face. But he's been wrong so many times already on the timing that people are numb to it, so I guess he doesn't care about it anymore.

In trying to dig up an interview where the interviewer mentioned SAE, I watched some of the other interviews where Elon talks about "Level 5," and he talks about completing "basic functionality for Level 5," which is complete nonsense if he was referring to SAE.

The only reference I can find where an executive actually referred to SAE Level 5 is from GM, when one of their executives called Tesla out:
"To be what an SAE level five full autonomous system is, I don’t think he has the content to do that"
GM expert: Elon Musk is ‘full of crap’ on Tesla’s autonomous driving capability
The date on that piece is 2017? So how can that possibly be relevant to the current environment? I agree that Elon has wildly over-promised, clearly part of his futurism. But the head of GM's autonomous driving division is probably out of a job at this point, so it's unclear how much of a groundbreaking genius he might be.
 
  • Funny
Reactions: FSDtester#1
For sure that is a very sensible analysis. Autonomy will never happen if we expect it to be perfect. But I do think the problem is a lack of data about how frequently people have to break out of full self-driving, and how risky the situations were in which they did so.
The main difference between autonomy and driver assistance is that autonomy can almost never fail. MTBF needs to be at perhaps millions of hours.

The system needs to be able to always stop safely or hand over the driving to the human safely before it fails.

In other words: it needs to be close to perfect if you are to remove the driver from the loop. That doesn't mean it will always drive, though. That's what the "ODD" is for.
 
  • Like
Reactions: diplomat33
The main difference between autonomy and driver assistance is that autonomy can almost never fail. MTBF needs to be at perhaps millions of hours.

The system needs to be able to always stop safely or hand over the driving to the human safely before it fails.

In other words: it needs to be close to perfect if you are to remove the driver from the loop. That doesn't mean it will always drive, though. That's what the "ODD" is for.
Once again, without specifying what degree of failure you're talking about, there's no way to evaluate your statement. If you mean failure that is fatal for the occupants, I suspect most people would not sign up for that millions-of-hours number, even though it's not far off from human failure rates leading to fatality. This is a big part of the problem: this whole discussion is a data-free zone, so to speak, given the complete absence of statistics (which I suspect are closely guarded by Tesla) about what kinds of failures the system undergoes and how serious they might be.
 
Once again, without specifying what degree of failure you're talking about, there's no way to evaluate your statement. If you mean failure that is fatal for the occupants, I suspect most people would not sign up for that millions-of-hours number, even though it's not far off from human failure rates leading to fatality. This is a big part of the problem: this whole discussion is a data-free zone, so to speak, given the complete absence of statistics (which I suspect are closely guarded by Tesla) about what kinds of failures the system undergoes and how serious they might be.
You are right that the MTBF for [mistakes that cause] death needs to be higher than the MTBF for human injury, which in turn needs to be higher than the MTBF for property damage, which in turn needs to be higher than the MTBF for breaking traffic rules or other unsafe manoeuvres.

If we assume that the AV-system maker will be required to take on liability for any costs incurred when it's at fault (property damage, human injury or death), this will be self-regulating. Repair/insurance costs and civil litigation costs will devour the company quickly.

If the deployed AV doesn't work "perfectly enough", the company may go out of business. Look at Cruise. At Tesla's scale, any type of low MTBF will be very costly very quickly.
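
To put rough numbers behind that scale argument, here is a back-of-the-envelope sketch in Python. The fleet size, annual driving hours and MTBF values are entirely hypothetical placeholders, not Tesla figures; the point is only that fleet scale multiplies even a very high per-vehicle MTBF into a steady stream of events.

```python
# Back-of-the-envelope sketch: expected failure events per year for a fleet,
# given a per-vehicle mean time between failures (MTBF). All inputs are
# hypothetical placeholders, not Tesla data.

def expected_failures_per_year(fleet_size: int,
                               hours_driven_per_vehicle: float,
                               mtbf_hours: float) -> float:
    """Expected number of failure events across the whole fleet in one year."""
    total_fleet_hours = fleet_size * hours_driven_per_vehicle
    return total_fleet_hours / mtbf_hours

# Example: a hypothetical 1,000,000-vehicle fleet, each driven 400 hours/year.
# Even at an MTBF of 1,000,000 hours for some failure class, that is still
# roughly 400 such events per year fleet-wide.
for mtbf in (100_000, 1_000_000, 10_000_000):
    events = expected_failures_per_year(1_000_000, 400, mtbf)
    print(f"MTBF {mtbf:>10,} h -> ~{events:,.0f} events/year fleet-wide")
```

Which is also why, as noted above, each step up the severity ladder (rule violations, property damage, injury, death) needs a progressively higher MTBF.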
 
  • Like
Reactions: dfwatt
My daily drives have very little of the urban challenges that make his videos interesting to watch. Mostly I see all those pedestrian and bicycle scenarios as things that are unlikely to happen to me, but I want them to work when they do!

My biggest problems are more in the annoyance than the dangerous category, particularly v11's maddening insistence on moving out of the necessary lane about a half mile before a turn, just so that it can try to re-merge back into that lane about 1,000 ft out. I can easily override this, but I can't override the false signaling that goes with it. It's some kind of mapping bug, I think. I have no evidence that v12 will solve it, but I sure hope so.
Lane selection is by far V11's biggest weakness on my drives. From what I've seen, Omar's videos don't shed any light on whether or not V12 has improved this.

But as soon as I get V12, I know exactly the route I need to take in order to test lane selection. Can't wait to find out.
 
I think there's a bit of a logical fallacy in the discussions of edge cases like snow and fog conditions. It's a familiar scenario that people take risks driving in such bad and low visibility conditions. You kind of make a bet, knowing you might lose, but most of us have done it before and have stories with usually less than tragic consequences.

It's pretty easy to find YouTube videos of traffic sliding around on icy roads, cars in Seattle spontaneously sliding down the street with a domino effect, and so on. Less common would be collisions due to low visibility in fog (or here in the desert, opaque dust storms or flash-flooded street crossings that can hide a huge washed-out ditch).

Human nature is that we often forge ahead even when we understand that it's risky. Places to go, errands to run, need to get to the school or to the house. If we were to cancel the plan and it turned out that everyone else made it fine, it would be embarrassing to explain. If we slide the car off the road or have a fender bender in the fog, it's mostly just life experience, and we're limited in harsh judgment of ourselves or our friends.

But - if Tesla FSD, or a robotaxi, takes this kind of risk and fails, there'll be very little understanding and mountains of hysteria. Nationwide coverage. This is kind of obvious, yet we have repeated forum challenges of how Tesla FSD can't be good unless it does take those risks and attempts to operate in situations that you and I do (but really shouldn't).

In my view, this is yet another aspect of why true L5 is not only an extremely difficult goal but actually impossible, if we demand the machine conquer all these human driving examples that actually involve high risk. It doesn't matter whether humans usually or nearly always get away with it. That standard won't survive the first post-fatality investigation of the robot driver. Actually, it won't even survive the ire of a disgruntled forum user who busted a rim or smacked into a pole.

I do wish everyone would keep this in mind when they spin up edge cases that FSD "won't ever be able to do". If you think about it sensibly, you don't want it even to try those things with you or your family, or your shiny Tesla.

P.S. this point has nothing to do with excusing Tesla from criticism over design engineering or feature release decisions, e.g. camera cleaning or camera placement. Nor is it an attack on sensible discussion about the car's capability in adverse conditions. I'm pointing out the contradiction of seeking human-like risk-taking (including very questionable circumstances) while also demanding superhuman safety results.



FWIW, L5 does not require "can drive anywhere a human might ever foolishly attempt driving" -- instead it requires driving anywhere the vehicle can be "reasonably operated by a typically skilled human driver."

Driving on ice and through blizzards are conditions explicitly called out (among others) as exceeding that reasonability standard, and L5 is not required to drive in them.

It IS required to recognize those conditions and perform the DDT fallback to achieve a minimal risk condition until exterior factors improve to the point that it can reasonably resume the trip, though.
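
As a loose illustration of that distinction: the point is not that the system drives through anything, but that it recognizes out-of-bounds conditions and falls back safely. The sketch below is just my reading of the SAE concept, not anything from Tesla's stack, and the condition names and thresholds are invented.

```python
# Loose sketch of the SAE idea described above: an L5 system is not required to
# keep driving through conditions no reasonable human would drive in, but it is
# required to recognize them and perform the DDT fallback to reach a minimal
# risk condition. Condition names and thresholds are invented for illustration.

from enum import Enum, auto

class Action(Enum):
    CONTINUE_TRIP = auto()
    DDT_FALLBACK = auto()   # stop safely / pull over until conditions improve

def assess_conditions(visibility_m: float, road_friction: float) -> Action:
    """Decide whether conditions still allow reasonable operation."""
    REASONABLE_VISIBILITY_M = 50.0   # hypothetical threshold
    REASONABLE_FRICTION = 0.2        # hypothetical threshold; glare ice is well below this

    if visibility_m < REASONABLE_VISIBILITY_M or road_friction < REASONABLE_FRICTION:
        # Conditions exceed what a typically skilled human could reasonably
        # handle, so fall back to a minimal risk condition instead of pressing on.
        return Action.DDT_FALLBACK
    return Action.CONTINUE_TRIP

# e.g. dense fog: assess_conditions(visibility_m=20, road_friction=0.8)
# -> Action.DDT_FALLBACK (wait it out rather than take the human-style gamble)
```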
 
Everything I've read about v12 from people who have driven it points to it being better than v11 in most ways. Still not perfect, but a good step in the right direction.

Makes me wonder why it hasn't been released more widely yet. Tesla seems to be very cautiously and slowly releasing v12, even though the firsthand reports seem to claim it's more capable and confident than v11.

I think 2024 is the year FSD needs to really show substantial progress. If this whole year goes by without any meaningful improvements, like seriously substantial improvements, it will be sad.

I look forward to Tesla's "Chat GPT" moment, I just hope it happens relatively soon.
 
2023.44.30.14 incoming to my FSD car. Definitely being held back for V12!
TeslaInfo release notes for 2023.44.30.20 (FSD Beta v12.2.1) show that it includes the notes for 2023.44.30.14 (Over-the-Air Recall: Telltale Text Size).


Good news for those already on .14 potentially getting 12.x!
 
Everything I've read about v12 from people who have driven it points to it being better than v11 in most ways. Still not perfect, but a good step in the right direction.

Makes me wonder why it hasn't been released more widely yet. Tesla seems to be very cautiously and slowly releasing v12, even though the firsthand reports seem to claim it's more capable and confident than v11.

I think 2024 is the year FSD needs to really show substantial progress. If this whole year goes by without any meaningful improvements, like seriously substantial improvements, it will be sad.

I look forward to Tesla's "Chat GPT" moment, I just hope it happens relatively soon.
My guess: unlike the 10.x releases, this time they are not rushing the release out to appease people quickly. Instead they want to polish as much as possible to minimize the clickbait "it's not better" crowd. I'm personally fine waiting for a more refined execution vs. a fire-then-aim approach.
 
Do we know why they use V11 on highways? Perhaps so they can focus on secondary roads with V12 training?
Probably similar to the development of FSD Beta before 11.x, where highways were already pretty good with Navigate on Autopilot, so there wasn't as big of a rush to get it working on highways. Then, for the single stack, there was a lot of engineering effort and architectural changes to get 11.x safe enough to use on highways.

End-to-end can already handle highways, albeit briefly, when 12.x happens to have incorrect map data on where the highway starts/ends, and Tesla will probably evaluate the safety characteristics of 12.x in shadow mode before letting it run for long periods in these high-speed situations. There is also the benefit you suggest of focusing training on secondary roads: if a specialized highway training focus is needed based on the shadow metrics, Tesla can do that later when more training compute is available, since highway driving is already better than before.
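
For what it's worth, the "shadow mode" idea mentioned above amounts to running the candidate stack passively alongside the deployed one and logging disagreements for offline review. A very rough sketch of that concept follows; every name, field and threshold is invented, and this is not a description of Tesla's actual pipeline.

```python
# Rough sketch of the shadow-mode concept: a candidate (e.g. end-to-end) stack
# runs passively alongside the deployed stack, never controlling the car, and
# only disagreements are logged for offline safety evaluation. All names here
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class Plan:
    steering_deg: float   # requested steering angle
    accel_mps2: float     # requested acceleration

def disagreement(active: Plan, candidate: Plan) -> float:
    """Crude scalar measure of how far apart the two plans are."""
    return (abs(active.steering_deg - candidate.steering_deg)
            + abs(active.accel_mps2 - candidate.accel_mps2))

def shadow_step(active: Plan, candidate: Plan, log: list, threshold: float = 1.0) -> Plan:
    """Always execute the active plan; record cases where the candidate differs."""
    if disagreement(active, candidate) > threshold:
        log.append((active, candidate))   # reviewed offline before a wider release
    return active                         # the candidate never drives the car

# Usage: feed both stacks the same camera frames each tick and execute
# whatever shadow_step() returns.
```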
 
  • Like
Reactions: JB47394
The network saw a lone pedestrian stopping on the edge of the driveable space, combined with the need to stop for traffic in front of it. In that case, most considerate drivers will slow in case the pedestrian wants to cross while at the same time leaving them a comfortable gap.
Pretty sure there have been examples of 12.x not leaving a gap when there are pedestrians on the other side of the road approaching a stop, so the specific difference in this example seems to be that the pedestrian was looking at the Tesla. If that really is the difference, and if it behaves consistently, it seems like Tesla avoided a bit of work that others have done in specially detecting and interpreting the meaning of eye contact to change behaviors.

Perhaps some ideas of things people could test with 12.x: a pedestrian walking along the road looking straight ahead vs. looking backwards with just head movement vs. turning the whole body, etc.

If end-to-end is picking up on signals from head movement and eye contact, it seems like it should be able to detect other similarly small gestures.
 
Everything I've read about v12 from people who have driven it points to it being better than v11 in most ways. Still not perfect, but a good step in the right direction.

Makes me wonder why it hasn't been released more widely yet. Tesla seems to be very cautiously and slowly releasing v12, even though the firsthand reports seem to claim it's more capable and confident than v11.

I think 2024 is the year FSD needs to really show substantial progress. If this whole year goes by without any meaningful improvements, like seriously substantial improvements, it will be sad.

I look forward to Tesla's "Chat GPT" moment, I just hope it happens relatively soon.
Probably because of Elon's claim that V12 won't be beta. No offense, but most of the public wanting access won't contribute anything significant to the development, so not releasing it to the general public won't necessarily hold them back. I'm sure there are regressions to work through.
 
Lane selection is by far V11's biggest weakness on my drives. From what I've seen, Omar's videos don't shed any light on whether or not V12 has improved this.

But as soon as I get V12, I know exactly the route I need to take in order to test lane selection. Can't wait to find out.
Oh, God, me too!

There's a four-lane street I take every day. At the first intersection I come to, it _used_ to be one left turn lane and three straight-ahead lanes, which fed into a three-lane street.

It was repaved last summer and the markings changed so that there are now two left-turn lanes -- clearly marked with large left-arcing arrows -- and two straight-ahead lanes feeding into three lanes on the other side of the intersection.

(As an aside, I find this lane design bizarre, since it's illegal to change lanes in an intersection here in Nevada, yet you're pretty much forced to with this setup. Anyway...)

The route I take requires a left turn at the next intersection after this one. For a year or so now, FSD has placed the car in what used to be the leftmost straight-ahead lane to cross the intersection. However, that lane is now a left-turn lane.

And the car knows it's a left-turn lane, since the visualization shows the left-turn arrow painted on the pavement. But FSD puts the car in the second left-turn lane anyway and will proceed straight across the intersection (I tested this in a zero-traffic environment).

I know that this is due to a mapping error-- the map hasn't been updated with the new lane configuration. I imagine there are hundreds if not thousands of examples of this across the country.

But I had always hoped that if the mapping data and what the cameras saw disagreed, the car would go with the cameras...
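
What you're hoping for amounts to a simple arbitration rule: when the mapped lane topology and the perceived lane markings disagree with high confidence, prefer perception. Here is a toy sketch of that idea; the names, data shapes and confidence threshold are all invented, and this is not Tesla's actual logic.

```python
# Toy sketch of "cameras should win over stale map data" for lane selection.
# The structures and arbitration rule are invented for illustration only.

def resolve_lane_type(mapped_lane_type: str,
                      perceived_lane_type: str,
                      perception_confidence: float) -> str:
    """Prefer what the cameras see over the map when they disagree confidently."""
    if mapped_lane_type != perceived_lane_type and perception_confidence > 0.9:
        # e.g. map says "straight", but the camera sees a painted left-turn arrow
        return perceived_lane_type
    return mapped_lane_type

# The repaved-intersection case described above:
resolve_lane_type(mapped_lane_type="straight",
                  perceived_lane_type="left_turn_only",
                  perception_confidence=0.97)   # -> "left_turn_only"
```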
 
  • Like
Reactions: JHCCAZ and Usain