Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
The next big milestone for FSD is version 11. It is a significant upgrade, with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NN.

From AI Day and the Lex Fridman interview, we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack
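Of these, the planner change is the most algorithmically concrete. As a rough illustration of how MCTS picks among discrete actions, here is a generic toy sketch over a made-up speed-control problem; this is not Tesla's planner, and names like ACTIONS, the transition model, and the reward function are all hypothetical.

```python
import math
import random

# Toy "planner": choose speed adjustments to reach a target cruising speed.
ACTIONS = ["brake", "hold", "accelerate"]

def step(speed, action):
    # Transition model: action nudges speed, clipped to [0, 3].
    delta = {"brake": -1, "hold": 0, "accelerate": 1}[action]
    return max(0, min(3, speed + delta))

def reward(speed):
    # Prefer a target speed of 2; penalize deviation from it.
    return -abs(speed - 2)

class Node:
    def __init__(self, speed, parent=None):
        self.speed = speed
        self.parent = parent
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0

    def ucb_child(self, c=1.4):
        # UCB1 selection: trade off average value vs. exploration bonus.
        return max(
            self.children.items(),
            key=lambda kv: kv[1].value / kv[1].visits
                           + c * math.sqrt(math.log(self.visits) / kv[1].visits),
        )

def rollout(speed, depth=5):
    # Random playout to estimate the value of a state.
    total = 0.0
    for _ in range(depth):
        speed = step(speed, random.choice(ACTIONS))
        total += reward(speed)
    return total

def mcts(root_speed, iterations=500):
    root = Node(root_speed)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB.
        while node.children and len(node.children) == len(ACTIONS):
            _, node = node.ucb_child()
        # 2. Expansion: try one untried action from this node.
        untried = [a for a in ACTIONS if a not in node.children]
        action = random.choice(untried)
        child = Node(step(node.speed, action), parent=node)
        node.children[action] = child
        node = child
        # 3. Simulation: immediate reward plus a random rollout.
        value = reward(node.speed) + rollout(node.speed)
        # 4. Backpropagation: credit the value up to the root.
        while node is not None:
            node.visits += 1
            node.value += value
            node = node.parent
    # Commit to the most-visited root action.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

random.seed(0)
best = mcts(root_speed=0)
print(best)
```

Starting below the target speed, the search concentrates visits on "accelerate"; the appeal for a planner is that the same select/expand/simulate/backpropagate loop works whatever the transition model and reward happen to be.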

Lex Fridman's interview of Elon, starting with the FSD-related topics.


Here is a detailed explanation of Beta 11 in "layman's language" by James Douma, in an interview recorded after the Lex podcast.


Here is the AI Day explanation, in 4 parts.




Here is a useful blog post asking Tesla a few questions about AI Day. The useful part is the comparison of Tesla's methods with Waymo's and others' (detailed papers linked).

 
He’s the shilliest of shills, it’s hilarious. Tesla could literally have renamed 11.4.3 to 11.4.4 and released it and Mars would say it’s 10x better than 11.4.3, and the greatest thing ever.
That he is. I can’t believe there is a single person that believes his reports are genuine; yet, with every new release, someone proudly posts his BS in all of the FSDj threads - as evidence of FSD’s greatness.
 
First drive with 4.4. Some improvements but will wait for a few more drives before judging.
The one dangerous problem that continues is pulling out in front of traffic when it's not safe. This particular example is not new, but it highlights the behavior where FSD stops at an intersection but then never creeps to make sure it's safe to pull out. FSD just goes. In this case the crossing traffic was an 18-wheeler. My guess is FSD would have eventually stopped without hitting the truck, but the truck driver would have freaked out, and then bad things can happen.
Most of the time at this intersection FSD just goes without ever creeping. In the past I've also seen FSD try to pull out with vehicles coming from both directions at this intersection, which is not obstructed. I had another intersection like this, and after a few emails to Tesla it was fixed. Who knows if my emails had any impact. I've already emailed Tesla about this one, for what it's worth.
It will be helpful when the B-pillar camera view is available which we know is coming.
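The creep-then-commit behavior being asked for here can be thought of as a small state machine: at an obstructed intersection the car should never jump from stopped straight to go. A toy sketch of that invariant (the phase names and the 100 m visibility threshold are hypothetical illustrations, not Tesla's actual logic):

```python
from enum import Enum, auto

class CreepPhase(Enum):
    STOPPED = auto()    # halted at the stop line
    CREEPING = auto()   # inching forward to improve the cameras' view
    GO = auto()         # committed to the maneuver

def next_phase(phase, visibility_m, cross_traffic_clear):
    """One tick of a creep-then-commit policy.

    visibility_m: how far down the cross street the cameras can see.
    The 100 m threshold is an arbitrary illustrative value.
    """
    if phase is CreepPhase.STOPPED:
        # Key property: never jump straight from STOPPED to GO.
        return CreepPhase.CREEPING
    if phase is CreepPhase.CREEPING:
        if visibility_m >= 100 and cross_traffic_clear:
            return CreepPhase.GO
        return CreepPhase.CREEPING
    return CreepPhase.GO

# Even with a clear road, the first transition is always to CREEPING.
print(next_phase(CreepPhase.STOPPED, 150, True).name)
```

The complaint in the post above is precisely a violation of this invariant: the car transitions from stopped directly to go without the intermediate creep that would let the B-pillar cameras confirm the road is clear.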

I think that without panoramic cameras or some kind of radar in the front bumper, no iteration of FSD will be 100% safe in this situation, same as a human driver. It seems that Tesla made a conscious choice to limit the car to the same set of inputs as a human driver, and so the car in the video did what most of us did when we were learning to drive. When the software matures enough it will learn to creep up, but it will never be completely safe, just like a human is not completely safe in this situation.
 
I don't know about you, but I've only got two optical inputs in one location. If you've got 8 eyes you can place around the extremities of a vehicle, I'd like to see a pic!

An aware driver is much safer than FSD in its current state. It doesn't matter if it has 15 cameras; from my experience, either something is wrong in the design of the B-pillar cameras or the software is fatally flawed. The majority of the time, my unprotected left- or right-hand turns onto a 55 mph highway are made dangerously close to oncoming or fast-approaching traffic. There is a much higher risk of me being rear-ended or T-boned using FSD. I always have to either slam the accelerator or hit the brake.

I am extremely skeptical of them being able to fix this. I think the cameras just can't see far enough. UPLs and right-hand turns should be flawless.
 

I don't think it's necessarily a hardware or a software issue. I think you're underestimating the value of human intuition, learned experiences, and creativity in solving a problem like approaching a blind curve. Those are things that will be very difficult to replicate with machine learning.
 

The turns my car struggles with are not blind; you can clearly see traffic on either side. The car still makes unsafe turn decisions. Why does it do this if the car is accurately modeling a 3D vector space around it? It baffles me every time.

The poor lane decisions, bad speed limits, and other various quirks are all understandable disengagements and failures. But safety-related failures such as UPLs and right-hand turns really lower the credibility of FSD ever becoming a reality. The car now does what it should have done 5 years ago, but still has fatal flaws right where everyone initially expected them.

And people pay $15,000 for this. It's one of the biggest scams you could purchase.
 

Again, you're neglecting the human factor. As a driver experienced in your location, you know the typical dispositions of drivers around you. You can intuit whether people going 60 MPH on the road you're trying to cross are likely to accelerate before you can cross; you know if they're likely to yield given the social situation.

Lane lines were designed with humans in mind. They're often nonsensical, but you learn how to use them, and if you don't, another driver honks at you and that mistake is remembered even more.

You can design a self-driving algorithm with a rough approximation of these experiences and intuitions; you can hard-code rules for every location, and hard-code tolerances that assume certain behavior from other drivers in the scene. And it will work well until it doesn't; it's a brittle solution. For example, there's a 4-way stop near me where large vans tend to park on the right, blocking the sight lines. FSD Beta stops at the stop sign, then creeps forward into the center of the intersection, and then proceeds. It's unusual behavior, and any human driving the intersection would just go, because the other 1,000 times they've driven that intersection, oncoming traffic has stopped at their sign. But in the 1,001st case, where the oncoming car doesn't stop, the human driver gets T-boned while FSD Beta avoids it.
 
The human factor is that I see a car approaching fast with no cars behind it, so I wait. Tesla FSD acts like it doesn't see the approaching car and decides to turn in front of it, and I come within a car length of being hit. Or, as it has also done, it starts to pull out and then slams on the brakes, making that car have to go around me in its lane.

It's not as complicated as you're trying to make it seem. The camera should have no problem seeing an approaching car, just like the front-facing cameras slow the car down waaay before reaching a vehicle in front.
 
I think that without panoramic cameras or some kind of radar in the front bumper, no iteration of FSD will be 100% safe in this situation, same as a human driver. It seems that Tesla made a conscious choice to limit the car to the same set of inputs as a human driver, and so the car in the video did what most of us did when we were learning to drive. When the software matures enough it will learn to creep up, but it will never be completely safe, just like a human is not completely safe in this situation.
Nothing is 100% safe. That's an unrealistic standard that can never be met. The minimum standard for AVs is to be safer than humans.

Teslas have more sensor inputs than humans.
 
This has always been said, and while technically true, our eyes move, have much higher resolution and depth perception, and are paired with a brain that varies per driver.

That's always been my qualm with the pillar camera position. It was a poor placement, as my head can move forward and see things that the car obviously cannot without creeping dangerously.
 
An aware driver is much safer than FSD in its current state. It doesn't matter if it has 15 cameras; from my experience, either something is wrong in the design of the B-pillar cameras or the software is fatally flawed. The majority of the time, my unprotected left- or right-hand turns onto a 55 mph highway are made dangerously close to oncoming or fast-approaching traffic. There is a much higher risk of me being rear-ended or T-boned using FSD. I always have to either slam the accelerator or hit the brake.

I am extremely skeptical of them being able to fix this. I think the cameras just can't see far enough. UPLs and right-hand turns should be flawless.
I agree, same experience here. I often have to intervene for one extreme or the other. I wish I could contradict this while remaining honest, but alas…
 
Tested 11.4.4 at my 4-way stop and it was very bad. The car stopped, crept out half a car length for no reason, started the left turn, and then stopped halfway through the turn for unknown reasons. It started the turn again, but I intervened because another car was waiting, and it was clear that 11.4.4 was no different from, or worse than, 11.4.3.

There is also the chronic issue of the car assigning a 55 mph speed limit to sections of arterials that should be no more than 45. This is a newer area of suburbs and mixed-use zoning, so no speed limits are posted yet. Yes, I guess that's not really the fault of FSD, but I see the road conditions and know what is safe, and FSD is clueless about that. I wonder if FSD will ever reach a point where I feel it has that intuitive awareness of road conditions to adjust speed accordingly despite the lack of hard data.

I agree that FSD/Autopilot is in some ways safer due to having more cameras. Most of the time I feel like it is safer than me when using Autopilot on the highway, because it sees cars better when changing lanes and it never stops paying attention to the speeds of cars ahead of me. On the other hand, FSD does not think as far ahead as I do, so I usually feel that it waits too late to get in the correct lane for an exit.

Back to the one-mile drive we do from our kids' house to our house: with no interventions from me to adjust the vehicle speed, would FSD eventually cause us to crash? My feeling is probably not, but I will always intervene; it's too scary, and if another driver did pull out while we were going 55 mph, where we should be going a max of 45, then there certainly could be a crash. At best, with no speed adjustment, FSD would really, really scare us every time, without fail, by going way too fast.

We keep getting these incremental updates. I wonder if all these updates will eventually result in a non-beta reliable FSD package, or if these updates are just small improvements on a fundamentally doomed approach.
 
And people pay $15,000 for this. It's one of the biggest scams you could purchase.
I suppose it’s relative. I think it’s just a fun and cool thing to have, in my opinion, and well worth the money, as I enjoy feeling like I’m in the future even though it’s far from perfect. I spent twice the cost of FSD on a Mac Pro a few years back and ended up not feeling like I got my money’s worth, even though others with a similar Mac configuration were much happier with it than I was.