The next big milestone for FSD is version 11. It is a significant upgrade, with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NN.

From AI Day and the Lex Fridman interview we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS) (see the sketch after this list)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack
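
On the planner item above: MCTS itself is a generic search technique that repeatedly selects promising branches using an exploration bonus (UCB1), expands one new action, runs a cheap simulation to estimate its value, and propagates the result back up the tree. Here is a minimal, self-contained Python sketch on a made-up longitudinal problem (come to a stop at a line). The state, actions, reward numbers and the random rollout policy are all invented for illustration; they are not Tesla's planner, which per the AI Day presentation pairs the search with learned networks rather than purely random rollouts.

```python
import math
import random

# Toy "planner" state: (distance to a stop line in ft, speed in ft/s).
ACTIONS = ("brake", "hold", "accel")
DT = 0.5  # seconds per planning step


def step(state, action):
    """Advance the toy state one step and return (new_state, reward)."""
    dist, speed = state
    accel = {"brake": -3.0, "hold": 0.0, "accel": 2.0}[action]
    speed = max(0.0, speed + accel * DT)
    dist -= speed * DT
    if dist < 0:                 # overshot the stop line
        return (dist, speed), -10.0
    if speed == 0.0:             # stopped; reward stopping close to the line
        return (dist, speed), 5.0 - dist
    return (dist, speed), -0.1 * abs(accel)  # mild penalty for harsh inputs


class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}       # action -> Node
        self.visits = 0
        self.value = 0.0

    def ucb1(self, child, c=1.4):
        if child.visits == 0:
            return float("inf")
        return (child.value / child.visits
                + c * math.sqrt(math.log(self.visits) / child.visits))


def rollout(state, depth):
    """Estimate a node's value with a random playout (a learned network
    could stand in here instead)."""
    total = 0.0
    for _ in range(depth):
        state, r = step(state, random.choice(ACTIONS))
        total += r
    return total


def mcts(root_state, iterations=2000, depth=8):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend fully expanded nodes by UCB1.
        while len(node.children) == len(ACTIONS):
            parent = node
            node = max(parent.children.values(), key=parent.ucb1)
        # 2. Expansion: add one untried action.
        untried = [a for a in ACTIONS if a not in node.children]
        reward = 0.0
        if untried:
            a = random.choice(untried)
            new_state, reward = step(node.state, a)
            child = Node(new_state, parent=node)
            node.children[a] = child
            node = child
        # 3. Simulation: random rollout from the new node.
        reward += rollout(node.state, depth)
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited root action.
    return max(root.children, key=lambda a: root.children[a].visits)


if __name__ == "__main__":
    print(mcts((20.0, 10.0)))  # approaching a stop line 20 ft away at 10 ft/s
```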

Lex Fridman's interview of Elon, starting with the FSD-related topics.


Here is a detailed explanation of Beta 11 in "layman's language" by James Douma, an interview done after the Lex podcast.


Here is the AI Day explanation, in 4 parts.




Here is a useful blog post asking Tesla a few questions about AI Day. The useful part is the comparison of Tesla's methods with Waymo's and others' (detailed papers linked).

 
"It’s possible that the cameras can’t see far enough, but that can usually be fixed by moving faster!"
Tesla needs to show how much FSD can see via a multicolor bar on the screen: green (can see well), yellow (sees with some difficulty), orange (barely sees), red (can't see anything). If I see green on the bar, then I can let FSD go.
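
If the car ever exposed a per-direction visibility confidence, the bar you describe would be trivial to render. Hypothetical sketch only: the confidence score, the camera names, and the thresholds below are all made up, since no such value is surfaced by the current software.

```python
def visibility_color(confidence: float) -> str:
    """Map an assumed 0..1 perception-confidence score to a bar color.

    Hypothetical: no such per-direction score is exposed by the current
    software, and the thresholds are arbitrary illustrations.
    """
    if confidence >= 0.8:
        return "green"   # can see well
    if confidence >= 0.5:
        return "yellow"  # sees with some difficulty
    if confidence >= 0.2:
        return "orange"  # barely sees
    return "red"         # can't see anything


# One bar segment per (made-up) camera direction:
scores = {"front_main": 0.9, "left_pillar": 0.45, "rear": 0.15}
print({cam: visibility_color(c) for cam, c in scores.items()})
```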
That sounds great in theory, but I'm not sure how practical it would be while looking at what's important on the screen.
For example, looking forward the car can easily see objects nearby, but how can it indicate it can't see something if it can't see it? It is just as likely to indicate it can see the road ahead perfectly fine without being able to detect a car in the distance; the net result is a green indication.
The current method only shows things that are detected, so nothing is displayed if it can't see something.
 
"See" means it's clear and safe to turn in this context. FSD should indicate to the user whether it can safely perform a task or not. It's similar to the message "FSD is degraded due to poor weather".
It would give the user a chance to double-check whether a timely disengagement is needed when FSD makes a wrong judgement.
 
You keep asserting this because you blame the Tesla software and not the sensor suite. What is true is that Waymo has shown that its approach works in limited geographic areas. Tesla's approach still works nowhere.
When did I suggest the sensor suite was ok? I definitely did not!

Also when did I compare Tesla’s approach to Waymo’s?

I just want Tesla to get the most out of the hardware they have, rather than giving up on it and moving to the next shiny object. (Selfish motivation on my part: this is what I own.)

I definitely think the existing software and hardware could perform much better. But that is not to say it would be better than L2. (So that objectively would make it inferior to Waymo currently.)
 
Tesla needs to show how much FSD can see via a multicolor bar on the screen: green (can see well), yellow (sees with some difficulty), orange (barely sees), red (can't see anything). If I see green on the bar, then I can let FSD go.
As was pointed out, it is impossible to indicate something you cannot perceive.

Quite different from existing indications, which flag degradation when the vehicle knows a sensor is compromised.

The issue here is that a vehicle may be 5 seconds away while the maneuver to clear its path may exceed 5 seconds, so the vehicle might only be perceived past the point where anything could easily be done. (This is why maneuvering faster is a potential workaround.)
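
To put rough numbers on that (illustrative figures of my own, nothing measured from the cameras): the sight distance needed grows linearly with both cross-traffic speed and maneuver time, which is why shaving a second or two off the maneuver helps so much.

```python
MPH_TO_FTPS = 5280 / 3600  # 1 mph = ~1.47 ft/s


def required_sight_distance(cross_traffic_mph: float,
                            maneuver_s: float,
                            margin_s: float = 1.0) -> float:
    """Distance at which cross traffic must be detected so that it is still
    clear when the maneuver (plus a safety margin) completes. Illustrative."""
    return cross_traffic_mph * MPH_TO_FTPS * (maneuver_s + margin_s)


# A car 5 s away at 45 mph is ~330 ft out; if the turn takes 6 s (plus a 1 s
# margin) it needed to be detected at ~460 ft, i.e. before it was 5 s away.
for speed_mph in (35, 45, 55):
    print(speed_mph, "mph ->", round(required_sight_distance(speed_mph, 6.0)), "ft")
```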
 
Agreed, why doesn't Tesla take advantage of its quick acceleration?
When the car gets itself into a bad situation, juice it baby to make up for a close call.
The car is acting like it only has the pickup of a Subaru...

Yes, the Subaru jab was for you Alan!
 
"See" means it's clear and safe to turn in this context. FSD should indicate to the user whether it can safely perform a task or not. It's similar to the message "FSD is degraded due to poor weather".
It would give the user a chance to double-check whether a timely disengagement is needed when FSD makes a wrong judgement.
To be really clear - the "user" should be making all those choices anyway without taking the time to look at the display. No need to take more time to check the display for anything.
What's shown on the display is just eye candy anyway, because the "user" is the one with responsibility for driving.
 
I did not say the user does not have responsibility. I want to know when FSD decides to go.
 
Yesterday, on my extremely rural lane, with a course programmed in, 11.4.4 stopped at a stop sign and appropriately said, "turning left." It turned right. Afterward, I stopped it, backed up, turned around, and proceeded to follow the route (and turned it back on). But HFJ.
 
When did I suggest the sensor suite was ok? I definitely did not!

Also when did I compare Tesla’s approach to Waymo’s?

I just want Tesla to get the most out of the hardware they have, rather than giving up on it and moving to the next shiny object. (Selfish motivation on my part: this is what I own.)

I definitely think the existing software and hardware could perform much better. But that is not to say it would be better than L2. (So that objectively would make it inferior to Waymo currently.)
But that's where I differ from you and probably Elon. I'd rather improve the sensor array and make it easier for the NN to succeed.
 
I never said I didn’t want the sensor suite improved! We know they are not good enough, since they cannot match human perception.

Not sure where you got the idea that that was my opinion. Just because I want them to do the best they can with existing hardware (this is important for getting the right software approach and architecture as long as it is not unduly influenced by hardware limitations) does not mean I think they should not improve the hardware.

It would definitely be good to get more and better sensors, and much better processing hardware.

Right now it’s just layers and layers of defects. There is an outer layer of sensors with flaws and limitations, and then another layer of hardware with limitations, and another layer of software with limitations. It’s often difficult to tell in a particular case what the issue is.
 
Just like automated elevators work in "limited geographic areas" ;)
The fact that Tesla did not take this approach might lead to their downfall. They should have done the testing and mastered the software in a limited area and climate. If they want to collect data from other areas, OK. I feel they bit off too much: too many countries with untested, Kickstarter vaporware. But economics forced them to sell it to as many people as they thought they could. If they could get it mastered in one area, and approved in one area by the regulators, that, in my mind, is how you would expand to other areas, climates, and countries. To try to solve the problem everywhere is noble, but to sell the product in advance of getting it working anywhere is not.
I suggested they replace all the vehicles in Cuba and master it there. Become a true humanitarian.
 
I can report the following from a road trip:

1) Attempts to veer into dedicated right turn lanes on divided highways. Non-issue except for passengers. Easily overridden. Passenger: “What the h***? What is wrong with the car? You paid $8k ($5k+ $3k) for this?” (E.g. 89A westbound from Sedona to Cottonwood)

2) Plenty of jittery driving on high-speed curving highways. (E.g. 69 between Dewey-Humboldt and Cordes Lakes.) Just slight jitteriness. Disengaged. Passenger had no comments.

3) Unable to drive on twisting low speed highways (e.g. near Page Springs ). Passenger: “What is wrong with your driving? Why are we all over the road?”

4) Plenty of disengagements for unacceptable pace into stopped or slowing traffic. Why can it not slow down 1/4 mile in advance? Easy enough to see the stopped/slowing traffic with my human eyes.

5) Unacceptable stopping behavior in moderate traffic between lights on surface streets. It just stops too hard and too soon, jerks the car, and leaves a lot of extra space to the lead vehicle when braking is complete (space that could have been used to stop smoothly).

Overall, on the freeway it is not too bad unless there is a moderate amount of traffic. Lots of decent features, but you have to be very alert for debris and potholes at all times.
 
New experience for me on 11.4.7.3. On an interstate with a 70 MPH speed limit, started with FSDb set to 75.

Light to moderate rain started to fall, but I could easily identify objects at 1,000 feet. Got a message saying the speed was limited to 65 MPH, then it reduced to a 60 MPH max. All while fighting the windshield wiper fiasco.

Finally I just turned off FSDb and drove manually at 75 MPH again. Interestingly, TACC did not have any speed restrictions.

Definitely getting worse.
 
It would be nice to know what the near-term plan is for FSD (v11 or v12). V12's capabilities are likely overstated; it needs oodles of ideal driver data and rests on the assumption that everything in between can be generalized. V11 has yet to achieve escape velocity. I suspect v11 is a dead end, in part from choking on too much training data.
 
New experience for me on 11.4.7.3. On an interstate with a 70 MPH speed limit, started with FSDb set to 75.

Light to moderate rain started to fall, but I could easily identify objects at 1,000 feet. Got a message saying the speed was limited to 65 MPH, then it reduced to a 60 MPH max. All while fighting the windshield wiper fiasco.

Finally I just turned off FSDb and drove manually at 75 MPH again. Interestingly, TACC did not have any speed restrictions.

Definitely getting worse.
I would not draw that conclusion so fast! The old TACC might not have the capabilities/programming to detect the degraded, unsafe conditions. It is not only large objects that need to be detected. Did you know that a typical freeway lane is 12 ft wide and the lane line markings are 6 inches wide? At a distance, the lane lines shrink to pixel size in the camera view and are quite hard to detect with certainty. Next time you are out in the rain, see if you can see the lane lines 1,000 feet out.
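
A quick back-of-the-envelope check supports that. The camera numbers below are assumptions for illustration, not Tesla specs: with roughly a 50° horizontal field of view spread across ~1,280 horizontal pixels, a 6-inch lane marking at 1,000 ft subtends well under one pixel.

```python
import math


def pixels_subtended(feature_width_ft: float, distance_ft: float,
                     h_fov_deg: float = 50.0, h_pixels: int = 1280) -> float:
    """Approximate horizontal pixels covered by a feature of the given width
    at the given distance. The FOV and resolution are assumed example values,
    not actual camera specs."""
    angle_rad = 2 * math.atan(feature_width_ft / (2 * distance_ft))
    rad_per_pixel = math.radians(h_fov_deg) / h_pixels
    return angle_rad / rad_per_pixel


# A 6-inch (0.5 ft) lane line marking at 1,000 ft:
print(round(pixels_subtended(0.5, 1000.0), 2), "pixels")  # ~0.7 px
```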
 
Sounds like a case against the vision-only approach.