
Profound progress towards FSD

As this type of software gets better, it lulls the driver into complacency, and thus becomes more dangerous.
When will it breeze through traffic lights, and at what rate of correct versus incorrect decisions? What percentage of people will still be paying attention?

I doubt this will be a problem in the near future. Elon's statements are very similar to things he has said in the past that turned out to be duds or worse. Take, for example, what he said about Smart Summon: he said it would blow your mind. He said the same thing last year about FSD, coming for sure by end of year. I'll say for sure it won't arrive by the end of this year.

There's no chance FSD Level 5 will arrive in the next five years, even with hardware upgrades. That said, many of us will be very happy with Level 3 on limited-access freeways.
 
As this type of software gets better, it lulls the driver into complacency, and thus becomes more dangerous.
When will it breeze through traffic lights, and at what rate of correct versus incorrect decisions? What percentage of people will still be paying attention?

Yeah, if anything Tesla will have to enforce more driver supervision with the "4D" rewrite. When our cars can do a full commute from home to work with no interventions, people will get even more complacent and start thinking the car is truly full self-driving. The driver will need to stay vigilant for the cases the car can't handle yet.
 
I have not posted in a while, mainly because I am knee-deep in a Tesla solar order. :)

Having had my Model 3 with FSD for a year now, I will say this.

First, I understand all the complaints from people who paid for FSD and are not pleased, and won't be pleased, until FSD is actually FSD. I get it; it's a logical position.

However, my position was that I knew my car would not be driving itself around any time soon when I got it in June of 2019, but I figured I wanted to support the company and the vision, and I also wanted to have the full suite of features as it rolled out.

The improvements in the last year have been incredible. And by 2019, or certainly by 2020, Tesla's fleet must have been approaching the safest cars you could get, and for sure the safest performance cars you could get. That's because safety warnings are rolled out along with FSD features. My car now must have many, many warnings it did not have when I bought it.

Each feature has to be rolled out, not to professional test drivers, but to actual amateur, real-world drivers like me. That means each feature ships at some fraction of its actual capability, because the stakes are so high that the driver always needs to be able to take over. I mean, of course.

But I would guess that the work involved in figuring out how to roll a feature out to the fleet is at least as much as the work of developing the feature in the first place.

The current state of recognizing and stopping for stop lights and stop signs is really amazing. I cannot believe a car sold to the general public can do it. But it can. Obviously, the slowing to the speed limit, and exactly the speed limit, means that for most people the car is not driving "as well" as most drivers, who consistently go over the speed limit. The same goes for the need for confirmation to go through green lights.

But I have had the car long enough not only to rate the features it has now, but to sort of know where those features will go in the next three to six months, and so on. This damn car is going to drive itself around while I own it. I don't know when, but it's getting pretty damn close.
 
There are videos of dogs "driving" cars with Smart Summon, which is pretty mind-blowing to me. Karpathy is well aware that Smart Summon is only magical when it works; he "picks on" Smart Summon often in his talks. Tesla is well aware of the current state of AP, Smart Summon, etc. It's not like they think their current features will be good enough for FSD.
 
No, the 4 in "4D" is time. "2.5D" probably refers to the fact that the current NNs are processing something close to, but short of, full 3D.

The Alpha that Elon is running would definitely be the new 4D rewrite. What we have in our cars now is the "2.5D".

Nineteen minutes into the call, Elon brings up 2.5D: "correlated in time, but not very well." I think I am right.

He then talks about rolling out 4D. He first says it "would work" and then corrects himself to "does work."

My guess is that the Alpha is using 4D for some functions only. Otherwise, if the Alpha had fewer disengagements than the Beta, why hasn't it been rolled out? As it stands, it is months away from being rolled out.
 
Nineteen minutes into the call, Elon brings up 2.5D... My guess is that the Alpha is using 4D for some functions only.

4D usually refers to the 3D of space plus 1D of time, and "4D" here is the rewrite that uses all four dimensions. Perhaps the 2.5D uses some time elements but probably not the whole thing like the 4D does. That would make sense with Elon's quote.
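
To make that concrete, here is a toy sketch (my own illustration, not Tesla's actual architecture) of the difference at the tensor level: a per-frame network sees one image at a time, while a network that uses all of 4D consumes a whole clip, so motion is part of the input rather than something bolted on afterward.

```python
# Toy illustration only; not Tesla's actual network design.
import torch
import torch.nn as nn

B, T, C, H, W = 1, 12, 3, 240, 320  # batch, frames, channels, height, width

# "2.5D"-style: each frame is processed independently in space; any temporal
# reasoning happens later, outside the convolutional backbone.
per_frame_net = nn.Conv2d(C, 16, kernel_size=3, padding=1)
frame = torch.randn(B, C, H, W)           # one camera frame
spatial_feats = per_frame_net(frame)      # (B, 16, H, W)

# "4D"-style: the network sees space and time jointly (3D conv over a clip),
# so motion cues are available directly in the features.
video_net = nn.Conv3d(C, 16, kernel_size=3, padding=1)
clip = torch.randn(B, C, T, H, W)         # a short clip of frames
spatiotemporal_feats = video_net(clip)    # (B, 16, T, H, W)

print(spatial_feats.shape, spatiotemporal_feats.shape)
```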
 
Anyone willing to bet that HW3 won't be enough to process all the 4D nets at 36 Hz? It doesn't seem like they designed HW3 with this new rewrite in mind, or Elon would have mentioned it a long time ago.
I've thought this too, but hasn't this been the bane of our existence ever since the transistor was invented? Every time great new technology comes out, the boobs in coding bloat up the software so much that we need another hardware revolution to cope with it.
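
For what it's worth, 36 Hz implies a hard per-frame latency budget. A quick back-of-envelope (the temporal window K below is a made-up number, just to frame the question):

```python
# Per-frame compute budget at a 36 Hz camera/network rate.
# All figures are illustrative assumptions, not Tesla specs.
frame_rate_hz = 36
budget_ms = 1000 / frame_rate_hz
print(f"Per-frame budget: {budget_ms:.1f} ms")  # ~27.8 ms

# If a temporal net must also touch K past frames at every step, the work per
# wall-clock frame grows roughly K-fold unless past activations are cached.
K = 12  # hypothetical temporal window
print(f"Naive cost vs. a single-frame net: ~{K}x without caching")
```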
 
The move to 4D from 2.5D is a big positive IMO. I have been harping on how processing multiple seconds of video will be necessary for FSD to work well enough. Static object detection just won't be good enough if the temporal information is too limited in scope.

Tesla will have enough training data, so my issue has always been: will they be able to fit the right model onto current computing hardware, or is current hardware not capable enough? For now it sounds like they are not yet compute-limited, even with this big change.

Promising times.
 
Perhaps the 2.5D uses some time elements but probably not the whole thing like the 4D does. That would make sense with Elon's quote.
I think the .5 in the 2.5D Elon is referring to here is the fact that they already use some time-based analysis in current Autopilot, specifically cut-in detection, but it is very narrow in scope.
Full 4D will give them access to multiple frames in memory of the entire real-time stitched 3D scene.
I am curious how many frames they will have access to (it has to be memory-constrained).
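
For a rough sense of why buffering raw frames gets expensive fast, here is a back-of-envelope calculation. The camera count and resolution are ballpark public figures; the bytes per pixel and window length are my own assumptions:

```python
# Back-of-envelope: memory needed to buffer raw camera frames for a temporal
# window. Assumptions: 8 cameras, ~1280x960 each, 3 bytes/pixel, 36 fps.
cameras, width, height = 8, 1280, 960
bytes_per_pixel = 3          # assumed; actual sensor format may differ
fps, window_s = 36, 5        # hypothetical 5-second window

per_timestep = cameras * width * height * bytes_per_pixel  # all cams, one instant
window_bytes = per_timestep * fps * window_s

print(f"One timestep: {per_timestep / 1e6:.1f} MB")        # ~29.5 MB
print(f"{window_s}s window: {window_bytes / 1e9:.1f} GB")  # ~5.3 GB
# Hence the appeal of keeping compact object/feature state instead of raw frames.
```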
 
I am curious how many frames they will have access to (it has to be memory-constrained).

Don’t see why they have to store that many frames. They can place the object in their 3D vector representation and do time-based predictions on recent frames.

They can use the fleet to generate trajectory predictions for all sorts of objects encountered while driving.

I think their limitation in the past for this type of 3D approach was that vision had not yet achieved the required size and distance estimation accuracy. Only lately has Karpathy mentioned that vision is closing in on lidar for distance and size estimation. Tesla has also demonstrated this with their cone and trash bin predictions.
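
A minimal sketch of that idea, assuming a simple constant-velocity model (a real system would use learned or filtered predictors): keep a compact per-object 3D state and extrapolate it forward, instead of hauling raw frames around.

```python
# Minimal sketch: track a compact 3D state per object and extrapolate it.
# Constant-velocity model for illustration only.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float; y: float; z: float      # position in the ego frame (m)
    vx: float; vy: float; vz: float   # estimated velocity (m/s)

    def predict(self, dt: float) -> tuple[float, float, float]:
        """Predicted position dt seconds ahead under constant velocity."""
        return (self.x + self.vx * dt,
                self.y + self.vy * dt,
                self.z + self.vz * dt)

cyclist = TrackedObject(x=12.0, y=-2.0, z=0.0, vx=-1.5, vy=0.3, vz=0.0)
for dt in (0.5, 1.0, 2.0):
    print(dt, cyclist.predict(dt))
```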
 
Don't see why they have to store that many frames. They can place the object in their 3D vector representation and do time-based predictions on recent frames. ...

A common way of embedding a temporal dimension into a neural network is with recurrent loops, where the network's output is fed back into the network as an input. This gives the neural network a sort of "memory" of what occurred in the past.

Understanding RNN and LSTM
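
For the curious, here is a bare-bones recurrent cell in NumPy that shows the feedback loop; this is the generic textbook construction, nothing Tesla-specific:

```python
# Bare-bones RNN cell: the hidden state h is fed back in at every step,
# which is the network's "memory" of past inputs.
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8
W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden -> hidden (the loop)
b_h = np.zeros(hidden_dim)

def rnn_step(x, h):
    """One time step: the new hidden state depends on the input AND the previous state."""
    return np.tanh(W_xh @ x + W_hh @ h + b_h)

h = np.zeros(hidden_dim)              # initial (empty) memory
for t in range(10):                   # a 10-frame sequence
    x_t = rng.normal(size=input_dim)  # stand-in for per-frame features
    h = rnn_step(x_t, h)              # output feeds back as input
print(h)
```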
 
But I have had the car long enough not only to rate the features it has now, but to sort of know where those features will go in the next three to six months, and so on. This damn car is going to drive itself around while I own it. I don't know when, but it's getting pretty damn close.


I applaud your optimism, but I've heard 3-6 month predictions since I jumped in during 2015, and I've seen every one of them take 12-18 months. It took almost two years for AP2 to activate the surround cameras and be on par with AP1. What kind of bullcrap was that?

Now my lease is ending in two weeks and I will be Tesla-less, because I refuse to deal with a company without customer service. It's easy for all the Model 3/Y owners to jump in on the fun now, but I've literally watched Tesla go from #1 in all areas to seriously lacking in many.

I will repurchase either when the Plaid is unveiled or when a new sensor suite with additional redundancy comes out, AND when service gets their crap together. I gave up hope on all these predictions, as have many.

Nice to see my $9k FSD investment go down the tubes without getting what we were promised.
 
I am not that confident that meaningful FSD will arrive soon. Just yesterday I was on NoAP and went through a construction zone that I had also gone through two days earlier. Cones were in place to close the right lane. Going through this time, however, the car did not quite manage the required lane change in time, and before I had time to correct it, it hit a cone and knocked off my side-view mirror. Very disappointing, as the road was not narrow, there were no cars around me, and the weather was perfect. If it can't maneuver in this most rudimentary boundary condition, how can it manage more complex situations? (M3P, 2020, HW3)
[Attached photo: 20200723_082135.jpg]
 
If you believe Elon, or any other salesperson, I've got some property 500 miles west of San Francisco to sell you. I've also got $5 that says it will not be able to handle roundabouts competently by the end of the year. And when it can handle roundabouts competently, I'll grant it Level 2.

Elon is getting very close to making claims that will once again incur the wrath of the SEC. As both an investor and an FSD owner, I do wish he would keep his big mouth shut until vaporware is realware.
 
Don't see why they have to store that many frames. They can place the object in their 3D vector representation and do time-based predictions on recent frames. ...


You are not going to get high confidence in the actual 3D scene from just a few time snapshots, let alone one. The accuracy of the object distance and size estimation is going to be a function of how many images you can use to track it. Obviously there will be diminishing returns... maybe just a few FPS is enough?
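
That intuition lines up with basic statistics: averaging N independent noisy estimates shrinks the error roughly as 1/sqrt(N), which is exactly the diminishing returns you describe. A quick simulation with a made-up noise level:

```python
# Diminishing returns from fusing more frames: averaging N independent noisy
# distance estimates cuts the error ~1/sqrt(N). The noise level is made up.
import numpy as np

rng = np.random.default_rng(42)
true_distance = 50.0   # meters
sigma = 5.0            # per-frame estimation noise (assumed)

for n_frames in (1, 4, 16, 64):
    estimates = true_distance + rng.normal(0, sigma, size=(10_000, n_frames))
    err = np.abs(estimates.mean(axis=1) - true_distance).mean()
    print(f"{n_frames:>3} frames: mean abs error ~{err:.2f} m")
```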