whoosierdaddy
FSD Supervisor
So the 0.5 in 2.5D is time. Does everyone agree that Alpha is still mostly 2.5D?

Alpha is on the rewrite, which is using the new stitching system, so no.
As this type of software gets better, it lulls the driver into complacency, and thus gets more dangerous.
When will it breeze through traffic lights? At what percent correct or incorrect? What percent of people will be paying attention?
Take, for example, what he said about Dumb Summon: he said it will blow your mind.
No. The 4 in "4D" is time. "2.5D" probably refers to the fact that the current NNs are processing something just short of full 3D.
The Alpha that Elon is running would definitely be the new 4D rewrite. What we have in our cars now is the "2.5D".
19 minutes into the call, Elon raises 2.5D: "Harshly correlated in time, but not very well." I think I am right.
He then talks about rolling out 4D. He says it would work, and then corrects himself to say it does work.
My guess is that Alpha is using 4D for some functions only. Otherwise, if Alpha has fewer disengagements than Beta, why hasn't it been rolled out? It is months away from being rolled out.
I’ve thought this too, but hasn’t it been the bane of our existence ever since the transistor was invented? Every time great new technology comes out, the boobs in coding bloat up the software so much that we need another hardware revolution to cope with it.

Anyone willing to bet that HW3 won't be enough to process all the 4D nets at 36 Hz? It doesn't seem like they designed HW3 with this new rewrite in mind; Elon would have mentioned it a long time ago.
I think the .5 in the 2.5D Elon is referring to here is the fact that current Tesla Autopilot does use some time-based analysis, specifically cut-in detection, but it is very narrow in scope.

Perhaps 2.5D uses some time elements, but probably not throughout, the way 4D does. That would make sense with Elon's quote.
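One way to picture the 2.5D vs. 4D distinction being discussed is with tensor shapes. This is a minimal sketch under my own assumptions (the image dimensions and the 8-frame clip length are illustrative, not anything Tesla has published): a per-frame network sees one image at a time, while a "4D" network sees a stack of recent frames with time as an explicit input axis.

```python
import numpy as np

# "2.5D"-style input: each camera frame is processed on its own, with only a
# narrow time-based signal (e.g., cut-in detection) layered on top afterward.
frame = np.zeros((96, 128, 3), dtype=np.uint8)   # one H x W x RGB frame (toy size)
per_frame_input = frame                          # shape (96, 128, 3): no time axis

# "4D"-style input: the network sees a short clip, so time becomes a real axis.
clip = np.stack([np.zeros((96, 128, 3), dtype=np.uint8) for _ in range(8)])
print(clip.shape)   # (8, 96, 128, 3): time x height x width x channels
```

The point of the extra axis is that the network can correlate objects across frames itself, rather than having correlation bolted on as a separate post-processing step.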
I am curious how many frames they will have access to (it has to be memory-constrained).
Don’t see why they have to store that many frames. They can place the object in their 3D vector representation and do time-based predictions on recent frames.
They can use the fleet to generate trajectory predictions for all sorts of objects encountered while driving.
I think their limitation in the past for this type of 3D approach was that vision had not achieved the appropriate size and distance estimation required. Only lately has Karpathy mentioned that vision is closing in on Lidar for distance and size estimations. Tesla has also demonstrated this with their cone and trash bin predictions.
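The "don't store raw frames, keep object states" idea above can be sketched as follows. This is purely illustrative (the class, the ten-state history, and the constant-velocity model are my assumptions, not Tesla's actual code): a tracked object keeps only a small ring buffer of recent 3D positions, which is enough for a simple time-based trajectory prediction.

```python
from collections import deque

class TrackedObject:
    """Toy sketch: bounded per-object memory instead of stored camera frames."""

    def __init__(self, history_len=10):
        # Ring buffer: only the last few (time, position) states are kept,
        # so memory use stays constant no matter how long the object is tracked.
        self.history = deque(maxlen=history_len)

    def observe(self, t, pos):
        self.history.append((t, pos))

    def predict(self, t_future):
        # Constant-velocity extrapolation from the two most recent states.
        (t0, p0), (t1, p1) = self.history[-2], self.history[-1]
        v = [(b - a) / (t1 - t0) for a, b in zip(p0, p1)]
        return [p + vi * (t_future - t1) for p, vi in zip(p1, v)]

car = TrackedObject()
car.observe(0.0, (0.0, 0.0, 0.0))
car.observe(0.1, (1.0, 0.0, 0.0))   # moving at +10 m/s along x
print(car.predict(0.2))             # [2.0, 0.0, 0.0]
```

A real system would use a learned or fleet-derived motion model rather than constant velocity, but the memory argument is the same: a handful of state vectors per object is far cheaper than buffering full frames.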
He said that he could do his commute from his home to work with almost no disengagements. I don't know how far away Elon lives but that would actually not be a very good disengagement rate.
Clearly you have never driven in California
I have not posted in a while, mainly because I am knee-deep in a Tesla solar order.
Having had my Model 3 with FSD for a year now, I will say this.
First, I understand all the complaints from people who paid for FSD and are not pleased, and won't be pleased, until FSD is actually FSD. I get it; it's a logical position.
However, my position was that I knew my car would not be driving itself around any time soon when I got it in June of 2019, but I figured I wanted to support the company and the vision, and I also wanted to have the full suite of features as it rolled out.
The improvements in the last year have been incredible. Plus, I am sure that in 2019, or in 2020 for sure, the Tesla fleet must be approaching the safest cars you can get, and for sure the safest performance cars. That's because safety warnings are rolled out with FSD features; my car now must have many, many warnings it did not have when I bought it.
Each feature has to be rolled out not to professional test drivers, but to actual amateur, real drivers like me. That means each feature is some fraction of its actual capability, because the stakes are so high that the driver always needs to be able to disconnect. I mean, of course.
But I would guess that the work involved in figuring out how to roll out the feature to the fleet is at least as much work as developing the feature in the first place.
The current state of recognizing and stopping for stop lights and signs is really amazing. I cannot believe a car sold to the general public can do it, but it can. Obviously, slowing to the speed limit, and exactly the speed limit, means for most people that the car is not driving "as well" as most drivers, who consistently go over the speed limit. Plus, there is the need for confirmation to go through green lights.
But I have had the car long enough not only to rate the features it has now, but to sort of know where those features will go in the next three to six months, and so on. This damn car is going to drive itself around while I own it. I don't know when, but it's getting pretty damn close.