Welcome to Tesla Motors Club

Tesla AI Day - 2021

There he goes again...predicting something in about two weeks ;).
Actually this should be interesting.


 
Because progress does not stop for anyone.

L5 is L5, dude. But if they need more engineers to do it in 2021, okay cool.

They are not going to sink talent updating anything that was firmly in the 1.0 code stack.

That's what's happening now, but that's dumb of them. Others will have a better experience on things that matter.
We will have paid $10,000 for something that's buggy AF and still has the BETA stamp attached to it.

The Karpathy Doctrine is that AI takes over the entire stack (code 2.0).

Driving policy will not be in "2.0 code". That's fantasy. If you actually understood this you would agree.

Artificial general intelligence 2-3 times greater than the average human by the end of the year. That is not a question. Few people understand how big a deal AGI will be.

Funny :) AGI is a long way off.
 
Artificial general intelligence 2-3 times greater than the average human by the end of the year. That is not a question. Few people understand how big a deal AGI will be.
This will be achievable because humans are actually very dumb. Autopilot engineers with 2-3 times the intelligence of average humans have been unable to achieve FSD on par with humans of significantly below average intelligence!
 
Go watch a Waymo presentation with @diplomat33 if you want dumbed down!

I respect @diplomat33 because he has always just followed the data, learning about the challenges of FSD, and evolved his opinion over time accordingly. No need to lump him in with the various bashers here...he's just calling balls and strikes, for the most part (2018):

If Elon was telling the truth about almost being able to do the FSD coast to coast drive, and I have no reason not to believe him, then that would suggest that Tesla is actually pretty close to having FSD. This would lend support to the notion that Tesla feels that they are close enough to finishing FSD that they might as well just wait a bit longer until FSD is done before releasing the new EAP features.
Please, don't be sarcastic. If the FSD demo was going to be that bad, Elon would never have said that it was almost finished. Clearly, by Elon's words, the FSD is much better than that. The most reasonable interpretation of Elon's words is that the FSD is good, just not good enough for the public. Tesla wants to wait until FSD is better than "good enough".
I do find it interesting how FSD-skeptic this forum seems to be. Every time I try to express an optimistic or positive take on FSD, I get slammed for being ignorant and dumb.
 
Driving policy will not be in "2.0 code". That's fantasy. If you actually understood this you would agree.
Sometimes I wonder if this is a mistake. My understanding is that perception is really the only thing implemented with NNs, and everything else is your typical path finding, FSMs/FuSMs and decision trees.
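That "1.0 stack" style of driving policy could be caricatured as a hand-written finite state machine. A minimal sketch (the states, events, and transitions below are invented purely for illustration, not Tesla's actual code):

```python
# Hypothetical sketch: driving policy as a hand-coded FSM.
# Every state and transition must be enumerated by an engineer up front.
TRANSITIONS = {
    # (current_state, event) -> next_state
    ("cruise", "slow_lead_car"): "follow",
    ("follow", "lead_car_gone"): "cruise",
    ("follow", "lead_car_stopped_long"): "plan_pass",
    ("plan_pass", "adjacent_lane_clear"): "lane_change",
    ("plan_pass", "adjacent_lane_blocked"): "follow",
    ("lane_change", "lane_change_done"): "cruise",
}

def step(state, event):
    """Look up the next state; any event not enumerated above
    simply leaves the machine where it is."""
    return TRANSITIONS.get((state, event), state)
```

The weakness the rest of this post gets at is visible in the last line: anything the engineers didn't anticipate falls through to "do nothing."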

But we see in real-world demos that driving policy requires inference from multiple variables. Deciding when to pass a double-parked car, for instance, requires considering more variables than (I think) a human engineer could reasonably encode into software. You have to guess how likely the car in front of you is to move, and then how likely it is that the lane over will be clear.

Most driving is pretty simple, but (as every autonomous car developer has discovered) it's the infrequent exceptions that are holding back true self-driving. These exceptions might be too numerous and complicated to be put into policy.

I doubt that if you tried to train an NN to encode all typical driving policy, you would get very far. But I really wonder how well you could get an NN to perform at "best guess" attempts at recovering from situations that don't clearly fall into existing policy. It might perform better than engineers trying to guess, ahead of time, what the unknown unknowns of all driving situations are.

Of course, now you have a volatile, unpredictable, difficult-to-test black box of code making decisions about life and death. The interesting question is: would that perform better than a human attempting the impossible task of programming all driving situations into static policy?
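The double-parked-car example above can be made concrete with a toy comparison: a hand-coded rule versus a single learned unit that fuses soft signals into one probability. Everything here (feature names, thresholds, weights) is invented for illustration; a real policy network would be vastly larger, but the structural difference is the point:

```python
import math

def hand_coded_policy(hazards_on, stopped_seconds, gap_in_next_lane_m):
    """Static rules an engineer might write: every condition
    for passing must be explicitly enumerated in advance."""
    likely_parked = hazards_on or stopped_seconds > 20
    lane_clear = gap_in_next_lane_m > 30
    return "pass" if (likely_parked and lane_clear) else "wait"

def learned_policy(features, weights, bias, threshold=0.5):
    """A single logistic unit standing in for a policy network:
    it fuses many soft signals into one probability of 'safe to pass',
    rather than requiring each case to be written out by hand."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    p_pass = 1.0 / (1.0 + math.exp(-z))
    return ("pass" if p_pass > threshold else "wait"), p_pass
```

The hand-coded version is auditable but brittle at its thresholds; the learned version degrades gracefully on in-between cases but is exactly the "black box making life-and-death decisions" problem raised above.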
 
I respect @diplomat33 because he has always just followed the data and has evolved his opinion over time accordingly.

Yup, but what data you deem valuable / relevant is always biased in the face of an unsolved problem. It could be that a company with poor stats ends up solving the problem, or it could be that the company with the best stats achieves an Uber-like robotaxi service USA-wide.
 
I respect @diplomat33 because he has always just followed the data and has evolved his opinion over time accordingly. No need to lump him in with the various bashers here...he's just calling balls and strikes, for the most part (2018):
That's your right.

I do not respect @diplomat33, because he is blinded by shiny presentations and demos. When you point out valid criticism (for example, Waymo failing when it encountered a cone, and Waymo employees then saying that Waymo routes around construction zones to begin with), to @diplomat33 that is still TRUE FSD, yet he will dismiss most things that Tesla excels at because it's still L2 (as if he never read what he was buying, especially back in 2018).
That to me points to a person showing blind faith in a system he has never even tried. But he will mask it with BS reports from the Cali DMV and such.

As I said above, I have 5 years on Autopilot... starting from AP1 going through every iteration Tesla had to offer.
Tesla - like SpaceX - is not afraid to fail on an approach, and it is part of their MO - Fail fast, iterate and fail forward!

Watching the progress on my cars and the stated end goal of FSD at Tesla is what makes me keep buying the FSD package.
 
Driving policy will not be in "2.0 code". That's fantasy. If you actually understood this you would agree.
Whatever helps you sleep at night - I guess!

Seems a lot of the fantasies about FSD have a tendency to become reality.
Remember the initial traffic light detection release and the BS the peanut gallery poured on that feature?

I have not had that feature fail on me in the life of my 11/2020 build Model Y! Not a single time!

Again, you seem to be stuck in the legacy-code mentality, which I actually understand. But just because you have little to no imagination of how to get something done, do not ascribe your limitations to those that can, and are, doing something about it!
 
But you should not ignore the real FSD that other AV companies have!
I've been following autonomous driving since before you, and I follow the companies pretty closely.
There is no need to ignore anything! The consumer facing products clearly show who's got what.

Not some presentation or demo!

For those that care (or not but still want to know) here is my previous post on my journey following autonomous driving: Autonomous Car Progress