What's the "known limit" of C++ planning code?
Server-side, Tesla's stated need for compute power is much higher than their current compute power.

Further, nobody... including Tesla... even knows how much compute is needed car-side. They (Tesla) have guessed incorrectly multiple times at this point; there's no reason to think they know the answer today either.
You are right that there are no guarantees. And like you, I suspect that if Tesla finds a new local maximum it will be because of hardware limitations in the car.

But now with end-to-end and a massive future investment in compute, I am optimistic that we are on a path to greater improvement than we have ever seen before.
 
It will often result in some subsequent creeping, but it's fine as long as you commit to it.
It's crazy how important it is for the car to understand what it can't see. I wonder how they train that now?

Even seems like Waymo has a potential problem with that!

It's interesting. Haven't seen a lot of evidence that anything other than extremely cautious creeping is the solution to this, for now. Humans seem to be a lot better at it in most cases.
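Purely to illustrate the kind of reasoning involved (this is not Tesla's or Waymo's actual approach; the function name and every number below are my own assumptions), one textbook way to handle "what the car can't see" is to assume a worst-case vehicle hiding just beyond the visible edge of the cross street and only commit if the ego car can clear the conflict zone before that phantom could arrive:

```cpp
// Illustrative sketch only: a worst-case "phantom vehicle" check for an
// occluded intersection. All values are invented for the example.
#include <iostream>

// Returns true if the ego car can clear the conflict zone before a
// hypothetical vehicle, assumed to be lurking just past the visible
// edge of the cross street, could reach it.
bool safeToProceed(double visibleDistanceM,    // how far down the cross street we can see
                   double crossTrafficSpeedMs, // assumed worst-case speed of hidden traffic
                   double egoClearTimeS)       // time the ego needs to clear the conflict zone
{
    // Time for a phantom vehicle at the edge of visibility to reach us.
    const double phantomArrivalS = visibleDistanceM / crossTrafficSpeedMs;
    // Proceed only if we are through the intersection well before it could arrive.
    const double marginS = 1.0;
    return egoClearTimeS + marginS < phantomArrivalS;
}

int main() {
    // Example: 40 m of visibility, assume 15 m/s (~34 mph) cross traffic,
    // and 3 s needed to clear the lane.
    std::cout << std::boolalpha
              << safeToProceed(40.0, 15.0, 3.0) << '\n';  // false: keep creeping
    std::cout << safeToProceed(70.0, 15.0, 3.0) << '\n';  // true: enough visibility
}
```

Under that framing, cautious creeping is just the car buying more visible distance until the check passes.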
 
It's interesting. Haven't seen a lot of evidence that anything other than extremely cautious creeping is the solution to this, for now. Humans seem to be a lot better at it in most cases.
😂 You overestimate human capability. The accident statistics would seem to show that many humans "let Jesus take the wheel" and pray for the best.
 
It's crazy how important it is for the car to understand what it can't see. I wonder how they train that now?

Even seems like Waymo has a potential problem with that!

It's interesting. Haven't seen a lot of evidence that anything other than extremely cautious creeping is the solution to this, for now. Humans seem to be a lot better at it in most cases.


Humans sit forward of the B-pillar, and their necks move (and they can lean), so they can "creep" further visually into an intersection without moving the car than the static B-pillar cams can.

One reason many think something like side-facing front signal light cams will be needed for actual L5 driving.... Chuck Cook had a good video showing how poorly the B-pillars see around obstructed corners.
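For a rough sense of scale, here's a toy geometry sketch (all setback numbers are my assumptions, not measured Tesla values): to see past a wall flush with the edge of the cross street, the viewpoint itself has to clear the wall, so a viewpoint mounted further back means the nose has to stick further out into traffic.

```cpp
// Rough geometry sketch: how far the nose must protrude past an occluding
// wall before a given viewpoint can see down the cross street.
// All dimensions are invented/approximate for illustration.
#include <iostream>

int main() {
    // Longitudinal setback from the front bumper to the viewpoint, in metres.
    const double driverEyeSetbackM  = 1.6;  // rough guess for a seated driver
    const double driverLeanM        = 0.3;  // they can lean/crane forward
    const double bPillarCamSetbackM = 2.4;  // rough guess for a mid-cabin B-pillar cam

    // The bumper must protrude into the intersection by at least the
    // viewpoint's setback before that viewpoint clears the wall.
    const double driverProtrusionM = driverEyeSetbackM - driverLeanM;
    const double cameraProtrusionM = bPillarCamSetbackM;

    std::cout << "Nose protrusion needed (leaning driver): " << driverProtrusionM << " m\n";
    std::cout << "Nose protrusion needed (B-pillar cam):   " << cameraProtrusionM << " m\n";
    std::cout << "Extra exposure for the camera-only car:  "
              << (cameraProtrusionM - driverProtrusionM) << " m\n";
}
```

Even with guessed numbers, the camera car ends up exposing roughly an extra metre of nose before it can see what a leaning driver already could.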
 
Humans sit forward of the B-pillar, and their necks move (and they can lean), so they can "creep" further visually into an intersection without moving the car than the static B-pillar cams can.

One reason many think something like side-facing front signal light cams will be needed for actual L5 driving.... Chuck Cook had a good video showing how poorly the B-pillars see around obstructed corners.

Such a simple idea and we think that Tesla didn't think about this for 9 years?
 
The accident statistics would seem to
Daily reminder that we have no accident rate data.
so they can "creep" further visually into an intersection
The bigger issue at the moment is the creep rate and the hesitation in getting to the right spot. Obviously many cases can't work at all; that's a different issue.

The car needs to know how far it is sticking out and quickly advance fully, or only as far as needed, whichever is less (sketched below).

Sensor limits make some turns impossible. Those can be ignored; not important.
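A minimal sketch of that "fully, or only as far as needed, whichever is less" rule, with made-up names and numbers (not anything from Tesla's planner):

```cpp
// Toy creep-target calculation: advance only as far as needed for an
// unobstructed view, capped by how far we can safely stick out.
#include <algorithm>
#include <iostream>

double creepDistance(double distanceForClearViewM,  // creep needed to clear the obstruction
                     double maxSafeProtrusionM)     // furthest we may safely stick out
{
    return std::min(distanceForClearViewM, maxSafeProtrusionM);
}

int main() {
    std::cout << creepDistance(1.1, 2.0) << '\n';  // view clears first: creep 1.1 m
    std::cout << creepDistance(3.0, 2.0) << '\n';  // safety cap binds: creep 2.0 m
}
```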
 
I think that's the wrong way to look at it. The C++ code isn't going to have to evolve significantly to handle new edge cases; only the NN does.

If you are talking about whether heuristics or NN is a better fit for solving the driving problem, I think it's becoming clear that NN is the way to go.

We've tried to solve it with heuristics for at least a few decades now and failed.

But, given the human example, we know for certain that driving can be solved with NN and cameras. I believe that Elon is slowly winning the argument.
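To make that structural point concrete, here's a deliberately crude caricature (not Tesla's actual code or architecture; every type and function name is invented): with a heuristic planner, each newly discovered edge case tends to become another branch in the C++, while with an end-to-end policy the C++ stays a fixed wrapper around inference and edge cases are absorbed into the training data and weights.

```cpp
// Caricature only -- not Tesla's real planner. It just contrasts where
// new edge cases end up living in the two approaches.
#include <vector>

struct Scene { bool occludedCrossTraffic = false; bool unprotectedLeft = false; };
struct Trajectory { std::vector<double> steering, accel; };

// Hand-written heuristic planner: edge cases accumulate as code.
Trajectory heuristicPlan(const Scene& s) {
    Trajectory t;
    if (s.occludedCrossTraffic) { /* creep-and-peek rule */ }
    else if (s.unprotectedLeft) { /* gap-acceptance rule */ }
    // ...every newly discovered edge case means another branch here...
    return t;
}

// End-to-end policy: the C++ is just a thin wrapper around inference;
// new edge cases change the training set and weights, not this code.
struct LearnedPolicy {
    Trajectory infer(const Scene& /*sensorInput*/) const {
        return Trajectory{};  // stand-in for a network forward pass
    }
};

int main() {
    Scene scene;
    Trajectory a = heuristicPlan(scene);          // heuristic path
    Trajectory b = LearnedPolicy{}.infer(scene);  // learned path
    (void)a; (void)b;
}
```

Whether the learned approach actually converges on robotaxi-level reliability is exactly what this thread is debating; the sketch only shows where the complexity moves.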
 
Bet: Robotaxi will not be solved without a geofence until Driver AI achieves sentience (so, never, IMO).

(Perhaps limited to the driving context and sensor input, but including, for example, cultural norms about jaywalking, interactions with humans, eye contact and subtle nods, predicting driveway behavior based on neighborhood style - do cars often back out of driveways on quiet residential streets without really looking thoroughly? - the list is nearly infinite.)

The ability for abstract thought and inference about new situations from past abstractions will be required. Imagination.

How many support staff are needed for each Waymo robotaxi? Have fun scaling that to support backroads in the hinterland. A 6-hour wait for pickup in a stalled taxi somewhere in the middle of nowhere? Or occupants all dead because they drove over a live downed wire and it hooked the car?

Or are we not talking about an ACTUAL lack of geofences, but just some much bigger, still-limited ODD?

(This bet is not originally my idea; it's a "bet" between my brother and me, his idea, and I'm actually on his side.)
 
Such a simple idea and we think that Tesla didn't think about this for 9 years?

They've changed course on multiple aspects of self driving multiple times... Remember when HW2.0 was enough? Then HW2.5 was? Then HW3.0 was? We're on HW4 now and we're still at L2 that mostly works OK most of the time.

Remember when RCCC cameras were all you needed for self driving? Except now it's RCCB cameras.

Remember when they made Radar the PRIMARY AP sensor... then years later told us radar was dumb to use and they removed it entirely? But now they've added it back, kind of, and only on a few models?

Remember when they prominently listed USS as an important sensor in self driving, and now have removed it?

Remember when they did multiple ground-up re-writes, and each one was going to be fire and bring robotaxis to the streets, and never did?


"Tesla has done it this way for a while so it MUST be the correct final answer" is a poor argument.


Also, 9 years ago was February 2015... Teslas came with TWO total cameras (one front, one rear), not the 8 they'd change to later (and the 9 they'd change to later than that... or the 10 they've changed to now on the CT)
 
They've changed course on multiple aspects of self driving multiple times... Remember when HW2.0 was enough? Then HW2.5 was? Then HW3.0 was? We're on HW4 now and we're still at L2 that mostly works OK most of the time.

Remember when RCCC cameras were all you needed for self driving? Except now it's RCCB cameras.

Remember when they made Radar the PRIMARY AP sensor... then years later told us radar was dumb to use and they removed it entirely? But now they've added it back, kind of, and only on a few models?

Remember when they prominently listed USS as an important sensor in self driving, and now have removed it?

Remember when they did multiple ground-up re-writes, and each one was going to be fire and bring robotaxis to the streets, and never did?


"Tesla has done it this way for a while so it MUST be the correct final answer" is a poor argument.


Also, 9 years ago was February 2015... Teslas came with TWO total cameras (one front, one rear), not the 8 they'd change to later (and the 9 they'd change to later than that... or the 10 they've changed to now on the CT)

The things you mentioned aren't equivalent to constraints like obstructions or field/angle of view, though. Camera position and field of view are elementary ideas, and it seems Tesla has kept similar cameras and views on the Cybertruck. So unless Tesla is failing at the simplest and most elementary decisions, they must have concluded that the current cameras and FoVs are adequate for robotaxi-level safety thresholds; not to mention they even removed a front camera with HW4.
 
So unless Tesla is failing at the simplest and most elementary decisions, they must have concluded that the current cameras and FoVs are adequate for robotaxi-level safety thresholds; not to mention they even removed a front camera with HW4.
More likely, robotaxi isn't actually a goal for them this decade, except in marketing.
 
The things you mentioned aren't equivalent to constraints like obstructions or field / angle of view though. Camera position and field of view are elementary ideas

What color the cameras see is an elementary idea, and Tesla changed it.

What resolution the cameras see (and thus the ability to read small signs) is an elementary idea, and Tesla changed it.

How much inference power the car needs is elementary, and Tesla has changed it multiple times.

Whether radar should be a primary sensor (or used at all) is an elementary idea, and Tesla has changed that multiple times.

The method of driver monitoring should be elementary, and Tesla changed that, including adding another camera for it.


, and it seems Tesla has kept similar cameras and views on the Cybertruck.

I guess you're unaware the front camera config is different on CT compared to the original FSD/AP 2.0 cars? And they're higher resolution, and see different colors.

You seem to want to keep no-true-Scotsmanning all the facts that debunk your idea that Tesla can't make mistakes or change their mind over time about what is needed for self driving.
 
Some major déjà vu happening here reading everybody's take on the next, yet-to-be-experienced-but-promised-to-be-fire FSD version. Either this is a whole new group of posters or the posters here have short memories. What I have learned from driving on the FSD ADAS suite since October of 2018 (when they released NoA) is that Elon's, Omar's, Dirty Tesla's, and all the rest's comments don't mean squat - I'll believe it when I drive it.
Nominally true - I think the optimism is based on the videos released more than anything. The only thing you can count on Omar for is to say how smooth the ride is with each successive version.
 