Welcome to Tesla Motors Club

FSD Beta Videos (and questions for FSD Beta drivers)

Or that it just wouldn't crash through the pile, so there is no video of it crashing through to show.
Ok. Show that then. (but much bigger than a small dog & much faster >30mph)
There are so many near-miss videos that would (most likely) have been collisions if the Safety Driver hadn't disengaged. We just don't know what happens if they didn't disengage. Show me that. An impending collision that you don't disengage from. (Very large, soft, safe objects, nobody else around, no damage to your car, >30mph.)
 
Ok. Show that then. (but much bigger than a small dog & much faster >30mph)
There are so many near-miss videos that would (most likely) have been collisions if the Safety Driver didn't disengage. We just don't know what happens if they didn't disengage. Show me that. An impending collision that you don't disengage from. (soft, safe objects, nobody else around, no damage to your car, >30mph)
FSD Beta will correctly identify them as soft objects and drive through them. :p
 
Ah. I think it's more likely than not that for true "Full Self Driving" (i.e. Level 5) we will need general intelligence.
Can't deny that, since in L5 you may not even have a steering wheel to take over. However, "FSD" in Tesla terms is not L5, not even close. It's L3 at most. So we all need to be careful we're not talking about different things here (when I say "FSD" I mean specifically the FSD currently in beta, which is clearly not L5).
 
Wow...could it be real?

[Attached image: 1629938173327.png]
 
There are other players in the self-driving space that are completely throwing out a human-coded policy/planning layer and going full end-to-end machine learning (video of driving in, action of driving out). For systems like this, it doesn't even need to perceive and segment everything in the scene. It isn't important to count 35 pedestrians in the intersection and predict each of their paths when it only takes one to block your path. Specifically talking about OpenPilot.
I'm no longer sure where Tesla is on this spectrum. Clearly they are moving more and more of the logic into the NN (they made that clear nearly 2 years ago) and out of hand-coded decision logic, but how far that has got, and how much of a hybrid the current stack actually is, I don't think anyone outside of Tesla knows. There have been discussions elsewhere here that they have overflowed the capacity of the "dual redundant" HW3 chips and are now using them in a non-redundant manner, but, again, I don't think anyone is really certain about any of this.
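For what it's worth, the architectural difference being discussed is easy to sketch. This is a toy illustration of the two extremes, with every function name made up for the example; nothing here reflects OpenPilot's or Tesla's actual code:

```python
from dataclasses import dataclass

@dataclass
class Control:
    steer: float      # + = steer left
    throttle: float   # 0..1

# --- Modular stack: separate, inspectable stages ---------------------
def perceive(frame):
    """Stand-in for a perception NN: raw frame -> detected obstacles."""
    return [px for px in frame if px > 0.5]   # toy 'detector'

def plan(obstacles):
    """Hand-coded policy: slow down and steer if anything is detected."""
    if obstacles:
        return Control(steer=0.2, throttle=0.1)
    return Control(steer=0.0, throttle=0.5)

def modular_stack(frame):
    return plan(perceive(frame))

# --- End-to-end stack: one learned mapping ---------------------------
def end_to_end_stack(frame):
    """A single policy net maps pixels straight to controls; no object
    list is ever materialized, so there is nothing to 'count' in the
    scene. (Stub standing in for a trained network.)"""
    return Control(steer=0.0, throttle=0.5)
```

The point of the contrast: in the modular version you can inspect and hand-tune each stage, while in the end-to-end version the only artifact is the learned mapping itself.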
 
I'm no longer sure where Tesla is on this spectrum. Clearly they are moving more and more of the logic into the NN (they made that clear nearly 2 years ago) and out of hand-coded decision logic, but how far that has got, and how much of a hybrid the current stack actually is, I don't think anyone outside of Tesla knows.


Up until roughly now, in the public code NNs were only used for perception.

Prediction and planning/policy (i.e. actual driving) were 100% conventional code, as confirmed by Green repeatedly.

As AI alluded to, they're moving SMALL bits of the prediction code toward using NNs (but still blending with lots of, in fact mostly, conventional code).

Planning (actual driving) remains all conventional code AFAIK.
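The "blending" described above might look, very loosely, like weighting a learned prediction against a heuristic one. This is a hypothetical sketch under that assumption; the function and the confidence-weighting scheme are invented for illustration, not anything confirmed from the actual stack:

```python
def blend_prediction(nn_pred, heuristic_pred, nn_conf):
    """Mix an NN's predicted value with a hand-coded heuristic's,
    weighted by how much the NN output is trusted (0..1)."""
    w = max(0.0, min(1.0, nn_conf))   # clamp confidence to [0, 1]
    return w * nn_pred + (1.0 - w) * heuristic_pred

# Example: a lead car's predicted speed one second from now
nn_speed = 24.0     # learned prediction (m/s)
rule_speed = 20.0   # constant-velocity heuristic (m/s)
blended = blend_prediction(nn_speed, rule_speed, nn_conf=0.25)
# with low confidence in the NN, the result stays close to the heuristic
```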


There have been discussions elsewhere here that they have overflowed the capacity of the "dual redundant" HW3 chips and are now using them in a non-redundant manner, but, again, I don't think anyone is really certain about any of this.

No, we're very certain about that. They've been borrowing compute from Node B (i.e. no redundancy) since mid-to-late last year... both Green and James Douma have confirmed that fact.

Some folks are convinced they'll magically just "optimize" everything to use vastly fewer resources in the long run and make it all fit back in one node, which seems unlikely given how they keep looking to ADD things to the code, but maybe once we see a unified version of the vision-only stacks Elon's talking about in V10 we'll have better insight into how realistic such hopes really might be.
 
I'm no longer sure where Tesla is on this spectrum. Clearly they are moving more and more of the logic into the NN (they made that clear nearly 2 years ago) and out of hand-coded decision logic, but how far that has got, and how much of a hybrid the current stack actually is, I don't think anyone outside of Tesla knows. There have been discussions elsewhere here that they have overflowed the capacity of the "dual redundant" HW3 chips and are now using them in a non-redundant manner, but, again, I don't think anyone is really certain about any of this.
There have been discussions elsewhere here that they have overflowed the capacity of the "dual redundant" HW3 chips and are now using them in a non-redundant manner, but, again, I don't think anyone is really certain about any of this.
No, we're very certain about that. They've been borrowing compute from Node B (i.e. no redundancy) since mid-to-late last year... both Green and James Douma have confirmed that fact.
This discussion by Douma this July seems to disagree with the take that Tesla is necessarily hitting capacity limits even if they are doing multi-node NN computing. Or is there a more recent take you are referring to?
James Douma on Tesla’s Fleet Data Collection Effort
Some folks are convinced they'll magically just "optimize" everything to use vastly fewer resources in the long run and make it all fit back in one node, which seems unlikely given how they keep looking to ADD things to the code, but maybe once we see a unified version of the vision-only stacks Elon's talking about in V10 we'll have better insight into how realistic such hopes really might be.
I don't feel they need to do that even if they reach limits on a single HW3 node. If they get the multi-node setup working properly, as discussed in previous threads, they can run a minimal safety-critical set on the spillover node, and that would satisfy all the redundancy requirements for even L4/L5 according to SAE. No need to fit everything into one node.
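The scheme being proposed here is essentially degraded-mode failover: the primary node runs the full stack, while the spillover node carries its extra work plus a minimal safe-stop stack. A hypothetical sketch of that idea; all names and behaviors are invented for illustration and nothing here reflects Tesla's actual design:

```python
def full_stack(frame):
    """Primary node: complete perception + planning (stubbed)."""
    return {"action": "drive", "target_speed": 30.0}

def minimal_safe_stack(frame):
    """Spillover node: just enough perception and planning
    to bring the vehicle to a safe stop."""
    return {"action": "pull_over", "target_speed": 0.0}

def drive_tick(frame, primary_alive):
    """One control cycle: if the primary node dies, degrade to the
    safe-stop stack instead of losing perception entirely."""
    if primary_alive:
        return full_stack(frame)
    return minimal_safe_stack(frame)
```

The design point is that the fallback stack must already be resident and running on the surviving node; as noted later in the thread, you can't spin up replacement NNs after the failure has happened.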
 
Can't deny that, since in L5 you may not even have a steering wheel to take over. However, "FSD" in Tesla terms is not L5, not even close. It's L3 at most. So we all need to be careful we're not talking about different things here (when I say "FSD" I mean specifically the FSD currently in beta, which is clearly not L5).
FSD has many definitions. haha.
When Elon talks about the goal for FSD with the current hardware being robotaxis with no geofence, he's obviously talking about L5. You can't have an L3 robotaxi. Tesla has never given any indication that they plan on making an L3 system.
Personally I don't see FSD Beta as a driver assist system; it doesn't look like it makes driving easier or safer. I think it's exactly what its name suggests, a beta version of full self driving (i.e. a car that does not require a driver).
 
FSD has many definitions. haha.
When Elon talks about the goal for FSD with the current hardware being robotaxis with no geofence, he's obviously talking about L5. You can't have an L3 robotaxi. Tesla has never given any indication that they plan on making an L3 system.
Personally I don't see FSD Beta as a driver assist system; it doesn't look like it makes driving easier or safer. I think it's exactly what its name suggests, a beta version of full self driving (i.e. a car that does not require a driver).
Well, according to discussions with the CA DMV, Tesla is saying FSD Beta as it currently stands is simply development to get the City Streets function released, and that such a release is firmly an L2 system. So technically it's an "end-to-end" L2 system. They do have a goal to eventually get to L4/L5, but not with the current Beta. You are right there is zero mention of L3.
Autonomous Car Progress
The line gets really blurry between an end-to-end L2 system and an L4+ system that is using a "safety driver" during testing (Uber argued about this in the past also).
As a side note, robotaxis don't have to be L5, they can be L4 (like Waymo is already doing).

The way Elon words the idea for FSD is
1) "feature complete" (matching technically end-to-end L2)
2) "feature complete to the degree that … where we think that the person in the car does not need to pay attention" (basically L3+)
3) "a reliability level where we also convince regulators that that is true" (L4/L5, presuming regulators have minimum safety metrics, not talking about jurisdictions like Arizona where they don't care at all and pretty much anything goes).
Autonomous Car Progress
 
This discussion by Douma this July seems to disagree with the take that Tesla is necessarily hitting capacity limits even if they are doing multi-node NN computing. Or is there a more recent take you are referring to?
James Douma on Tesla’s Fleet Data Collection Effort

Douma disagreed with Green on WHY there was no longer redundancy due to borrowing compute from Node B; he did not disagree that that was the current state of things, though.

Which probably puts him in the "they will maybe fix it later on" camp, but as things stand now, no redundancy.



I don't feel they need to do that even if they reach limits on a single HW3 node. If they get the multi-node working properly, as discussed in previous threads, they can run a minimal safety critical set on the spillover node and that would satisfy all the redundancy requirements for even L4/L5 according to SAE. No need to fit everything into one node.

Except there's no evidence they can do that.

Remember, in the production code the only thing they're using NNs for is perception right now (Green reconfirmed that just yesterday).

And that's split across both sides.

If one side fails- you lose perception. How do you "fail safely" at that point?

They need the perception stack running fully on both sides to be able to do that.

Which, if they could do that, they wouldn't be splitting it between sides.

(and it's not like when one side crashes they can then decide to spin up a bunch of extra NNs on the other to take over perception anyway- it's too late by then).

The fact that HW3 could survive a failover of one side was one of the major things they hyped about it at Autonomy Day.


So, other than the idea that Tesla is just writing terrible, massively bloated code they'll somehow be able to add a ton MORE ability to and then also massively shrink in compute needed, I don't see how you get above L3 (or even L2, really) without HW4 (if that's even enough, since they don't actually know until they solve it).
 
Seriously? He's once again moving his lips but the truth isn't there. Not even close. Funny thing is, he probably believes his own lies. The man needs serious help.

Maybe we need an X-File investigation into FSD?
Now that is not cool. Tesla has been very much on track in terms of releases recently. Also, if one believes something... is it a lie? Wildly optimistic... sure. Lie, not cool.

But I guess you work for GM or Ford or Waymo or VW and the future is dark and certain.
 