FSD Beta Videos (and questions for FSD Beta drivers)

The car has vision problems on city streets.

You'll be waiting a loooong time if you think the current HW sensor array can handle this without huge numbers of accidents, or worse.

Time for the next version, please. This was a good learning version, but it's not even close to what it's going to take to get enough nines to trust life and limb.

People continue to trivialize this problem. It's one of humanity's top 10 hardest problems of all time, and you think one company has a handle on it?

We are years away from anything higher than Level 2 or 3.
 
And yet ....

Elon said:
We should be there with Beta 10, which goes out a week from Friday

Another few weeks after that for tuning & bug fixes

Best guess is public beta button in ~4 weeks

The only true commitment here is Beta 10 going out a week from Friday; everything else should be interpreted as aspirational. Hope for the best, plan for the worst, and you'll limit potential disappointment.
 
Hope for the best, plan for the worst, and you'll limit potential disappointment.
Step by step.
 
Will beta testers even see 9.3 if 10 is coming out next Friday?

What do we really think will have changed in that time? The proof is in the puddin': have low expectations and you'll be pleasantly surprised by an overshoot rather than disappointed by under-delivery.
I don't understand these ignorant comments. AI Day showed they are rebuilding NN models weekly. I prefer to listen to experts in their field.

Lex Fridman has a breakdown and overview of key points. He is well respected and has done a variety of interviews during which his knowledge and understanding are demonstrated.

Lex Fridman -- About
Research in human-centered AI and deep learning at MIT and beyond in the context of autonomous vehicles and personal robotics. I'm particularly interested in understanding human behavior in the context of human-robot collaboration, and engineering learning-based methods that enrich that collaboration.

I received my BS, MS, and PhD from Drexel University where I worked on applications of machine learning, computer vision, and decision fusion techniques in a number of fields including robotics and human sensing.

Before joining MIT, I was at Google working on machine learning for large-scale behavior-based authentication.

 
I don't understand these ignorant comments. AI Day showed they are rebuilding NN models weekly.


What does "rebuilding NN models weekly" mean in terms of actual improvement for an end user? I've watched Lex's video on AI day, I watched the autonomous driving presentation at AI day, I've probably watched all of the beta tester videos posted by people with YouTube accounts. This is the opposite of ignorance, but you could call me a pessimist although I'd prefer to call myself a realist.

I think this is a monumental challenge and the problems we're seeing in beta tester videos won't be solved any time soon. If it happens then it'll be a pleasant surprise, if it doesn't then I won't be disappointed. I'd say you're already primed for disappointment, but lets see how it pans out.
 
What does "rebuilding NN models weekly" mean in terms of actual improvement for an end user?
Your comment makes no sense. You still don't understand the NN training and its benefit after watching the Lex video? [update: 8:28 - Summary: 3 key ideas -- talked about the iterative process and the benefit] I'm not sure what that says. There has been constant improvement over the past several months, and I don't know what it says about your understanding that you are not recognizing that. They are putting out FSD Beta releases every few weeks, retrained on the scenario failures from the previous builds. They set up criteria/triggers for the unique cases they need; the fleet looks for those cases, and matching clips can be uploaded for training.
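To make that loop concrete, here's a toy sketch in Python of a trigger-based data engine of the kind described at AI Day. Every name in it is invented for illustration; it's obviously not Tesla's code, just the shape of the iteration: define triggers, collect matching fleet clips, retrain, ship the next build.

# Hypothetical sketch of the trigger-based "data engine" loop described above.
# All names and numbers are invented; this is not Tesla's code or API.
from dataclasses import dataclass, field

@dataclass
class Clip:
    route_id: str
    triggered_by: str        # which fleet trigger fired, e.g. "cut-in"
    frames: int = 0

@dataclass
class DataEngine:
    active_triggers: set                      # scenarios the team wants more examples of
    training_set: list = field(default_factory=list)

    def ingest(self, clip: Clip) -> None:
        # Fleet cars only upload clips that match an active trigger.
        if clip.triggered_by in self.active_triggers:
            self.training_set.append(clip)

    def release_cycle(self, model: dict) -> dict:
        # Every few weeks: fold the newly collected failure cases into
        # training, "retrain", and ship the next beta build.
        new_model = {"version": model["version"] + 1,
                     "trained_on": model["trained_on"] + len(self.training_set)}
        self.training_set.clear()
        return new_model

engine = DataEngine(active_triggers={"cut-in", "occluded-left-turn"})
engine.ingest(Clip(route_id="route-a", triggered_by="cut-in"))         # uploaded
engine.ingest(Clip(route_id="route-b", triggered_by="road-debris"))    # ignored, no trigger set
print(engine.release_cycle({"version": 9, "trained_on": 0}))
# {'version': 10, 'trained_on': 1}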

Re: monumental challenge -- I agree and I don't have any predictions. I think drivers will be responsible to monitor the car for a very long time but it will do more and more work.

Perfect example of the advancements:
 
Will beta testers even see 9.3 if 10 is coming out next Friday?
Sorry, I understand now what you meant by the above; I hadn't seen that. I misunderstood your comment and thought you were just being facetious.

InsideEVs article:
Recently, Musk did comment that FSD Beta Version 9.2 is "not great." He also said 9.3 is "much improved." Not long after those comments, Musk made it clear that there will no longer be a 9.3, or any incremental point upgrade coming soon. Instead, he says Tesla is planning to jump straight to Version 10, which Musk indicated will roll out next week.
 
Perfect example of the advancements:
Is there a pre-9.2 video of this exact drive that we can use to identify improvements?

Looking at a one-off video of a specific drive doesn't tell you much about progress without a benchmark to compare against; that's Progress Measurement 101. Maybe the system is performing this drive better on 9.2 than on the last few versions, but we have little idea without seeing the same route on a recent iteration. Even then, changes in traffic conditions etc. can make it difficult to quantify progress, but at least it provides a reasonable basis for comparison.
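As a toy example of what a like-for-like benchmark could look like (route name and numbers are invented), you'd want the same route scored the same way across builds:

# Toy illustration of the benchmarking point above: the same route driven on
# successive builds, scored by interventions per 10 miles. Numbers are made up;
# the point is that only like-for-like comparison shows a trend.
route = "unprotected-left benchmark loop"   # hypothetical benchmark route
runs = {
    "v9.0": {"miles": 12.0, "interventions": 6},
    "v9.1": {"miles": 12.0, "interventions": 5},
    "v9.2": {"miles": 12.0, "interventions": 5},
}

for version, r in runs.items():
    rate = r["interventions"] / r["miles"] * 10
    print(f"{route} on {version}: {rate:.1f} interventions per 10 miles")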

Frenchie and Chuck do comparisons like this, and there may be some incremental improvements but there are major issues that persist and that, by all appearances, will persist for the foreseeable future. I'm only saying to keep your expectations in check, and it seems like most people here are already doing that. Major issues in one of humanity's greatest challenges will not be solved in a matter of weeks.
 
Douma disagreed with green on WHY there was no longer redundancy due to borrowing compute from Node B; he did not disagree that that was the current state of things, though.

Which probably puts him in the "they will maybe fix it later on" camp, but as things stand now there is no redundancy.
Well, for the purposes of whether HW4 is required, we only really care whether they spilled over because they ran out of capacity, not whether it was for other reasons.
Except there's no evidence they can do that.

Remember, in the production code the only thing they're using NNs for is perception right now (Green reconfirmed that just yesterday).

And that's split across both sides.

If one side fails, you lose perception. How do you "fail safely" at that point?
If you lose perception from only a subset of cameras, or only for a subset of perception functions, it'll still be fine. There only needs to be enough left to either stop the car in its own lane or pull it to the shoulder if there is one.
I doubt the spillover means you necessarily lose everything at once just from the spillover part shutting down.
They need the perception stack running fully on both sides to be able to do that.

Which, if they could do that, they wouldn't be splitting it between sides.

(And it's not like, when one side crashes, they can then decide to spin up a bunch of extra NNs on the other side to take over perception anyway; it's too late by then.)
They don't have to spin anything up; just like in the fully redundant case, the other side would already be running. Instead of running all the functions, it would be running a minimal-risk watchdog that can take over as soon as it detects that the other node has shut down.
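Here's a minimal sketch of that watchdog idea, assuming the split described above. The structure, names, and timeout are all invented; it only shows that the surviving node doesn't need full perception, just enough to trigger a minimal-risk maneuver:

# Invented sketch of the "minimal-risk watchdog" idea: Node B monitors a
# heartbeat from Node A and, if it goes silent, switches to a reduced mode
# whose only job is to bring the car to a safe stop. Illustration only,
# not how Tesla's firmware actually works.
import time

class NodeBWatchdog:
    HEARTBEAT_TIMEOUT_S = 0.2          # assumed timeout, not a real spec

    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def step(self):
        if time.monotonic() - self.last_heartbeat > self.HEARTBEAT_TIMEOUT_S:
            return self.minimal_risk_maneuver()
        return "normal_operation"

    def minimal_risk_maneuver(self):
        # Reduced function: enough perception to hold the lane and stop,
        # or pull to the shoulder if one is detected.
        return "stop_in_lane_or_pull_to_shoulder"

wd = NodeBWatchdog()
wd.on_heartbeat()
print(wd.step())                       # normal_operation
time.sleep(0.3)                        # simulate Node A going silent
print(wd.step())                       # stop_in_lane_or_pull_to_shoulder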
The fact that HW3 could survive a failure of one side was one of the major things they hyped about it at Autonomy Day.
Again, with what I'm suggesting, it can survive a failure of one side; it's just that instead of continuing with full function, it'll have partial function (sufficient to bring the car to a safe stop).
So, other than the idea that Tesla is just writing terrible, massively bloated code that they'll somehow be able to add a ton MORE capability to while also massively shrinking the compute it needs, I don't see how you get above L3 (or even L2, really) without HW4 (if that's even enough, since they don't actually know until they solve it).
L2/L3 does not require redundancy; they will really only run into the problem at L4/L5. Presuming they have run out of capacity on one node, they'll have to weigh reworking the software to fit things on HW3 against doing another retrofit.
 
The Great Camera Debate
I really don't think that there's much of a debate at this point. I got criticized both here and on r/Tesla when I echoed earlier comments by others regarding the absolute necessity of having an additional camera or cameras in the nose of the car. I based my conclusion on watching your videos. I also pointed out that, based on your videos, the car has only 3 to 4 seconds at most to react and execute an unprotected left on a busy highway. Since then, I have been favoring adding side-facing cameras to the headlight assemblies (not my idea). The headlights could also be made with their own cleaning mechanism, which is not uncommon in luxury vehicles.
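For anyone who wants to sanity-check that 3-4 second figure, here's the back-of-envelope math, assuming roughly 55 mph cross traffic (my assumption, not from the videos):

# Back-of-envelope check of the 3-4 second window mentioned above,
# assuming 55 mph cross traffic (an assumption for illustration).
mph_to_ms = 0.44704
cross_traffic_speed = 55 * mph_to_ms            # ~24.6 m/s

for window_s in (3, 4):
    sight_distance = cross_traffic_speed * window_s
    print(f"{window_s} s window -> needs ~{sight_distance:.0f} m of clear view down the cross road")
# 3 s -> ~74 m, 4 s -> ~98 m: why cameras that can see far down the cross
# street matter for unprotected lefts.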

Anyway, the FSD beta will probably go through a few more internal iterations and then some limited version of it will be released widely. Most users will embrace it.

Note for context: I have not described my expertise here previously because doing so tends to stifle debate. But I think that I have a good concept of what's needed to get to higher levels of autonomous driving. I have a PhD in engineering, with my specialty being simulation modeling. But that doesn't put me even within 100 km of the Tesla research team with regard to expertise. BTW, I don't fly tubeliners like you do, but I have had a PPL for 35+ years, so I understand the limited transference from aircraft AI to vehicle AI. I also get your fascination with the FSD beta. I guess that the one main difference between land and air is ATC. I can't decide what the equivalent mechanism would be for vehicles, unless it's like how interventions can be managed centrally for Waymo vehicles.
 
I really don't think that there's much of a debate at this point.
I'll still debate it. :p
How is the B-pillar placement different from having a hood that's 2 feet longer?

Anyways, it seems like placing the cameras in the driver-side mirror might work, side facing and forward facing (it seems like not being able to see around vehicles is a major issue). I think cameras that see more than the driver might actually be bad: if the driver thinks that the car can see more than they do, how will they know when to intervene?
 
Effective stitching requires a LOT of overlap. That's my main beef with the limited cameras - and placement - on Teslas.

Stitching static images with cheap, distorted lenses (like these cams have) is hard when the overlap is minimal. Add speed and bad weather to that, and it's impossible to have good coverage that updates as fast as safety would require and doesn't need lots of fixups to blend the images together properly.
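You can see the overlap requirement for yourself with OpenCV's generic stitcher (not Tesla's pipeline, just a convenient demo): with heavily overlapping frames it produces a panorama, and with minimal overlap it gives up.

# Demo of the overlap requirement using OpenCV's generic stitcher.
# File names are placeholders; substitute two frames from adjacent cameras.
import cv2

frames = [cv2.imread("cam_left.jpg"), cv2.imread("cam_front.jpg")]
if any(f is None for f in frames):
    raise SystemExit("put two overlapping frames next to this script first")

stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(frames)

if status == 0:            # 0 == cv2.Stitcher_OK
    cv2.imwrite("stitched.jpg", panorama)
else:
    # Usually status 1 (ERR_NEED_MORE_IMGS): not enough overlapping features
    # to register the views, which is exactly the complaint above.
    print(f"stitching failed, status={status}")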
 
Effective stitching requires a LOT of overlap.
The recent AI Day presentation demonstrated that they do have cross-camera object recognition (recognizing objects that straddle two cameras) instead of relying on stitching.
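A crude way to picture the difference: instead of stitching pixels, each camera's evidence gets projected into one shared top-down grid and detection runs on the fused grid. Everything below (grid size, the fake projection, the threshold "detector") is invented for illustration and is nothing like the real network.

# Toy illustration of cross-camera fusion vs. pixel stitching. An object that
# straddles two cameras is only weakly visible to each one, but the fused
# top-down grid sees it clearly. All numbers are invented.
import numpy as np

grid = np.zeros((20, 20))                      # shared top-down feature grid

def project(camera_hits, grid):
    # Stand-in for the real camera-to-ground projection.
    for (row, col), score in camera_hits:
        grid[row, col] += score
    return grid

# The same pedestrian straddles two cameras, so each sees only part of it
# and neither is confident on its own:
front_cam  = [((10, 9), 0.3)]
pillar_cam = [((10, 9), 0.4)]

for hits in (front_cam, pillar_cam):
    grid = project(hits, grid)

detections = np.argwhere(grid > 0.5)           # trivial "detector"
print(detections)   # [[10  9]] -- found only because both cameras' evidence was fused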
 