FSD Beta Videos (and questions for FSD Beta drivers)

Interesting; it doesn't seem like the black sedan slowed down at all, though, so I don't think the squealing was from it braking. Plus, with ABS being standard nowadays, car tires shouldn't be squealing under braking.

But yes, that particular part was risky; the beta should have waited longer before creeping out toward the right lane.
Yeah, I'm thinking the squeal was from a quick swerve.
 
Interesting; it doesn't seem like the black sedan slowed down at all, though, so I don't think the squealing was from it braking. […]
It was the FedEx truck; they probably don't have ABS (and if they do, it's probably not as advanced as modern four-channel systems).
 

Tesla needs to take FSD away from Gali. He's a huge fan of the company and a huge promoter of the stock, but he's going to be the first guy in an FSD accident if they let him keep doing what he's doing. Stop saying "This is dope" and monitor the car. It's nice to see the car won't actually hit the pillars, but later on he let the car turn right into the opposite lane and had to dodge past some of those plastic pylons to get to the correct side of the road. His definition of success is also pretty iffy: the car didn't hit the pillars, but its handling of them was wonky and would, at best, be extremely confusing to everyone else on the road, and I would also assume it's illegal to change lanes between the pillars, but I don't know for sure.
I have to admit he has courage 😂
 
Several impressive maneuvers, but it almost causes an accident at 25:37. While waiting to merge from a lane blocked by cones, it starts to go too soon, with cars still travelling in the lane to its left, and seems to cause the black sedan to squeal its tires to get out of the way. The Tesla driver takes over with the brake and wheel. Amazingly, the black car did not honk.

We could also mention that the car put itself into that situation by erroneously changing into a closed lane, long after a human driver would have recognized it was closed and never attempted the change.

In a half-hour video where the car is going 15-20 mph for most of it, with frequent stops, how many miles would that have racked up? 10 miles, maybe?
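Back-of-the-envelope, using only the rough figures above (about 30 minutes at a 15-20 mph average), so this is illustrative, not measured:

```python
# Rough mileage estimate for the video, using the post's own figures:
# ~30 minutes of driving at a 15-20 mph average (illustrative, not measured).
duration_hours = 0.5
for speed_mph in (15, 20):
    miles = speed_mph * duration_hours
    print(f"{speed_mph} mph for {duration_hours} h -> {miles} miles")
# 15 mph -> 7.5 miles, 20 mph -> 10.0 miles: "10 miles maybe" is about right
```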

I don't find the drive particularly impressive. Navigating narrow roads and such is neat, but the video is mostly very low speed (I think Whole Mars Blog is manually adjusting the speed down in spots, too), with simple, low-traffic scenarios. The few instances with actual traffic are where the car exhibits some strange behaviours.
 
With Tesla we have no real data
This is the biggest problem with comparisons. Without the data from Tesla, the optimistic people see Tesla as having nearly completely solved the problem (two weeks away!), while the pessimistic people see a problem every mile.

Tesla needs to start publishing actual data so a real comparison can be made with other autonomous companies.
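To make that concrete, here's a minimal sketch of what a like-for-like comparison could look like if the numbers existed; every figure below is invented for illustration, and the empty Tesla entry is exactly the complaint:

```python
# Illustration only: what a like-for-like comparison could look like if
# everyone published miles and disengagement counts. Every number below
# is invented; Tesla publishes no such dataset, which is the point.
reports = {
    "Company A": {"miles": 1_000_000, "disengagements": 50},
    "Company B": {"miles": 250_000, "disengagements": 400},
    "Tesla FSD Beta": None,  # nothing published
}
for name, r in reports.items():
    if r is None:
        print(f"{name}: no published data")
    else:
        print(f"{name}: {r['miles'] / r['disengagements']:,.0f} miles per disengagement")
```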
 
Why do you say that?

Tesla has large NN models that can extract more information from cameras, but they run those models on their clusters rather than in the car.

It's possible that Tesla won't be able to achieve L4/L5 with the current HW3, and it might be forever stuck at L2 (because it'll only be 2-3x better than human safety). It's going to be difficult to get to 10x human safety with HW3, because of how large the NN models need to be to consider all the edge cases.
 
Their only limit is HW3's compute.
Is that speculation or an informed guess?

This is insane. After watching the Whole Mars SF video above, I'm 100% sure Tesla has solved the technical foundations and software workflow for Level 5.
I don't know about L5, but I'm fairly impressed with the various videos. Unless they are editing out a lot of mistakes, it looks pretty good to me, and much better than the videos I watched when the FSD beta first came out. We are definitely getting close to "button" time.
 
We are definitely getting close to "button" time.
...Until Tesla folds in the highway stack (which Elon says is currently behind the production highway code), supposedly in 10.1, and then introduces a whole new set of problems that block the release of the "button" for another 11 months.
For a very long time there will be a shiny new object improvement for Elon to use as the excuse why it hasn't gone wider. Just one more tweak. Just two more weeks.

I mean, what are they actually waiting for? What metric do they use? It's L2, it's always the driver's fault, right? This code makes mistakes. V9 made mistakes. What is the threshold for giving it to more people? The current beta testers are already driving around without their hands on the wheel, letting the car go very far into dangerous situations. Will the average public be worse? Will the average public be better if the code gets better, giving them *more* confidence in a system which will fail 1% of the time?

I heard from supporters that V9 was really close to being ready. Now I hear V10 is close, too. What needs to be fixed?
 
I mean, what are they actually waiting for? What metric do they use? […]
This is what I'm dying to know. The only thing that makes sense to me is that they're not sure if it's safe, so they're starting with a small group of people (like the phases of a drug trial).
 
The only thing that makes sense to me is that they're not sure if it's safe so they're starting with a small group of people (like the phases of a drug trial).
Yes, but you can't use this process when you change the product constantly. The FDA doesn't allow you to completely reformulate your drug, claim that all the data from V9 applies to V10, and then use the V10 data as the rationale to release the reformulated V10.1.

You only do that when what you have is stable. "FSD" is not going to be stable for years. The only thing they can really find out is whether humans are a good backup to very unstable systems. I keep hearing the FSD beta has no accidents, so apparently they are? But we also have all that data from the L2 highway code, and Tesla claims this is safer than a human alone. How can an L2 system be unsafe?

I mean, the next release is supposed to completely change the highway stack, the one that has been stable for years and that we have actual statistics on. It's getting less stable, not more stable. This is the exact moment when you are furthest from knowing how safe it will be in public, which you would think means we're not about to go wider if they are safety first, yet Elon is saying we're still close.

So again, why are they waiting? Could it be a non-safety issue, like they know people won't be impressed by what they get after 3 years of waiting and $10K? That the PR is actually better with 10 people making videos everyone else just watches, rather than having people experience it themselves? Or that the system only works in narrow geofenced areas, and the NDA doesn't allow people to discuss that? I mean, not releasing it makes all these questions valid.

This is not a company that has taken stability and safety with AP seriously in the past. Tesla has $1B+ of customer money for this feature. It's L2, and safety is not an issue; we'll blame the driver in all cases. What's the holdup?
 
It's a very informed guess that Tesla will need HW4 (or more) to offer anything above L2, because the current system, which obviously still needs improvement, is already using so much compute that the originally intended dual-node redundancy is out the window.
You are making a lot of assumptions. Optimization takes time.

Let me put it another way: only people who are industry experts (i.e., in the autonomous vehicle industry) probably know what it takes to get to a generic L3/L4 level. Even they may not understand what it takes to get to that level with vision only (most experts are on the lidar side). So, unless you are one of those in the industry, or an academic who is an expert on A.V.s... it's all speculation at this point.

There is another twist: SMEs in the industry are unfortunately biased. They either work for a competitor of Tesla or for Tesla itself.

PS: No, a certain "senior software engineer" who took a course on CNNs and has 5 years of IT experience is not an A.V. SME.
 
During AI Day, I remember Elon saying he's confident HW3 will get to 2-3x human safety. To me, that's not enough to enable L4+.
Actually, a 2x to 3x human safety level (i.e., a reportable accident every 20 to 30 years) is good enough for me. In fact, we are all doing just fine and are completely happy to be driving around at ~1x human safety level.
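As a back-of-envelope check of that parenthetical (the ~10-year baseline below is implied by the post's own numbers, not an official statistic):

```python
# Back-of-envelope: the parenthetical implies a 1x-human baseline of
# roughly one reportable accident per 10 years (implied by the post's
# own numbers, not an official statistic); a safety multiple just
# stretches that interval proportionally.
baseline_years = 10
for multiple in (1, 2, 3):
    print(f"{multiple}x human safety -> one reportable accident per "
          f"{multiple * baseline_years} years")
```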

BTW, I don't remember the quote. I'd need to see the exact wording and context.
 
You are making a lot of assumptions. Optimization takes time. […]

We know for a fact the regular PRODUCTION code doesn't have enough compute to enable full redundancy, let alone run all the extra city-streets stuff needed just to get to where the FSD Beta videos are today.


Sure, you can SPECULATE that they'll magically be able to add a bunch MORE capabilities and somehow also magically reduce compute usage a TON with "optimization", enough to squeeze ALL of that back into the single node it overflowed out of before the first FSD Beta came out.

But that seems a less likely outcome than "needing at least HW4" does.


Honestly, a bigger question to me right now is whether they run out of compute on BOTH nodes of HW3 before they solve this or not.

Because if they do, they're royally screwed for much further progress until HW4 is going into cars, which sounded like late 2022 last I heard.

Whereas if HW3 has enough compute across both nodes to solve it, they can keep moving forward to that, then just give anyone wanting >L2 an upgrade to get redundancy back.



In fact, we are all doing just fine and are completely happy to be driving around at ~1x human safety level.

If that were true, we wouldn't be mandating things like ABS, AEB, etc.



How do you know they will not need more cameras or sensors along with HW4 to achieve L4?

Which part of what I wrote, in any way, suggested any claim to my knowing that? I literally didn't even mention sensors or cameras in the post, since its actual topic was on-board compute, not sensors.
 
I remember Elon saying he's confident HW3 will get to 2-3x human safety. To me, that's not enough to enable L4+.
Elon has been confident about a lot of things that are not true. He was confident HW2 would be good enough too. FSD was "definitely" only 6 months away in 2017. He's surprised "the button" is later than June. He talked about radar as important until he didn't. Why would we take his "confidence" as any worthwhile data point?

When Elon quotes safety targets, he usually quotes 1:1B miles (via "nines"), which is >10X as safe as a human overall, although only about 3X as safe on the highway.
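For anyone wanting to see where those multiples could come from, here's a rough sketch; the human baselines are my illustrative assumptions (roughly one fatal crash per 100M miles overall, and about 3x better than that per mile on highways), not numbers from the post:

```python
# Rough check of the multiples above. The 1-in-1B-miles target is from
# the post; the human baselines are illustrative assumptions (roughly one
# fatal crash per 100M miles overall, and ~3x better per mile on highways).
TARGET_MILES_PER_INCIDENT = 1_000_000_000  # the "nines" target quoted above
human_miles_per_incident = {
    "overall": 100_000_000,  # assumed baseline
    "highway": 300_000_000,  # assumed baseline
}
for setting, baseline in human_miles_per_incident.items():
    print(f"{setting}: {TARGET_MILES_PER_INCIDENT / baseline:.0f}x as safe as a human")
# overall: 10x, highway: 3x
```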
 
If that were true, we wouldn't be mandating things like ABS, AEB, etc.
And the irony of those: we're requiring features that take over when the human fails, because the real data is that accidents happen when humans get distracted, overwhelmed, or run out of skill, and these systems help in those cases.

Meanwhile, Tesla seems to be trying to improve safety by developing L2 systems that only function when humans are paying attention and make no attempt to back up human failures. The only safety argument I ever hear for Tesla's current AP features is that they make you more relaxed/less tired, which completely ignores the tradeoff that more relaxed people tend to be less alert.
 