Brad Templeton's F rating of Tesla's FSD

What class of prototype are you using in your post? Beta software is usually considered to be a class of prototype so it appears you are creating a semantic argument.
No, a beta is a product planned for release which has completed internal testing to the point that external users are needed to find more bugs, and possibly to get some feedback for modest feature changes.

A prototype is an experimental product used to learn things to build the production product. Tesla has declared that it has already done several "complete rewrites" of FSD or components of it, and will probably do more. That only happens with prototypes, not betas.

The stages are prototype, prototype 2, ..., alpha, dogfood (at big companies with many employees not on the project), beta, and then production.
 
The semantics are not at all standardized or agreed upon, especially when talking about new or not-yet-existing industries.
 
IIRC, Microsoft had in-house vs. public betas, as others may well do. There doesn't seem to be a definitive use of the terms, but what might be termed an in-house beta could be well away from a public beta release.

Either way, I think it's hard to see how FSD Beta is really ready for public beta release. It may be that some of the issues are down to NN behaviour and human appreciation of it. With technology that is reaching well into virgin territory, I guess there are going to be some quite big issues that at least partly need real-world exposure to learn about.

Maybe current FSD is a Betamax technology. Although Betamax did at least work.
 
It's hard to compare the FSD Beta to anything else I've ever encountered in my entire 25+ year career.

The closest is a customer beta, where you need to get feedback and additional data to make improvements, and where you select the customers best suited to that. In a lot of ways the FSD Beta is exactly this, but then they ignored the very customers they picked.

FSD Beta testing requires a lot of work, but there is no real reward for any of it.

The lack of any kind of response from Tesla to any inquiry/question/comment makes the entire thing feel much different than a Beta.

There are numerous bugs within not just the FSD Beta, but within the general firmware.

I'd give the FSD Beta an F rating, but that isn't necessarily a rating of its driving so much as of the FSD Beta program itself.

The lack of communication from Tesla is resulting in a lot of wasted time and effort.

The biggest waste of time and effort is the navigation/maps issues. Most of us have a list of places where FSD Beta isn't working well because of the navigation issues, but there is no way to report them to Tesla.

This should be a really exciting time of a lot of customer engagement, and daily improvements to maps/navigation. But, it's not really all that exciting and I don't feel particularly motivated to write reports or even push the button when it screws up.

The only excitement is before an important release, followed by disappointment when it's mostly the same as it was before.

It would be entirely different if they engaged with the customer.

Lots of people love the satisfaction of helping to fix something. They'll continue to work for free because that satisfaction is enough, or because the usefulness of the fix made the experience nicer.
 
This should be a really exciting time of a lot of customer engagement, and daily improvements to maps/navigation. But, it's not really all that exciting and I don't feel particularly motivated to write reports or even push the button

people love the satisfaction of helping

I agree. I thought/hoped Tesla would treat their public Beta testers as a valued and trusted resource. It does feel like one of the main driving forces in Tesla's public interaction is to generate hype/clicks/hits.
 
100% agree with your frustration at the lack of feedback, whether case-specific or even general ("we received X reports, processed Y hours of video, the impact to the product was Z"). Heck, the release notes are so cryptic nobody knows what they mean until some folks on here spend some time/bandwidth to make a few informed guesses.

Correct me if I am wrong, but when Apple first rolled out its public beta program (~iOS 10?) I remember the bug-reporting app, and I used it regularly. I cannot for the life of me recall ever actually getting any feedback on that either. Please correct me if I am wrong.

Granted, short of a runaway battery in my pocket while sitting on an airplane, the stakes were a lot lower and so the give-a-*sugar* factor was also a lot lower.
 
Good news. Today Elon said FSD will be ready for everyone in 2022.
 
Correct me if I am wrong, but when Apple first rolled out its public beta program (~iOS 10?) I remember the bug-reporting app, and I used it regularly. I cannot for the life of me recall ever actually getting any feedback on that either. Please correct me if I am wrong.
My experience with Apple is mostly with reporting Apple Maps issues, and every single one that I've reported was fixed rather quickly. The nice thing is they acknowledge receiving it, and then you get an acknowledgement that it was fixed.

Usually beta SW products will have some kind of bug tracker to avoid duplication, and to prioritize serious bugs.

Most of my experience isn't with beta testing; it's more that I tend to find issues with released products.

A lot of it comes down to the fact that I'm into home automation and health-monitoring devices. Let's just say there are a lot of things that break, and things that simply don't work that well. IoT is basically the land of "release now, and we'll fix it later".

But at least all of those companies respond to my emails and attempt to help.

When I reported to Whoop that their product couldn't track my HR during high-intensity workouts the way the Apple Watch or highly regarded chest straps could, they were willing to give me a free subscription during the months it took to report tons of comparison data to them. Eventually I stopped and returned it, because that was just way too much working out. I replaced it with an Oura, which at the time didn't even bother tracking high-intensity HR and instead simply imported it. That worked nicely, but they now claim the new one (which I got) does do it, so I get to go through all that again even though I know it's not going to work.
 
When you look at the release notes, and you see that "fixes" are expressed by a percentage change toward correct behavior, it's difficult to "close a bug," because it's still there, just maybe occurring less frequently. This is the main reason you can't compare with other beta testing where bugs are typically 100% fixable in one shot, and they can contact the reporter to let them know the issue is resolved.

If Tesla uses our snapshots to decrease phantom braking by 5% for example, who exactly should they reply to? If the car now stops at the stop sign line instead of 20ft back, and this issue was reported by 20k testers, multiple times, why would they respond to those 20k users instead of just noting the improvement in the next release notes? I think we should stop having some expectation that Tesla should be responding to us. Even if you have a very reproducible and serious problem, it's not something they can just 100% fix.
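To make that contrast concrete, here's a minimal sketch (the class names, numbers, and the whole layout are my own illustration, not anything Tesla actually uses) of the difference between a conventional, closable bug and a rate-based issue that a release can only nudge downward:

Code:
from dataclasses import dataclass

@dataclass
class BinaryBug:
    """Conventional beta bug: it's either open or fixed, and the reporter can be told."""
    title: str
    fixed: bool = False

@dataclass
class RateIssue:
    """FSD-style issue: a failure rate that a release shifts, but never simply 'closes'."""
    title: str
    failures_per_1000_miles: float

    def apply_improvement(self, percent: float) -> None:
        # e.g. release notes claim "phantom braking reduced by 5%"
        self.failures_per_1000_miles *= (1.0 - percent / 100.0)

issue = RateIssue("phantom braking", failures_per_1000_miles=4.0)
issue.apply_improvement(5.0)   # still happens, just a bit less often
# ~3.8 failures per 1000 miles remain: nothing to close, nobody to notify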
 
The semantics are not at all standardized or agreed upon, especially when talking about new or not-yet-existing industries.
I agree that people play fast and loose with the term, especially testing. But having managed the release of over a dozen software products and been part of many more, I know what it's supposed to mean. It did originate in a simpler time, before the internet, and so yes, software testing methodologies have changed. But not that much.

Of course the name comes from having two stages of software testing -- alpha and beta, followed by release. You did your alpha until you weren't finding bugs very fast, then you let customers at it. But yes, you also let customers at it to get feedback on usability and features, though usually that was supposed to be done much earlier in the design phase, not the testing phase. Those used to come in order.

We've seen the term changed the other way, too, with "GMail Beta" having 100 million users. Google finally relented and accepted it was in production. They put it in perpetual beta to say, "we don't know if we'll ever get this stable."

Prototypes are still well defined. Experimental versions, meant to be replaced before production. Which is what FSD is, and Tesla has declared "complete rewrites" a few times. And it's generally accepted this is not a beta. A beta is an early version of a planned production release that's good enough to let the public try, and from which their bugfinding will be useful.
 
A proper car is ready for its map to be wrong, and it's important it be able to understand if it's wrong and how it's wrong. That's why people like high detail maps, as it's easy to see if the world has changed from the map.
Since you say vehicles today sometimes can't tell what lanes are really there, then how do they know when to trust what they're seeing over what the map says?

The only possible answer lies in the neural net's confidence score. If the car is very uncertain about what it's seeing, then perhaps it should trust a map that's less than a week old. OTOH, if the car is very sure, then it should trust what it's seeing instead. And then there's some middle ground where the car is pretty sure and the map is kinda old....

At any rate the "easy" characterization would appear to be false, at least in some circumstances. Input fusion is a complicated process when there's any disagreement on the inputs.
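A minimal sketch of what that arbitration might look like, purely as an assumption on my part (the thresholds, the seven-day cutoff, and the function name are invented for illustration, not anything from a real stack):

Code:
from datetime import timedelta

def choose_lane_source(nn_confidence: float, map_age: timedelta) -> str:
    """Decide whether to act on live perception or on the stored map."""
    if nn_confidence >= 0.90:
        return "sensors"                    # the car is very sure of what it sees
    if nn_confidence <= 0.40 and map_age < timedelta(days=7):
        return "map"                        # the car is guessing; a fresh map wins
    return "sensors_with_map_prior"         # middle ground: blend, act cautiously

print(choose_lane_source(0.95, timedelta(days=30)))  # -> sensors
print(choose_lane_source(0.30, timedelta(days=2)))   # -> map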
 
A car will rarely trust a map over its sensors. (The map might say, "This situation is confusing to your sensors; if your sensors show X, we assure you it's Y.") It's key that the map tells the car what the sensors will see if the road is as mapped. That's why you need details in the map, and Elon doesn't seem to get that. If the map is out of date, of course trust the sensors. If the map is valid, then trust it, because it was made with more data than you have, by computers smarter than yours, which had all the CPU time and memory of the cloud to figure out this situation, and which saw every part of the road up close from multiple directions, while you only see it from a distance. And in many cases the map was drawn with better sensors than you have, as well as from different angles and distances, including aerial views and stuff you could never know, because you are just one car.
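One way to read "the map tells the car what the sensors will see" is as a consistency check: if the live detections match the mapped layout, the map's extra detail can be used; if not, the map is treated as stale. A rough sketch, where the data layout and the 0.5 m tolerance are my own assumptions:

Code:
from typing import List

def map_matches_observation(mapped_lane_centers_m: List[float],
                            detected_lane_centers_m: List[float],
                            tolerance_m: float = 0.5) -> bool:
    """Return True if live lane detections agree with the mapped lane layout."""
    if len(mapped_lane_centers_m) != len(detected_lane_centers_m):
        return False   # lane count changed: the map is out of date here
    return all(abs(m - d) <= tolerance_m
               for m, d in zip(mapped_lane_centers_m, detected_lane_centers_m))

# If the check passes, the richer mapped detail (merge points, turn lanes) can be
# trusted; if it fails, fall back to sensors and flag the segment for re-mapping.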
 
There are many contradictions in this. First, the car sensors win over the map. But, then trust the map because it was made with more data and more CPU. But, then maybe the map is out of date. But then the map was made with better sensors and angles (from non-frequently visiting sources).

And then the comment that somehow the map knows to "say" something to the car about the car's sensors being confused? A-what? Please show me an HD map that has "confusing to vehicle sensor" as metadata for a road segment. Yeah, not going to happen.

It's all so contradictory that what is described cannot be programmed. And to top it off, you completely ignored the neural net confidence score, which is the actual important aspect, as I started to describe. This score is really the only opportunity for the car to trust the map more than its own sensors. Ever.

If we were to break down FSD failures (assuming we could agree on a proper definition of "failure"), we might see failures in lane assignment determination, but then also potential failures in traffic light state recognition, construction sign recognition, dynamic actor recognition (other cars, pedestrians, motorcyclists, bicycles, animals, road workers, etc.), and there are surely even more aspects of total scene recognition to handle. But, of all of these, only lane assignment determination could be supplied by a map (other map-suppliable items would be lane availability and speed limit, for instance). BTW, all this is without considering driving policy failures, which are separate and subject to their own list of potential bugs/failures.

I agree that there are situations where advance knowledge of available lanes and lane assignment is useful. Your route was no doubt intended to hit on those pretty hard - the shifting from the left lane to the right lane (and back again?) of which lane goes straight, for instance, or the need to make a quick left-hand turn after making a right-hand turn. Knowing what to expect from the road ahead can only be helpful, but at the end of the day, the car has to trust what it's seeing over what the map describes. Map outdatedness is the easy kill-factor here, as no map will describe a fallen tree or power pole, or the utility person holding a stop sign or putting up a "Lane Closed" sign a second ago, or a construction-lowered speed limit sign.

But maps can be wrong in other ways as well. Mobileye's REM system, for instance, does not use "computers smarter than yours" for obtaining map data. As Shashua describes, no images or video are uploaded to ME's cloud, only what vehicles have themselves computed on-board and summarized in 10 kB/km chunks. Maybe ME is clever and assigns different vehicles to report different aspects of the same road segment, but even so the result is at best the proverbial blindfolded people each feeling a different part of the same elephant.
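For illustration only, here's a guess at the general shape of such a distilled per-segment summary (the field names and contents are invented; Mobileye hasn't published REM's actual schema beyond the roughly 10 kB/km figure):

Code:
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RoadSegmentSummary:
    segment_id: int
    lane_count: int
    lane_center_polylines: List[List[Tuple[float, float]]]   # sparse (x, y) points
    sign_positions: List[Tuple[float, float]] = field(default_factory=list)
    observed_at_unix: int = 0   # staleness matters on lightly travelled roads

# Each contributing car uploads only a summary like this, so the cloud map can
# only ever be as complete as what the on-board computer managed to extract.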

And even if such a system could get it right, that might be fine in Munich or Paris or New York, but what about Peoria, Ill., or even some of the roads in the Santa Cruz Mountains above Silicon Valley? Those roads get a lot less traffic, and the prior vehicle might have passed more than several minutes earlier. Or Hwy 1 in Big Sur, where there are miles and miles of road with no cellular connectivity?

At best, the map is a declaration of what was true at some point in time. The vehicle can use that to help with anticipation that makes future maneuvers smoother - but that must be subject to what the vehicle itself is seeing. Given sufficient neural net confidence scores, the vehicle should trust what it is seeing over what the map is telling it was there. And there are many dynamic factors for which no map can ever provide anything of use, as listed above, and so the vehicle sensors must almost always be trusted.

It would be an interesting exercise to come up with situations where the vehicle should trust the map over what it's seeing. Unless the confidence score is very low, I can't see that ever being the safer choice.

Maps are not the be-all and end-all for autonomous driving. At best I think they are helpful as predictive elements to avoid "Sunday Driver" behavior, which is useful.
 
I think they are helpful as predictive elements to avoid "Sunday Driver" behavior, which is useful.

Yes. In the same way, data from other cars could contribute, if available. But you can only have one ultimate authority.

the car sensors win over the map. But, then trust the map because it was made with more data and more CPU. But, then maybe the map is out of date. But then the map was made with better sensors and angles

And the quality of these inputs could change depending on multiple factors.

What is sure is that where there is a disagreement that would lead to a significantly different course of action, a better authority is needed.
 
Nobody said maps are the be-all and end-all. They help you understand the road, especially the parts you can't see very well yet. They help you plan and avoid mistakes. In fact, they are essential for that today, though some hope there is a future where they are not. Used properly, they will help you and never hurt you, and they need cost nothing, or very little. They are a no-brainer. Why forget what you and your cousin cars have seen when they drove the road before? No human would do that. On Saturday, my Tesla in FSD would have crashed had I not intervened, and that would not have happened if it had a map.

REM does use "computers smarter than yours." The distillation is done on the cars, but the results are uploaded and combined in the cloud. I agree ME tries to trim the data too much to save bandwidth. That's not needed; bandwidth is not expensive.
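As a toy illustration of "distill on the car, combine in the cloud" (purely my assumption about the general shape of such a pipeline; Mobileye's actual aggregation method isn't public), the cloud side might merge several cars' reports for the same segment like this:

Code:
from statistics import median
from typing import Dict, List

def combine_reports(reports: List[Dict[str, float]]) -> Dict[str, float]:
    """Merge per-car lane-center estimates (metres from the road edge) by lane ID."""
    merged: Dict[str, List[float]] = {}
    for report in reports:
        for lane_id, center_m in report.items():
            merged.setdefault(lane_id, []).append(center_m)
    return {lane_id: median(values) for lane_id, values in merged.items()}

print(combine_reports([{"lane_1": 1.8}, {"lane_1": 1.9}, {"lane_1": 1.7}]))
# {'lane_1': 1.8} -- more passes by more cars beat any single car's view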

Around here, and soon everywhere, there's a Tesla going by every minute, or in sparse places every hour, on pretty much every road. And a Mobileye-equipped car going by even more often.

My situation on Saturday was the car not being able to see very well. The lane it was in was ending with a merge. With traffic, it could not see that. With a map, it would have known. Humans get stuck in this quick merge too; it's not a great road design. But you have to drive on the roads you are given.

See my upcoming video for more answers on this.
 
Used properly, they will help you and never hurt you, and they need cost nothing, or very little.

The "used properly" caveat is important here. It's easy as a Monday-morning quarterback to say when to use/not use the maps, it's harder in real-time while driving. We agree the maps are useful as a predictor of what is was there, but I believe the car has to always be prepared to override that. I also believe that if the map shows something is a certain way but the car can't confirm it, the car should not commit to acting on what the map shows. Again, we're down to NN confidence scores, which is trickier than our Monday-morning quarterbacking discussions.

And then the process comes down to whether knowing something might be there but hasn't been confirmed is more useful than simply waiting until the car itself determines what is there. And that may come down to whether advance knowledge can be fed into the neural net system to produce a confirmation result sooner or with higher confidence than the car figuring it out on its own. I'm not saying that's not possible, but since we don't know the details, we can't say for sure.
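If advance knowledge were fed in, one simple way to picture the effect is an odds-style blend of a map-derived prior with the network's own score. This is a hand-wavy sketch under my own assumptions (the blend and the numbers are illustrative, not any known design):

Code:
def fused_confidence(nn_score: float, map_prior: float) -> float:
    """Combine an NN detection score with a map prior (both strictly between 0 and 1)."""
    odds = (nn_score / (1 - nn_score)) * (map_prior / (1 - map_prior))
    return odds / (1 + odds)

# A weak detection of a mapped merge could be confirmed earlier than unaided:
print(fused_confidence(0.60, 0.90))  # ~0.93
print(fused_confidence(0.60, 0.50))  # ~0.60 -- essentially unchanged when the map is agnostic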


REM does use "computers smarter than yours." The distillation is done on the cars but they are uploaded and combined in the cloud. I agree ME tries to trim the data too much to save bandwidth. That's not needed, bandwidth is not expensive.
1) REM is merely compiling what the cars have said. The "distillation," as you put it, is the critical element, and for that Mobileye's use of the on-board computer is itself severely limiting. Garbage in, garbage out: it doesn't matter how great Mobileye's cloud computing is if it's fed stuff that's missing elements, or even wrong, because the on-board computer wasn't up to snuff. In this regard, Tesla's sending of data to the cloud for post-processing is superior.

2) Shashua understands the cost of bandwidth, and what OEMs are willing to pay, very well. Vehicle OEMs are loath to spend money to provide data to a Tier 1. Even the $1/month number Shashua mentioned is a high cost for OEMs that try to save pennies by cutting down the number of screws used to attach a panel. If Mobileye really wanted more data, they could pay the OEMs for the bandwidth ME-equipped vehicles use. If the bandwidth is as cheap as you say and the additional data as useful as you say, then why hasn't Mobileye done this?