
Brad Templeton's F rating of Tesla's FSD

At the risk of sounding like Elon, I'll just point out that your statement is obviously false because humans (and other animals) do exactly that.
This does sound like Elon and it is a flawed argument.

A good, experienced driver does NOT come to an intersection AS IF THEY HAD NEVER SEEN ONE BEFORE and navigate only on what they can see. Instead, the driver compares the intersection to a mental map of what intersections look like, finds the one that most closely matches what they see now and what they did successfully before, and uses that information, combined with what they see, to move smoothly through the intersection.

In the future, Teslas will do something similar, but they won't need to have been through the intersection previously; they will just look at the best available map.
 
Maps are a crutch in the sense that humans drive just fine without (necessarily) having a map of the area they are driving in. I know that, for me, I sometimes slow way down when navigating an unfamiliar place.

In any case, there have been a couple of posts indicating that FSD is doing the wrong thing because the map is wrong (where stop signs are, lights, posted and appropriate driving speeds). Using the map is a pragmatic approach to try to solve some of the problems right now. It could be argued it 'shouldn't be needed', but it could also be argued that 'it's just another input'. I'd say a human actually does normally drive with a map in their head. E.g., during the vast majority of my driving, I 'know' to a high degree what's coming around the next corner. And if I'm surprised because there's something outside of my expectations, I slow down and it grabs more of my attention. If it stays that way for many days, I update what I 'know' about that area.

I think map usage by cars is fine and even makes sense. But yes, the software will need to know how to cope when it perceives something that doesn't align well with its side-band knowledge of the area.
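
To make that 'just another input' idea concrete, here's a rough Python sketch of my own (not anything from Tesla's stack; the class names, fields, and thresholds are all made up): the map supplies a prior about the upcoming intersection, the cameras supply what is actually seen, and when the two disagree the planner slows down and trusts perception, much like the surprised human driver described above.

```python
# Hypothetical sketch (not Tesla's actual code): the map is one more input,
# and perception wins whenever the two disagree.
from dataclasses import dataclass

@dataclass
class IntersectionView:
    has_stop_sign: bool
    has_traffic_light: bool
    lane_count: int

def reconcile(perceived: IntersectionView, mapped: IntersectionView,
              cruise_speed: float) -> tuple[IntersectionView, float]:
    """Return the view to plan against and a target approach speed."""
    mismatches = sum([
        perceived.has_stop_sign != mapped.has_stop_sign,
        perceived.has_traffic_light != mapped.has_traffic_light,
        perceived.lane_count != mapped.lane_count,
    ])
    if mismatches == 0:
        # Map and cameras agree: use the map's extra detail at full speed.
        return mapped, cruise_speed
    # Map looks stale here: fall back to what the cameras see and slow down,
    # the same way a human driver reacts to an unexpected change.
    return perceived, cruise_speed * max(0.3, 1.0 - 0.3 * mismatches)

# Example: the map omits a newly installed stop sign.
seen = IntersectionView(has_stop_sign=True, has_traffic_light=False, lane_count=2)
stored = IntersectionView(has_stop_sign=False, has_traffic_light=False, lane_count=2)
view, speed = reconcile(seen, stored, cruise_speed=40.0)
print(view.has_stop_sign, speed)  # True 28.0
```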
There's a difference between the maps you are referring to, which Tesla uses, and the maps that @diplomat33 is referring to.
 
This does sound like Elon and it is a flawed argument.

A good, experienced driver does NOT come to an intersection AS IF THEY HAD NEVER SEEN ONE BEFORE and navigate only on what they can see. Instead, the driver compares the intersection to a mental map of what intersections look like, finds the one that most closely matches what they see now and what they did successfully before, and uses that information, combined with what they see, to move smoothly through the intersection.

In the future, Teslas will do something similar, but they won't need to have been through the intersection previously; they will just look at the best available map.
You don’t need a map for that.
 
You are using a strawman now as well. Nobody relies exclusively on HD maps. AVs like Waymo also use cameras, radar and lidar to drive. HD maps are just one part of their autonomous driving stack.

You said autonomous driving requires maps like Waymo uses. Which is it? Are they required or not?

If they are required then obviously they are relied on.
 
A good, experienced driver does NOT come to an intersection AS IF THEY HAD NEVER SEEN ONE BEFORE and navigate only on what they can see. Instead, the driver compares the intersection to a mental map of what intersections look like, finds the one that most closely matches what they see now and what they did successfully before, and uses that information, combined with what they see, to move smoothly through the intersection.

This is why I say that FSD Beta would work a lot better if it used HD maps. I've noticed the same behavior. It's like FSD Beta is driving the intersection for the first time every time. It is hesitant and jerks around like it is feeling its way through the turn for the first time. If it had an HD map, it would know what the intersection looks like and would be able to navigate it more smoothly and more reliably.
 
Waymo works up to 65 mph according to the ODD that Waymo published.

You do understand that what they wrote in a design document and what they allow the car to do in real life are two entirely different things, don't you?

Waymo has said so repeatedly. Here is one example:

[attached image: eqdysIf.jpg]

That doesn't say that it can operate without the maps when they're wrong.

You are using a strawman now as well. Nobody relies exclusively on HD maps.

No, I am not; you are simply creating a new straw man for yourself by inserting the word "exclusively" into what I said when I myself said absolutely no such thing.
 
What is surprising to me is how little response there has been here to this article. I guess that anyone who regularly uses FSD beta is aware of its flaws. And grading something as an F is just clickbait journalism. Think about it, most Forbes readers wouldn't even bother reading past the headline if FSD beta got anything but an F or an A+.

As others have stated, if FSD beta gets an F, then other approaches must be getting an I for incomplete. Those companies aren't attending class, forgot to turn in homework and went out partying rather than studying.
 
What is surprising to me is how little response there has been here to this article. I guess that anyone who regularly uses FSD beta is aware of its flaws. And grading something as an F is just clickbait journalism. Think about it, most Forbes readers wouldn't even bother reading past the headline if FSD beta got anything but an F or an A+.

As others have stated, if FSD beta gets an F, then other approaches must be getting an I for incomplete. Those companies aren't attending class, forgot to turn in homework and went out partying rather than studying.
Which was my point about grading on a curve….
 
This does sound like Elon and it is a flawed argument.

A good, experienced driver does NOT come to an intersection AS IF THEY HAD NEVER SEEN ONE BEFORE and navigate only on what they can see. Instead, the driver compares the intersection to a mental map of what intersections look like, finds the one that most closely matches what they see now and what they did successfully before, and uses that information, combined with what they see, to move smoothly through the intersection.

What you are saying is not at all inconsistent with what I (and probably Elon) are saying. Of course an FSD car should have a very good "mental" idea of what an intersection is and how intersections work. But what I am saying is that to get to true FSD, a car cannot rely on having an exact map of that particular intersection. Those are the kind of maps we're talking about: maps where the car has a predetermined record of exactly where the lights are in this particular intersection, exactly where the lanes are and what each lane is allowed to do, exactly where the crosswalks are, exactly where the bike lanes are, etc. Yes, having that kind of exact map of a specific intersection will make it easier to do something that looks like competent driving in that intersection.

But the problem is that intersections (and roads in general) change on the fly all the time. A human can see and adjust quickly when a lane that used to be able to go straight is now a turn-only lane. A human can adjust if the road they need to turn onto is blocked off because of an accident. But if an FSD car requires that kind of exact predetermined map to operate, it would not be able to handle these kinds of everyday situations. And if the FSD car doesn't require such maps and can still drive competently and safely like a human could, even when the predetermined map is wrong, then you don't need such exact predetermined maps at all.
 
You said autonomous driving requires maps like Waymo uses. Which is it? Are they required or not?

I meant they are required for safe driverless operation. Of course, you can do some autonomous driving without HD maps, but the autonomous driving will be less reliable. HD maps, when used correctly, greatly enhance the reliability and safety of autonomous driving. If you want autonomous driving that is reliable and safe enough to remove the driver, you need HD maps IMO.

You do understand that what they wrote in a design document and what they allow the car to do in real life are two entirely different things, don't you?

Sure. But are you saying that Waymo cars only drive at low speeds in real life? Because that is not true.

That doesn't say that it can operate without the maps when they're wrong.

Yes, it does say that:

We designed our self-driving system so it can navigate safely if something new or unexpected is encountered, such as construction.

"something new or unexpected" implies a change in the map. They even give the example of a car approaching a construction that was not on the map.
 
You don't rely on maps. You use them as super-useful additional data about what the road is doing and what it means. You notice when the road has changed (it's actually super-super-rare that you will be the first car ever in the fleet to discover that, but you have to be ready for it -- after one car notices it nobody else is surprised by it.) When it is changed you degrade down to doing what the Tesla has to do all the time.

I've ridden in many self-driving cars. Read the report of anybody who has, and the most common report is "boring." Tesla FSD is not boring. It's not even beta quality yet. I don't grade on a curve, but it would still be an F if I did. The rest would all be C and D; maybe Waymo can rate a B-, but perhaps only a C+.

I have an article about the Waymo cone problem above. That's why it doesn't get a B. But understand the cause of that was that Waymo has remote operators who help the cars solve problems by giving them commands to override their normal programming, and one of those operators gave a wrong command. The car trusts the human command over its own senses in that case, and thus got stuck. It's a bit like saying "FSD was stalled trying to figure out if it could turn, so the human grabbed the wheel, gunned it and crashed into a tree. Bad FSD!"
 
But we have all seen Waymo cannot handle traffic cones to the point of putting human life in danger.

That was one instance out of millions of miles and it was caused by a remote operator giving the car bad advice. Waymo handles cones just fine all the time.

Waymo is just as bad as Musk at exaggerating autonomy claims.

Comparing Waymo to Elon's lies is offensive! Elon has lied about FSD for 6 years now. All Tesla has is FSD Beta, which is a piss-poor "door to door L2". Waymo has deployed actual driverless robotaxis. Waymo has real autonomy. Waymo is not exaggerating anything.
 
You don't rely on maps. You use them as super-useful additional data about what the road is doing and what it means. You notice when the road has changed (it's actually super-super-rare that you will be the first car ever in the fleet to discover that, but you have to be ready for it -- after one car notices it nobody else is surprised by it.) When it is changed you degrade down to doing what the Tesla has to do all the time.

I've ridden in many self-driving cars. Read the report of anybody who has, and the most common report is "boring." Tesla FSD is not boring. It's not even beta quality yet. I don't grade on a curve, but it would still be an F if I did. The rest would all be C and D; maybe Waymo can rate a B-, but perhaps only a C+.

I have an article about the Waymo cone problem above. That's why it doesn't get a B. But understand the cause of that was that Waymo has remote operators who help the cars solve problems by giving them commands to override their normal programming, and one of those operators gave a wrong command. The car trusts the human command over its own senses in that case, and thus got stuck. It's a bit like saying "FSD was stalled trying to figure out if it could turn, so the human grabbed the wheel, gunned it and crashed into a tree. Bad FSD!"

Thanks for the info.

I suspect that Waymo would get a B now based on the 5th-gen I-Pace, as it is way better than the 4th-gen system that the Pacificas in Chandler are using and that got into the cone incident.
 
super-useful additional data about what the road is doing and what it means.

giving them commands to override their normal programming,

Nowhere near FSD, but in tinkering with simple autonomous model vehicles it was quite a challenge to find a sweet spot between one perfect, universal, infallible sensor (type) that could provide adequate data, and adding more and more sensors to give a more accurate and complete data set to work with.

Basic positioning could be done with a mix of dead reckoning and several beacon / grid / GPS type solutions, but interacting with a changing environment of random objects and obstructions usually had many edge cases that would not be detected reliably. As soon as you have inputs from multiple sensors, especially when trying to deal with hard-to-handle edge cases, the problem becomes what to believe. You only add a sensor (like map input) if you feel the cost and dependability make it worthwhile to get a more trustworthy input for a particular edge case.
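
For what it's worth, here's roughly what that hobby-scale positioning looks like in spirit (a made-up sketch, not my actual model-vehicle code; class and parameter names are invented): dead reckoning from wheel odometry, nudged back toward an occasional beacon/GPS-style fix with a simple complementary blend.

```python
# Hypothetical hobby-scale sketch: dead reckoning from wheel odometry,
# periodically corrected by a noisier absolute fix (beacon/GPS-style),
# blended with a simple complementary filter.
import math

class Positioner:
    def __init__(self, x=0.0, y=0.0, heading=0.0, trust_in_fix=0.2):
        self.x, self.y, self.heading = x, y, heading
        self.trust_in_fix = trust_in_fix  # 0 = ignore fixes, 1 = jump to fix

    def dead_reckon(self, distance: float, turn: float) -> None:
        """Advance the estimate from wheel distance and heading change."""
        self.heading += turn
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

    def absolute_fix(self, fix_x: float, fix_y: float) -> None:
        """Nudge the estimate toward a beacon/GPS fix without jumping."""
        a = self.trust_in_fix
        self.x = (1 - a) * self.x + a * fix_x
        self.y = (1 - a) * self.y + a * fix_y

p = Positioner()
for _ in range(10):            # drive roughly straight for 10 steps
    p.dead_reckon(distance=1.0, turn=0.01)
p.absolute_fix(9.5, 0.3)       # an occasional fix pulls drift back in
print(round(p.x, 2), round(p.y, 2))
```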

trusts the human command over its own senses
It's like FSD Beta is driving the intersection for the first time every time.

Not unreasonably, the human was given ultimate authority, which makes sense when the autonomous control has failed to find a solution.

The 'very first time' effect feels to me like a battle of conflicts being hammered out each time, and it only takes a knife-edge difference to drive the outcome in a different direction.

a car cannot rely on having an exact map of that particular intersection.

If vision only and the AI / NN were infallible, then there would be no need for alternative ways to try to reach a solution - human or other (HD map). As soon as you acknowledge other inputs could be needed, you have to arbitrate, and ultimately give one sensor / control priority.

Other than the arbitration going round in circles and fighting between alternatives, you still end up either shutting down or trusting your most trusted input / control.
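
A toy illustration of that "trust your most trusted input, or shut down" arbitration (purely hypothetical names, priorities, and numbers):

```python
# Hypothetical sketch of priority-ordered arbitration: the highest-priority
# source with a confident reading wins; if none qualify, the vehicle stops.
from typing import Callable, Optional

# Each source returns (value, confidence) or None if it has nothing to offer.
Reading = Optional[tuple[float, float]]

def arbitrate(sources: list[tuple[str, Callable[[], Reading]]],
              min_confidence: float = 0.7) -> tuple[str, float]:
    """Pick the highest-priority source that clears the confidence bar."""
    for name, read in sources:           # list is already in priority order
        reading = read()
        if reading is not None:
            value, confidence = reading
            if confidence >= min_confidence:
                return name, value
    # No input is trustworthy enough: the safe default is to stop.
    return "shutdown", 0.0

# Example priority ordering: remote operator > vision > map prior.
sources = [
    ("operator", lambda: None),          # no operator command right now
    ("vision",   lambda: (25.0, 0.9)),   # cameras confident: 25 mph target
    ("map",      lambda: (45.0, 0.99)),  # map very sure, but lower priority
]
print(arbitrate(sources))                # ('vision', 25.0)
```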

Are these or equivalent issues present in FSD? It feels to me that they could well be. If VO works (which it needs to), then radar should be superfluous. However, radar does certain things well most of the time, to the extent that the functional loss of deleting radar could possibly be made up for by not having to arbitrate between VO and radar views of certain scenarios.

Even with more sensors, you can still get incorrect / misleading data. Is this why (maybe secondary, reflected) radar point clouds (say, from an overhead gantry sign) combined with a shadow on the road in the direction of travel could add up to braking for a non-existent object?

Not HD maps, but a similar issue: deciding between speed-sign and map-data speed limits. Which to act on? And how to differentiate between a temporary construction speed limit sign, a poorly positioned speed limit sign for a nearby street, and map speed data for an over/underpass.
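
As a purely illustrative sketch of how that sign-vs-map decision might be arbitrated (made-up rules and thresholds, not anything Tesla or anyone else actually ships):

```python
# Hypothetical sketch of the sign-vs-map speed limit decision raised above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeedSign:
    limit: float           # mph as read from the sign
    lateral_offset: float  # metres from our lane centre; large = probably a side street
    looks_temporary: bool  # orange/construction styling detected

def choose_speed_limit(sign: Optional[SpeedSign], map_limit: float) -> float:
    if sign is None:
        return map_limit                      # nothing posted: trust the map
    if sign.lateral_offset > 6.0:
        return map_limit                      # likely a sign for a nearby street
    if sign.looks_temporary:
        return min(sign.limit, map_limit)     # construction: never exceed either
    return sign.limit                         # a clear posted sign beats the map

# Examples:
print(choose_speed_limit(None, 45.0))                         # 45.0
print(choose_speed_limit(SpeedSign(30.0, 1.0, True), 45.0))   # 30.0
print(choose_speed_limit(SpeedSign(25.0, 9.0, False), 45.0))  # 45.0
```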
 
I know the idea is that if you need vision to work when the map is wrong then it is better to just use vision all the time.
Not exactly. The idea is that if you need vision to work when the map is wrong, then that means the car can do it without maps. So if you trust that a map-using AV can safely drive when the maps are wrong, then you should trust the car when driving anywhere without maps (since maps can be wrong anywhere, anytime, because of construction, a parade, an accident, a cone, etc.). Also, as I'm sure you already know, Tesla does use maps but relies on them less than others' approaches. So there is some integration happening. I've played with FSD in some warehouse/industrial areas where the in-car maps display little info as far as roads/driveable space. FSD beta definitely had a harder time figuring out where to go, and was more jerky/hesitant than would be acceptable if a driver were behind me, but it did okay.

if maps can make your system more reliable in 99.9% of cases
That's the assumption I'm not convinced of. Waymo's working robotaxi is too restricted to be convincing. Case in point is that Waymo taxi rider who got stuck because of a few cones: 12:22

In my area, FSD Beta can't even do simple right turns without being very jerky and hesitant
Again, maybe it's regional, but FSD beta does great on right turns for me IF the cross traffic is controlled (e.g., traffic coming from the left has a stop sign, red light, etc.) or IF turning right at an intersection where we have a green light (right green or straight green). But yes, it's jerky / super hesitant / not good enough when turning right at a stop sign onto a large/fast street.
 