I wish I was in the special club of no-hands driving. Is that for people who have driven more than 10K miles on FSD?

Elon said hands-free for 10K miles was supposed to be invoked last January.
Related discussion: Poll: Will hands free driving be a big win for Tesla?
 
Just like the absence of any other sign on any other road, I guess: just training.

If you train it at junctions with and without the sign, it will learn the necessary actions.
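To make that "just training" idea concrete, here's a toy illustration (made-up filenames and labels, nothing Tesla has published): the model never receives a rule about the sign, only example scenes whose correct action differs by whether the sign was in the frame.

```python
# Toy illustration only (invented filenames/labels, not Tesla training data):
# no rule about the sign is ever given; the correct action just differs
# depending on whether the sign appears in the scene.
training_examples = [
    {"scene": "junction_with_no_turn_sign.png", "action": "wait_for_green"},
    {"scene": "junction_without_sign.png",      "action": "turn_on_red_ok"},
]

# With enough of both kinds of junction, the network infers the association
# between "sign pixels present" and "don't turn" without ever parsing text.
for ex in training_examples:
    print(ex["scene"], "->", ex["action"])
```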
Perhaps I'm not being clear.

Sometimes, you see a sign and it tells the car what it can do. Sometimes you see a sign and it tells the car what it can't do. Sometimes you don't see a sign and, depending on where you are in the world, you are allowed to do something or it is illegal to do something. How does the car tell the difference?
 
[{Overly} optimistic speculation] While it will take years, it seems Tesla will train different versions: you will be able to pick some of them, and others will be specific to your location. Since there are no hard-coded rules, you can't have a single Chill, Average, or Assertive setting, so there will be an individually trained version for each (and maybe more). Same for states and jurisdictions and their specific laws. Of course, this level is going to take a near-unbelievable amount of backend computing, but I bet that is Tesla's direction now.

Also, this could be another revenue stream for Tesla: offering optional self-driving "effects" and maybe even unique tuning versions influenced by your requests/driving style.

EDIT: Having different versions was hinted at in the Red Light running attempt.
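A purely hypothetical sketch of what that per-jurisdiction/per-profile selection could look like (every name and checkpoint path below is invented for illustration):

```python
# Entirely hypothetical: each jurisdiction/profile pair gets its own trained
# weights, and the car picks one at runtime. All names/paths are made up.
CHECKPOINTS = {
    ("US-CA", "chill"):     "fsd_us-ca_chill.ckpt",
    ("US-CA", "assertive"): "fsd_us-ca_assertive.ckpt",
    ("DE",    "chill"):     "fsd_de_chill.ckpt",
}

def select_model(jurisdiction: str, profile: str) -> str:
    # Fall back to the jurisdiction's chill variant, then a generic model,
    # if the requested combination was never trained.
    return CHECKPOINTS.get(
        (jurisdiction, profile),
        CHECKPOINTS.get((jurisdiction, "chill"), "fsd_generic.ckpt"),
    )

print(select_model("US-CA", "assertive"))  # -> fsd_us-ca_assertive.ckpt
print(select_model("DE", "assertive"))     # falls back -> fsd_de_chill.ckpt
```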
 
V12 will see map data removed from the planning function of FSD; it will be NN planning only from then on.

That doesn’t make any sense. In order to drive properly and efficiently, humans use their internal HD maps. So maps are definitely not going anywhere.

Again, beware of what Elon says - it’s just a loose representation of what might happen. It just doesn’t make any sense.

Also, the Tesla under V12 doesn't actually read the sign; instead, it simply associates an image with an action. It just needs to know from training what action to take when it sees a sign it recognises - it doesn't matter to the car what is written on it; it doesn't care.
That would be reading the sign! Even I would admit that, even though I am skeptical of the actual parallels between NNs and a human reading a sign. It’s analogous in any case, even if it is not even close to the same process physically.

Sometimes you don't see a sign and, depending on where you are in the world, you are allowed to do something or it is illegal to do something
They’ll need a map and specific training for that area, just like a human would do. If they want to go with this approach where it is “learned,” anyway.

Far from clear that would be the most efficient use of training resources. Or that it would even work (but that's a separate issue, and would be an issue even if there were just one area and one set of rules). If they actually, truly get it working for one large area with uniform rules, they should be good, though it could take years to train everything else - let alone the first case.
 
That would be reading the sign! Even I would admit that, even though I am skeptical of the actual parallels between NNs and a human reading a sign.
It's not reading the sign, though. Reading the sign would involve determining the lettering, words, or symbols on it, establishing the meaning of those words or symbols, interpreting the rule from the meaning, and adding the rule to decision making. The NN is just saying "pixels of x color in x position on road make y factor higher or lower, which makes z output more or less likely."

In a lot of ways it's more like what an experienced driver does in an area or region they are familiar with. But when they go to, e.g., a different country, a human starts reading the signs again.
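If it helps, here is a minimal sketch of that "pixels in, action out" idea (a toy model I made up, in no way Tesla's actual architecture). Note there is no OCR step anywhere: sign pixels only nudge activations toward one action or another.

```python
# Toy end-to-end model: raw camera pixels straight to action logits.
# Nothing here extracts or parses the text on a sign.
import torch
import torch.nn as nn

class PixelsToControl(nn.Module):
    def __init__(self, num_actions: int = 3):  # e.g. proceed / slow / stop
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_actions)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 3, H, W) camera image -> action logits
        return self.head(self.features(frame))

model = PixelsToControl()
frame = torch.rand(1, 3, 240, 320)      # stand-in for a camera frame
probs = model(frame).softmax(dim=-1)    # "z output more or less likely"
print(probs)
```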
 
The NN is just saying “pixels of x color in x position on road make y factor higher or lower which makes z output more or less likely.”
Understood. But for more unique signs it sounds like it's going to take a long, long time (years?) - e.g., the example below.

[Attached photo: an uncommon road sign]


There is also a rare lighted stop sign that FSDb doesn't recognize. I know of only one in Ohio.

[Attached photo: a lighted stop sign]


For me, I just want FSD to make the UPL (unprotected left turn) out of my neighborhood, which it cannot do today. I'm not optimistic that Dojo will somehow learn this turn, as I'm likely the only FSD car that even attempts it. I'm way down the priority list. Is it possible for Dojo to identify and correct the many edge cases around the USA (or the world) effectively? Seems unlikely, but I hope I'm wrong.
 
FSD also doesn't recognize faded stop signs, stop signs with a lot of bullet holes, or crooked stop signs.

The signs have to be damn exact or the car just ignores them as we've all seen with speed limit signs.
 
For me, I just want FSD to make the UPL out of my neighborhood, which it cannot do today. I'm not optimistic that Dojo will somehow learn this turn, as I'm likely the only FSD car that even attempts it.

That you are disengaging at this UPL means it's quite likely video clips are being fed back to Tesla on this turn, so perhaps it's already on their list of things to do. That's the beauty of having FSD Beta at Level 2 currently.

How they then prioritise which of the thousands of clips to teach the NN, we of course don't know. I guess there's a grading system of urgent fixes over less urgent ones. It's also possible that your UPL will be fixed based on other clips from elsewhere before they even look at yours, which would solve your problem.
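Pure speculation on my part, but a triage queue like this toy sketch is the sort of thing I imagine (every field name and clip id below is invented, nothing from Tesla):

```python
# Hypothetical triage of disengagement clips by severity and fleet frequency.
import heapq

clips = [
    {"id": "upl-ohio-001",   "severity": 2, "fleet_hits": 1},     # a rare one-off UPL
    {"id": "red-light-917",  "severity": 9, "fleet_hits": 4200},  # urgent, fleet-wide
    {"id": "faded-stop-112", "severity": 7, "fleet_hits": 310},
]

# Negate the scores so heapq's min-heap pops the most urgent clip first.
queue = [(-c["severity"], -c["fleet_hits"], c["id"]) for c in clips]
heapq.heapify(queue)

while queue:
    _, _, clip_id = heapq.heappop(queue)
    print("review next:", clip_id)
# -> red-light-917, faded-stop-112, then (eventually) upl-ohio-001
```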
 
It's not reading the sign, though. Reading the sign would involve determining the lettering, words, or symbols on it, establishing the meaning of those words or symbols, interpreting the rule from the meaning, and adding the rule to decision making. The NN is just saying "pixels of x color in x position on road make y factor higher or lower, which makes z output more or less likely."

In a lot of ways it's more like what an experienced driver does in an area or region they are familiar with. But when they go to, e.g., a different country, a human starts reading the signs again.
Now you are getting into the actual mechanics. And the differences between a brain and artificial NNs (they aren't the same, or even similar, at all!).

By your definition, OCR and other text recognition tools are also not reading anything. Which I would sort of agree with - it's not really doing the same thing as a human, probably.

But I was speaking of the end result. In a perfect implementation (what I was referring to), it’s nearly indistinguishable from a human reading the sign. I said that to solve the proposed problem the system would have to “read” the sign. There is not really another way.

And then it was said that no, actually, the decision would be made based on the presence/absence of a sign in the NN's decision-making. It would all just feed into the network, and then there would be an output action.

But this is effectively reading the sign - just like text recognition. The specific output “turn right on red here” based on the specific overall scene is the same as “output the letter P” based on the specific scene in a photo or whatever.

What is different, in terms of end result?

See the examples from @KArnold above. These will just go into the network, and it will flawlessly produce outputs regardless of a huge variety of inputs (because every possible input will have been thoroughly trained and there are no other unrecognizable signs out there - or maybe it is a foundation model with emergent capabilities that does not need to be trained on every possibility, but I have no idea about that and it doesn't matter). If a text or sign recognition output were the desired result instead of driving, it would be perfect: every sign in an image would produce a flawless output, with all the text lifted from the scene. So how is that not "reading" signs?

Obviously it has absolutely no idea what the signs mean, what the output it is producing is, or whether the output is correct (humans can double-check whether they produced the right result) - it is just producing the output. But there's a strong argument that this is still "reading." Otherwise these tools that capture text from images would be useless! But they are not.
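For what it's worth, this is all an OCR tool does too - something like pytesseract below "reads" a sign with zero comprehension of what the words mean. (Assumes the Tesseract binary plus the pytesseract and Pillow packages are installed, and "stop_sign.jpg" is a made-up filename.)

```python
# A real OCR library producing text output with no understanding of it.
from PIL import Image
import pytesseract

sign = Image.open("stop_sign.jpg")        # hypothetical photo of a sign
print(pytesseract.image_to_string(sign))  # e.g. "STOP" - output, not comprehension
```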
 
How does that apply to speed limits? Clearly, letting anyone set the max speed above the posted limit is illegal.
I think making the choice is the key here: people set the max based on their own risk tolerance, and they know what they're getting into. People don't choose whether or not FSD rolls through a stop sign, but Tesla wouldn't be the one paying if a police officer tickets you for it.

As usual, the NHTSA relies heavily on complaint submissions from owners/users, and we can be sure the NHTSA's action here was driven by FSD Beta users complaining - maybe some because they were pulled over and warned, or already ticketed, for something they didn't choose to do.
 
It was an option: aggressive mode "included rolling stops." But again, that's a safety hazard, and that's why it was removed.
 
If I recall correctly - though it's going back a while - rolling stops happened regardless of what setting you were on, despite the descriptions.

Either way, the NHTSA is largely reactive and relies heavily on owner reports/complaints to drive its investigations and other actions.