Welcome to Tesla Motors Club

Seems like FSD is a complete crock

I don't think TACC is available if you have a manual gearbox.

Also, I believe this relies on front radar. I don't think there are any cameras involved, and it uses GPS to get the speed limit?

I remember driving a GTI with advanced cruise control. I was following a car, but because of a curve the car in front of me was not visible, so my car started to accelerate. That was a little bit scary.

Have you ever experienced something similar?

Correct, front-facing radar; that's how the cruise control is aware of the traffic.
Reading the road signs is an extra-cost option, and it will display the last 3 road signs. This includes road works, road narrowing, and temporary speed restrictions (at the side of the road and overhead), plus other temporary changes which obviously aren't in the static map data tracked via GPS.
 
Sorry, I bought the top-spec Golf R. The options were TACC (standard across the entire Golf range since 2014), then lane assist, then blind spot monitoring, then road sign reading (though VW doesn't act on the signs, it just displays them). So that's four different pricing options, available across the whole range.
Auto park is only available on a subset of models.
In the US, on the Jetta, that is not how it works. They just have the basic stuff.
 
AP will remain at Level 2 until Tesla actuaries think there is low enough risk to call it L3/L4. So, yes, I think it will remain at L2 "for a while".

I was talking about when AP becomes capable of L3+. That will happen before Tesla actually certifies it as L3+. First, AP will become capable of L3+, then it will increase in reliability to the point where Tesla certifies it as L3+.
 
> I was talking about when AP becomes capable of L3+. That will happen before Tesla actually certifies it as L3+. First, AP will become capable of L3+, then it increases in reliability where Tesla certifies it as L3+.
That's true - freeway NOA is now L2+.

Anyway, the whole levels thing is just theoretical. In practice development of FSD won't proceed along those categories.
 
> That's true - freeway NOA is now L2+.
>
> Anyway, the whole levels thing is just theoretical. In practice development of FSD won't proceed along those categories.

Yes, with Tesla it's FSD feature-complete, then the march of the 9s. The more Teslas they sell, the more training data they have and the safer things become. With Tesla's approach, it won't be a gradual increment from L2 to L3 to L4 to L5. They will operate at L2 until they have the data to prove the car is safe to drive itself, and then it will be L4.

The levels-of-autonomy scale is not ideal for a vision-based approach. It works better for a brute-force approach, but with vision, safety will continuously improve as you get more training data, until regulators approve your system based on statistics.
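The "march of the 9s" idea can be made concrete with a back-of-the-envelope calculation. This is a hedged sketch using the statistical rule of three (zero failures observed in N miles bounds the per-mile failure rate below roughly 3/N at 95% confidence); the failure-rate targets are illustrative, not Tesla's actual requirements.

```python
# Rough sketch: fleet miles needed to statistically bound a failure
# rate, using the "rule of three": zero failures observed in N miles
# implies a per-mile failure rate below about 3/N at 95% confidence.
# The targets below are illustrative, not Tesla's actual numbers.

def miles_needed(max_failures_per_million_miles: float) -> float:
    """Failure-free miles needed to bound the rate at ~95% confidence."""
    rate_per_mile = max_failures_per_million_miles / 1_000_000
    return 3.0 / rate_per_mile

for label, target in [("1 failure per 1M miles", 1.0),
                      ("1 per 10M miles", 0.1),
                      ("1 per 100M miles", 0.01)]:
    print(f"{label}: ~{miles_needed(target):,.0f} failure-free miles")
```

Each extra nine of reliability multiplies the required failure-free mileage by ten, which is why fleet size matters for any statistics-based approval argument.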
 
> Yes with Tesla, it's FSD feature complete then the march of the 9s. The more teslas they sell, the more training data they have and the safer things become.
Probably a topic for another thread, but I don't think they need any more Teslas. Now it is just a matter of rolling out features, getting the needed training data and getting better. They can already get a lot more data than they need. If you look at verygreen's hacking data, the triggers are already being set at a 0.1% probability of collection. So they have 1000x more data than they need!

Maybe when they geographically expand AP/FSD, they'll need more cars in those areas.
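The trigger mechanism being described (per verygreen's findings, campaigns that capture a matching event only with some small probability) can be sketched roughly like this. The trigger names and the 0.1% rate here are stand-ins for illustration, not actual Tesla campaign parameters:

```python
import random

# Hypothetical sketch of fleet-side trigger sampling: each trigger
# fires on matching events, but only a small fraction of matches are
# actually captured and uploaded. Names and rates are illustrative.
TRIGGER_RATES = {
    "cut_in_ahead": 0.001,   # 0.1% of matching events uploaded
    "hard_braking": 0.001,
}

def should_upload(trigger: str, rng=random.random) -> bool:
    """Decide whether this matching event gets captured and uploaded."""
    return rng() < TRIGGER_RATES.get(trigger, 0.0)

# Dialing a rate up or down tunes the fleet's data volume without
# changing what the trigger itself looks for.
captured = sum(should_upload("hard_braking") for _ in range(100_000))
print(f"captured {captured} of 100,000 matching events")
```

The point of the 0.1% figure in the post above is exactly this: the sampling rate, not the fleet size, is currently the limiting knob.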
 
> Probably a topic for another thread - but I don't think they need any more Teslas. Now it is just a matter of rolling out features, getting needed training data and getting better. They can already get a lot more data than they need. If you see verygreen's hacking data - the triggers are already being set for 0.1% probability of collection. So they have 1000x more data than they need!
>
> May be when they geographically expand AP/FSD, they need more cars in those areas.

For routine events, sure, they have far more than they need.

But making FSD work is largely about capturing and learning to handle the unique cases, like the mid-air car jumping over the logs across the road that they showed at Autonomy Day.

For those moments, Tesla needs as much data from as many cars as they can get so they get recordings of the weird things happening on the rare occasions when they happen.

(Though for level 3, "things are weird, help!!" might be enough.)
 
> That's true - freeway NOA is now L2+.
>
> Anyway, the whole levels thing is just theoretical. In practice development of FSD won't proceed along those categories.
Except that existing law references the SAE definitions. They seem perfectly sensible to me...
> Yes with Tesla, it's FSD feature complete then the march of the 9s. The more teslas they sell, the more training data they have and the safer things become. With Tesla's approach, it won't be a gradual increment of L2 to L3 to L4 to L5. They will operate in L2 until they have the data to prove the car is safe to drive, then it will be L4.
>
> The levels of autonomy scale is not ideal for a vision based approach. It works better for a brute force approach, but with vision, safety will continuously improve as you get more training data until regulators approve your system based on statistics.
This is exactly the same as any other AV company. Everyone is attempting to continuously improve until they can prove that their systems are safe based on statistics. I can't really think of another way of doing it...
Currently Waymo and Cruise are effectively operating as level 2 systems since they require an alert driver, just like Tesla.
 
> They can already get a lot more data than they need. If you see verygreen's hacking data - the triggers are already being set for 0.1% probability of collection. So they have 1000x more data than they need!
Or they don't have the capacity to process more in a useful way.
> For routine events, sure, they have far more than they need.
>
> But making FSD work is largely about capturing and learning to handle the unique cases, like the car in mid air jumping over the logs across the road that they showed at Autonomy Day.
>
> For those moments, Tesla needs as much data from as many cars as they can get so they get recordings of the weird things happening on the rare occasions when they happen.
This is easier said than done. The current system relies on triggers (since recording, uploading and labeling everything is not feasible), which means you have to know in advance what you're looking for. But if that is the case and the expected event is rare, you're probably better off just sending out test drivers to provoke that particular scenario in a targeted way.
 
> Or they don't have the capacity to process more in a useful way.

> This is easier said than done. The current system relies on triggers (since recording, uploading and labeling everything is not feasible), which means you have to know in advance what you're looking for. But if that is the case and the expected event is rare, you're probably better off just sending out test drivers to provoke that particular scenario in a targeted way.

That's why the massive Level 2 fleet is important. Every time you manually disengage AP, the car sends data back to Tesla about it. (At least, I think that's what they told me at Autonomy Day.)

So they don't have to know you're about to hit something weird - they just have to recognize when it happened and you grabbed the wheel, and have the car send them the full videos and the car's understanding of its environment.

Once they get enough such random events, they can crank the data engine and you won't have to grab the wheel for that case anymore.
 
> That's why the massive level 2 fleet is important. Every time you break AP manually, the car sends data back to Tesla about it. (At least, I think that's what they told me at Autonomy Day.)
But how does that help to recognize "the car in mid air jumping over the logs across the road"? I think people here are seriously overestimating what AI can do today.
> So they don't have to know you're about to hit something weird - they just have to recognize when it happened
Which is exactly the difficult part, unless you record and upload everything and have an army of people watching all those videos.
 
> But how does that help to recognize "the car in mid air jumping over the logs across the road"? I think people here are seriously overestimating what AI can do today.
>
> Which is exactly the difficult part, unless you record and upload everything and have an army of people watching all those videos.

Without training, it won't recognize the car jumping the logs.

What I think I'd do is have the car upload its understanding of the environment and navigation, and set up an automatic system to look for typical reasons for the manual disengagement - difficult lane changes, nearing an exit you planned to take, etc.

Then have humans look at the cases where the computer can't figure out why AP was disengaged.
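The triage idea in this post can be sketched as a simple filter, with everything unexplained falling through to human review. The specific cause checks, field names, and thresholds here are hypothetical, just to show the shape of the pipeline:

```python
# Hypothetical triage of AP disengagement events: auto-explain the
# common causes, queue the rest for human review. Field names and
# thresholds are made up for illustration.

def classify_disengagement(event: dict):
    """Return a known cause string, or None if a human should review."""
    if event.get("planned_exit_in_m", float("inf")) < 500:
        return "approaching planned exit"
    if event.get("target_lane_gap_m", float("inf")) < 10:
        return "tight gap during lane change"
    return None  # no obvious cause in the vector-space/nav data

events = [
    {"id": 1, "planned_exit_in_m": 300},
    {"id": 2, "target_lane_gap_m": 6},
    {"id": 3},  # nothing obvious -> candidate edge case
]
needs_human_review = [e["id"] for e in events
                      if classify_disengagement(e) is None]
print(needs_human_review)  # -> [3]
```

Only the unexplained remainder needs expensive human labeling, which is the whole appeal of the scheme.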
 
> Without training, it won't recognize the car jumping the logs.
>
> What I think I'd do is have the car upload its understanding of the environment and navigation
But that's just it. The car doesn't have an "understanding" of things it hasn't been trained to recognize properly, but those things are exactly what you are most interested in.
> and set up an automatic system to look for typical reasons for the manual disengagement - difficult lane changes, nearing an exit you planned to take, etc.
If their neural nets could recognize things like "difficult lane changes" (whatever that means), they could just as well fix Autopilot to handle them.

Note that I'm not saying that collecting Autopilot disengagements is useless. It can, for example, be used to identify locations with an abnormally high number of disengagements. But collecting training data for rare "edge cases" is a lot harder and requires labeling by humans in most cases.
 
I kinda don’t mind waiting for the software to improve. I am concerned that the “other shoe” drops in 2 years when they say that the current camera-sensor set isn’t sufficient after all and you’d need to purchase a new car to really get FSD.

"It would be like getting a spinal transplant" as Elon stated for AP1->AP2. I suspect this is what will happen though. They thought they could do enhanced summon with AP1 back in the day.

In that (unlikely) case, Tesla will return (part of) the money.

After all, we know FSD has been solved using just 2 cameras ;)

I'll bet this actually goes down as a class action and we all get $20. No way Tesla is refunding even partial FSD purchases unless forced to by the courts. They will claim "well, you HAVE FSD now! It is NoA, Autopark and Summon!" It's not like Tesla would refund me my FSD purchase today without a court order if I decide I am tired of waiting.
 
> I'll bet this actually goes down as a class action and we all get $20. No way Tesla is refunding even partial FSD purchases unless forced to by the courts. They will claim "well, you HAVE FSD now! It is NoA, Autopark and Summon!" It's not like Tesla would refund me my FSD purchase today without a court order if I decide I am tired of waiting.
Apparently there has already been a class action suit that was settled for max of ~$280.

Sheikh, et al. v. Tesla, Inc.

PS: I think if Tesla delivers City NOA + Summon & Autopark, their "FSD" is done. They have never promised, IIRC, robotaxi or any particular level of FSD to customers on the website.
 
> But that's just it. The car doesn't have an "understanding" of things it hasn't been trained to recognize properly, but those things are exactly what you are most interested in.
>
> If their neural nets could recognize things like "difficult lane changes" (whatever that means), they could just as well fix autopilot to handle them.

Of course it doesn't understand things it hasn't been trained on. But Tesla said they are having the cars create a vector-space understanding of the world around them. So what I was suggesting is actually kind of similar to what you were saying, I think: have the computer analyze the vector space and nav data from disengagements for obvious causes, like trying to change lanes into a packed lane (where a driver like me might choose to disengage AP and merge manually), and then pass the video from the cases without an obvious cause in vector space to humans, to figure out whether they contain an edge case.
 
> Apparently there has already been a class action suit that was settled for max of ~$280.
>
> Sheikh, et al. v. Tesla, Inc.
>
> ps : I think if Tesla delivers City NOA + Summon & Autopark, their "FSD" is done. They have never promised, IIRC, robotaxi or any particular level of FSD to customers on the website.

They promised me something for $3k above and beyond EAP. Back then it was access to all 8 cameras. So at least one of those features needs to not be given to EAP-only owners; then I will consider myself as having gotten "FSD". Maybe that will just end up being Hardware 3, who knows.

Problem is as of right now I paid $3k for features above and beyond EAP that I bought for $5k, and I haven't gotten anything for that $3k yet.

Edit: and if you read the "old" FSD promise, it was something along the lines of: I get in my car, take no action, and the car drives me to my destination. No level promised, but door to door with no driver input.

That class action was for the EAP delays, as far as I know there hasn't been one for FSD non-deliveries yet.
 
> They promised me something for $3k above and beyond EAP. Back then it was all 8 camera access. So at least one of those features needs to not be given to EAP-only owners. Then I will consider myself as having gotten "FSD". Maybe that will just end up being hardware 3, who knows.
>
> Problem is as of right now I paid $3k for features above and beyond EAP that I bought for $5k, and I haven't gotten anything for that $3k yet.

Keyword is "yet". You haven't gotten anything yet but you will get the promised FSD features. Just wait. In fact, the website says you will get the promised FSD features by the end of this year.
 
> Keyword is "yet". You haven't gotten anything yet but you will get the promised FSD features. Just wait. In fact, the website says you will get the promised FSD features by the end of this year.

Well, I bought FSD in December 2017 with the idea that the "6 months, definitely" must be right around the corner. I mean, they had shown that FSD video over a YEAR earlier, in 2016, so they MUST be close to releasing the first feature, right?

Besides, I will need the hardware upgrade to get the features, and we know Elon said they would maybe start those at the end of the 4th quarter (plus whatever Elon time), so I won't be getting my FSD features until sometime in 2020 at the earliest.

In the end, I just want ONE thing the EAP owners don't get. Just one, and I will be a happy camper. It doesn't have to be real FSD or anything, just something I can point to and say I paid $3k and got X.