Welcome to Tesla Motors Club

Frustrated with FSD timeline

They've already indicated the coast-to-coast drive isn't happening in 2017 so it appears the timeframes in that article are now off too.
Elon said:

And then, the coast-to-coast drive, autonomous drive by the end of the year, I believe we're still on track for that. It is certainly possible that I may have egg on my face on that front. But if it is not, at the end of the year, it will be very close.
 
I have not observed that but will try it.

The rub is that you've got to glance at the IC (after visually inspecting your surroundings to ensure a safe lane change) when there is more than one car in the lane you are entering: all of the cars in that lane will appear in gray/white, and they will even angle themselves if they are also changing lanes. That's when you can see it's tracking not just the cars around you but cars in (presumably) both adjacent lanes for a good distance. That has to be vision, so it's weird they are hiding that information from users. Troubling, even. It must be flawed in a way I haven't noticed. I don't glance all that often, but I've seen it tracking a bunch of cars a dozen times when using this feature with no cars around me.
 

It could also be tracking ultrasonic targets (as it does while driving, using sound waves). That would explain the ghost cars...

Edit: I see you're talking about forward targets; yes, those would be vision...
 
I believe the first 95% of self-driving is pretty easy. Then each percent up to 100% gets exponentially harder. I hope they manage to solve that, because even a 99.9% perfect FSD would be pretty bad, with one stumble every 1,000 km (although maybe sufficient as a driver's aid).
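The arithmetic behind that "one stumble every 1,000 km" figure can be sketched as a back-of-envelope calculation, under the simplifying assumption (mine, not from the thread) that "X% perfect" means the system handles X% of 1 km segments without a stumble:

```python
# Back-of-envelope: distance between "stumbles" at a given reliability,
# assuming "X% perfect" means X% of 1 km segments need no intervention
# (a hypothetical reading of the phrase, for illustration only).

def km_between_stumbles(reliability_pct, segment_km=1.0):
    """Expected distance driven between stumbles."""
    failure_rate = 1.0 - reliability_pct / 100.0
    return segment_km / failure_rate

for pct in (95.0, 99.0, 99.9, 99.999):
    print(f"{pct}% perfect -> one stumble every {km_between_stumbles(pct):,.0f} km")
```

Under that reading, 99.9% works out to one stumble per 1,000 km, as the post says, and each extra "nine" multiplies the distance by ten, which is the exponential-difficulty point in miniature.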

AP1 and AP2 EAP probably cover 40% of self-driving, as things like this can happen. We've got a long way to go until 95%, but I do get what you're saying.

Tesla Autopilot crash caught on dashcam shows how not to use the system
 
 
I believe EAP represents no effort toward FSD. Most likely they are two completely different branches of code, where EAP is based on simple image recognition and hard-coded situation handling, while FSD is a more fully self-learning approach.

After all, the 2016 FSD video was filmed with software that is in no way similar to today's EAP.
 
Elon said:

And then, the coast-to-coast drive, autonomous drive by the end of the year, I believe we're still on track for that. It is certainly possible that I may have egg on my face on that front. But if it is not, at the end of the year, it will be very close.

So I read this quote through many times over, just to be sure, and I have confirmed: at no time did he say the end of which year.
 

Yes, that is obviously the case.

IMO it is likely Tesla never intended to have the EAP branch at all: MobilEye was supposed to be on the AP2 board (there is even an empty space for it there), but those plans failed when MobilEye ditched Tesla after the Brown incident and/or due to data-sharing disagreements. So Tesla had to come up with a solution sometime between spring/summer 2016 and October 2016...

The FSD branch was/is always separate from that; we shall see what its capabilities will be once we get the first version.
 
haha the question he's responding to is:
And I guess just related, are you still hoping to be able to do the autonomous drive L.A. to New York by the end of this year? Thank you.
Tesla (TSLA) Q2 2017 Results - Earnings Call Transcript | Seeking Alpha
 
After AP1 was introduced in October 2014, they had this on their website (screenshot)
[screenshot: summon.png]

Summon was released in Beta, 15 months later, in Jan 2016 with 7.1.

Now, 1.5 years after the software release, you still have to stand by and actively tell your car to go/stop.
After almost 3 years, the features described on the website in late 2014 are not there.

Ever since AP1 came out, Tesla have been overpromising and underdelivering.
This is fact. It's strange to me how anyone can argue otherwise.
 
It seems to me that with the P85D announcement, which coincided with the AP1 announcement, Tesla stepped over a line and started making announcements that were based on overly optimistic and/or misleading interpretations of future software updates.

That seems to have been a rather unfortunate path to take, considering that it turned one of Tesla's best-liked assets (the software update) into one of its most disliked liabilities when they could not follow up on the promises made.

I.e. that was the time when Tesla really started selling forward-looking statements as a demand lever, and not the current product...

After all, remember how the P85D was supposed to get its promised performance after a software update (which later turned into the series of 5/10k Ludicrous upgrades instead, when it couldn't be done)...

Many things said of AP1 did not materialize either. Some were listed above; the verbally mentioned traffic-light response never came, and we still don't have that ramp-to-ramp feature (it was supposed to come in late 2016).

And once you sell one forward-looking promise, you have to make more forward-looking promises to keep up appearances; hence the P90DL Vx and power-level debacles following the P85D, and obviously the whole AP2/EAP thing...

What if Tesla had just stuck to selling what they have? AP1 and AP2 hardware for example could still have come out exactly when they did and Tesla could have just said some very conservative things about what is coming, instead of going all ramp to ramp, by the curb, FSD on us...

"Today we are adding a forward looking radar and camera to Model S. We hope to ship driver's aid features in the future, starting with this or that today."
 
Again, you can't train the system by collecting raw video data and simply feeding it in.

You seem pretty knowledgeable. You must know about autoencoders and other unsupervised learning techniques. And you must also know about transfer learning and other feature extraction and embedding techniques. Why do you think these are not useful for NN based visual recognition systems?
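To make the autoencoder point concrete, here is a minimal sketch of unsupervised pretraining on unlabeled frames: a one-hidden-layer autoencoder that learns to compress and reconstruct its input with no labels at all. The data, dimensions, and learning rate are all illustrative stand-ins, not anything from Tesla's stack:

```python
# Toy autoencoder: learn a compressed embedding of unlabeled "frames"
# purely from reconstruction error. All sizes and data are fabricated
# for illustration; real systems use conv nets on actual video frames.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))          # 256 fake frames, 64 features each

n_in, n_hidden = X.shape[1], 16
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder weights
lr = 0.01

for step in range(300):
    H = np.tanh(X @ W1)                 # encode: 64 -> 16
    X_hat = H @ W2                      # decode: 16 -> 64
    err = X_hat - X                     # reconstruction error
    # Gradients of the mean squared reconstruction loss
    grad_W2 = H.T @ err / len(X)
    grad_H = err @ W2.T * (1 - H**2)    # backprop through tanh
    grad_W1 = X.T @ grad_H / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# The 16-dim code H is a learned feature embedding -- no labels required.
loss = float(np.mean((np.tanh(X @ W1) @ W2 - X) ** 2))
print(f"reconstruction MSE after training: {loss:.3f}")
```

The learned hidden code is exactly the kind of feature embedding that could later be reused (transfer learning) by a smaller labeled task, which is the argument for raw unlabeled video being worth collecting.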
 

What are you training? For the ability to understand an image, I'd agree with you. For the ability to steer down a road, you'd need to input the steering angle, as that's what you're interested in.
 

Yes of course. A full system requires a lot of parts and some of those will be best served by labeled inputs. Unless I misunderstood him, the author of the original post seemed to be suggesting that raw, unlabeled video was not worth collecting. He had previously made some very detailed comments about the architectures of various systems and I was wondering why he thought that unlabeled video was not useful.
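The labeled side of this exchange, learning to steer from logged (image, steering angle) pairs, is plain supervised regression, often called behavioral cloning. A toy sketch with fabricated stand-in features (nothing here reflects any real vehicle data):

```python
# Toy behavioral-cloning sketch: regress a steering angle from image
# features. Features, "road geometry", and noise level are invented
# purely to illustrate the supervised setup the posts describe.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features = 500, 32
X = rng.normal(size=(n_samples, n_features))            # stand-in image features
true_w = rng.normal(size=n_features)                    # hidden "road geometry"
y = X @ true_w + rng.normal(scale=0.1, size=n_samples)  # logged steering angles

# Least-squares fit: predict steering angle from features.
# The steering angle is the label -- this is why it must be recorded.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"training RMSE on steering angle: {rmse:.3f}")   # roughly the noise level
```

The point of the disagreement upthread is visible in the shapes: the unlabeled video alone gives you `X`, but this fit is impossible without `y`, the recorded steering angle.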
 