Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Elon: "Feature complete for full self driving this year"

An excellent starting point for FSD would be fully functional subsystems. As I keep documenting, TACC isn't capable of adequate obstacle detection. Just today it slowed me from 100 km/h to 45 km/h on the highway because a cyclist was in the breakdown lane (yes, that's legal here in Australia), and a little earlier it slowed from 80 km/h to 60 km/h because an overpass cast a hard shadow (which it presumably mistook for an obstacle). These are baby problems I've never had with my previous vehicles using bog-standard Bosch radar-based cruise control systems. Autosteer, on the other hand, has been ultra reliable in highway settings. I'll believe FSD announcements when those subsystems work correctly. And I haven't even mentioned traffic lights and signs not working, or speed limits coming out of a map database - which is a lot of fun in non-WAAS territory, where for lack of GNSS precision the car usually doesn't know whether you're on the off-ramp or still on the highway. There's a long way to go before we're anywhere near FSD.

My wish is for Tesla to adopt an approach borrowed from the aviation world, where autopilot subsystems are independently managed. One pull on the CC stalk engages TACC. Two pulls engage TACC+AS. One push on the stalk disengages everything. Why not the same logic for disengaging as for engaging? With TACC+AS on, one push on the stalk should disengage only AS, and two pushes should disengage both. Stepping on the brake always disengages everything. Simples...
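The engage/disengage logic proposed above can be sketched as a tiny state machine. This is just an illustration of the poster's proposal, not Tesla's actual stalk behavior, and all names (`StalkLogic`, `pull`, `push`, `brake`) are invented:

```python
# Sketch of the proposed stalk logic (hypothetical names, not Tesla's code).
# State tracks which subsystems are active: TACC (adaptive cruise) and AS (Autosteer).

class StalkLogic:
    def __init__(self):
        self.tacc = False
        self.autosteer = False

    def pull(self, times=1):
        """One pull engages TACC; two pulls engage TACC + Autosteer."""
        self.tacc = True
        if times >= 2:
            self.autosteer = True

    def push(self, times=1):
        """Mirror of engaging: with both on, one push drops only Autosteer;
        two pushes (or one push with only TACC on) drop everything."""
        if times >= 2:
            self.tacc = self.autosteer = False
        elif self.autosteer:
            self.autosteer = False
        else:
            self.tacc = False

    def brake(self):
        """Stepping on the brake always disengages everything."""
        self.tacc = self.autosteer = False


stalk = StalkLogic()
stalk.pull(times=2)   # TACC + Autosteer engaged
stalk.push()          # only Autosteer drops; TACC keeps regulating speed
```

The appeal of this symmetry is that the driver always knows which layer a single input removes, the same way aviation autopilots let you drop one mode at a time.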
 
The only redeeming thing about Elon's FSD predictions is that Tesla seems likely to be the first to achieve a mass-market FSD system. If any other company solves the perception problem with cameras, there's little doubt that Tesla's team would quickly adapt their approach and achieve similar results with less hardware.

I am curious. What do you mean by "solving the perception problem with cameras"? What problem is there with cameras that needs to be solved in your opinion?

The reason I ask is that there are plenty of companies like Waymo and Mobileye that already have camera vision that can recognize lane lines, road debris, cars, trucks, cyclists, pedestrians, road markings, road signs, traffic lights, etc. And they have camera vision that can create 3D maps and determine depth and range. Now, I am not saying that their camera vision is perfect. But I am curious what needs to be solved. Is there something missing from camera vision, or is it just a matter of increasing reliability?

Also, it is a misconception that when you "solve perception", you automatically have autonomous driving. Solving perception is only step 1 of autonomous driving. That just gives you a car that can "see" the world. You still need to teach the car how to drive, i.e. how to handle all the scenarios that come up while driving. Most experts actually consider perception the easiest part of autonomous driving - with the right sensors and computing power, of course - because using camera, lidar and radar for perception is fairly straightforward at this point. But solving the driving policy so that your car drives in a smart and safe way, that's the hard part that has not been solved yet.
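The two-stage split described above (perception produces a world model; driving policy decides what to do with it) can be sketched in a few lines. Everything here is invented for illustration - real stacks run neural networks and planners where these stubs sit:

```python
# Toy sketch of the modular pipeline: perception vs. driving policy.
# All function names, object fields, and thresholds are hypothetical.

def perceive(camera_objs, radar_objs, lidar_objs):
    """Stage 1 ("solved" part): fuse sensor streams into one object list.
    Real systems run detection NNs here; we just concatenate stubs."""
    objects = []
    for source in (camera_objs, radar_objs, lidar_objs):
        objects.extend(source)
    return objects

def driving_policy(objects, route):
    """Stage 2 (the unsolved part): turn the world model into an action.
    A real policy handles thousands of scenarios; this handles one."""
    if any(o["kind"] == "pedestrian" and o["dist_m"] < 20 for o in objects):
        return "brake"
    return "follow_route"

world = perceive([{"kind": "pedestrian", "dist_m": 12}], [], [])
action = driving_policy(world, route="home")
```

The point of the sketch: even with a perfect `perceive`, all the hard judgment lives in `driving_policy`, which is why solved perception alone doesn't give you a self-driving car.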

What I'm saying is solving vision is a requirement for FSD. Tesla will be the first to solve it, if it's ever solved with the current tech, but Elon's timeline has been wrong time and time again. Some of us have been following Elon's FSD predictions for 3-4 years... Lol

I agree that solving vision is a requirement for FSD. But it is only one piece of FSD. Again, companies like Waymo and Cruise have excellent camera vision. They solved perception, but it still took them years of developing their prototypes to handle edge cases and get to the quality of autonomous driving that they have now.
 
I am curious. What do you mean by "solving the perception problem with cameras"? What problem is there with cameras that needs to be solved in your opinion?

I've been following NN developments over the years and have yet to find research that demonstrates camera-based vision NNs that can achieve the trailing 9s Tesla is aiming for. Elon said in an interview around 2016-17 that the perception problem has been solved, but I don't think that is correct.

We may never achieve the trailing 9s with the current NN approaches...
 
In my observation as of 2020.8.x, the camera processing system is too laggy, judging by its steering and braking response times.

If it were a student at the wheel, you'd tell them "look further ahead".

I wonder if they could use the same technique humans do to process imagery. We only have a roughly 3° cone of high-resolution vision; we constantly dart around our FOV and stitch the important snapshots together. This reduces the brain power needed to sustain high frame rates (humans are over 120 fps before adrenaline, and maybe double that with it).
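The "small high-res cone plus coarse periphery" idea sketches easily: keep a full-resolution crop at the center of attention and subsample the rest, which slashes the pixel count a downstream network must chew through. The shapes and parameter names here are invented for illustration:

```python
import numpy as np

# Toy foveation: full-resolution central patch + strided (coarse) periphery.
# "fovea" and "stride" are made-up parameters, not anything Tesla uses.

def foveate(frame, fovea=64, stride=4):
    h, w = frame.shape[:2]
    cy, cx = h // 2, w // 2
    # high-res "cone": a small centered crop at native resolution
    patch = frame[cy - fovea // 2 : cy + fovea // 2,
                  cx - fovea // 2 : cx + fovea // 2]
    # coarse periphery: the whole FOV, subsampled by `stride` in both axes
    periphery = frame[::stride, ::stride]
    return patch, periphery

frame = np.zeros((480, 640), dtype=np.uint8)
patch, ctx = foveate(frame)
# Full frame is 480*640 = 307,200 px; the foveated pair is
# 64*64 + 120*160 = 23,296 px - under a tenth of the pixels to process.
```

A real system would also move the fovea around the frame (the "darting" the post describes) based on where the planner needs detail, e.g. distant vehicles or signs.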
 
1) It's not a schtick. A schtick is a gimmick or a comedy routine. I am not doing a comedy act. I am expressing a serious opinion that I think is based on some logic and fact.

2) Tesla sold AP2 hardware in 2016 claiming it was "FSD capable". They sold an FSD package, saying that FSD only needed regulatory approval - despite the fact that they repeatedly reported ZERO autonomous miles to the CA DMV. Then Elon repeatedly promised FSD, coast-to-coast demos, FSD divergence from EAP, etc., and missed those deadlines. So yes, when you sell a product that is not ready yet and repeatedly make promises that don't pan out, I think there is a big disconnect.

The fact is that there is a big difference between missing FSD promises when you have no real FSD yet (Tesla) and companies who miss deadlines but actually have some autonomous driving (Waymo or Cruise).

3) I am basing "enough sensors" on what the leaders in autonomous driving are using, like Waymo, Cruise and Mobileye. They are the ones who actually have achieved real autonomous driving so I think that's a good standard for what sensors are needed.

When did I ever state that camera vision is good enough when it is "advanced enough"? First of all, how do you define "advanced enough"? That is very vague. I said Mobileye has a demo prototype with cameras only, but I never said that it was good enough for deployment. The fact is that Mobileye is still planning to include lidar when they actually deploy their L4/L5 system to customers on public roads, because they don't think camera-only is good enough to meet the safety and reliability requirements for deployment.

But again, I am just going by what I see the leaders in autonomous driving doing. The current leaders who have real autonomous driving now have some of the best camera vision on the planet and still use lidar. That tells me that lidar is still required for safe, reliable, autonomous driving.

No offense but when it comes to FSD, who should I trust more? The companies with actual autonomous driving or the company with only L2 and camera vision that is still a work in progress?
Yes, we both know what a shtick is, and I watch you parrot it time and time again. You know what FSD requires from a technical standpoint, you know who can and can't do it, you know X system alone will never be capable, you know the efficacy and safety of X system will never be "enough" - you know with such a degree of confidence that you speak in absolutes, when we both know you do so erroneously and arrogantly. In effect, a shtick -- it is annoying and tiresome.

Again, I do not equate failing to meet a promise as dishonesty - to each his own. Nor do I see a direct correlation between the lack of reporting on autonomous miles to the California DMV and dishonesty. Do you think Tesla only worked on/tested/developed their autonomous systems in 2019 by way of the one 12.2 mile route they reported to the California DMV? Please... be logical.

You said "Tesla's don't have enough sensors for proper perception and rely too much on "solving camera vision" which is not yet as advanced as it needs to be." That states both that camera vision is not enough and that it's just not advanced enough yet. Regardless, Waymo is operating in perhaps one of the easiest locales for FSD, and Cruise has no public offering. But if Waymo operating in just one very tiny geographic location, where all conditions are ideal for FSD, implies to you that their tech stack is the only means of accomplishing this endeavor, I'm not sure what to say in response... how one can logically connect these disconnected things is beyond me. We can say Tesla demoing FSD on just a 12.2 mile loop is dishonest, yet Waymo operating only in one very specific geographic location, where there are fewer variables as it relates to "solving" FSD, is the gospel?

We get it, you hold these opinions and ideals to be self-evident and absolute, why then keep contributing to a discussion?
 
In my observation as of 2020.8.x, the camera processing system is too laggy, judging by its steering and braking response times.

If it were a student at the wheel, you'd tell them "look further ahead".

I wonder if they could use the same technique humans do to process imagery. We only have a roughly 3° cone of high-resolution vision; we constantly dart around our FOV and stitch the important snapshots together. This reduces the brain power needed to sustain high frame rates (humans are over 120 fps before adrenaline, and maybe double that with it).
This is a great description!

I also agree that the "vehicle control" stuff behaves as if it wasn't programmed very well. (This could, of course, be a byproduct of garbage in / garbage out.) The "car making a left turn in front of you fully clears the lane, then AP hits the brakes" scenario makes no sense this far into the game. It SEEMS like the cameras see, recognize and follow the turning car, so why does it wait until the car has already cleared the lane to severely slow down?

Overall positioning of the car on the road is also wonky, though I believe this has been slowly improving. It doesn't take a good line through a turn even when it seems to "know" the radius - it enters the turn too fast, then reacts too late on a decreasing-radius turn, like a n00b (like me) learning the racing line. Why isn't it reliably picking entry/exit points on turns at this stage?

The hopeful part of me thinks it has to do with overall processing power and current state of the code and that they literally can't improve it in its current state, hence the rewrite we've heard about. Then again, this is what, rewrite number 3?
 
Regardless, Waymo is operating in perhaps one of the easiest locales for FSD, and Cruise has no public offering. But if Waymo operating in just one very tiny geographic location, where all conditions are ideal for FSD, implies to you that their tech stack is the only means of accomplishing this endeavor, I'm not sure what to say in response... how one can logically connect these disconnected things is beyond me. We can say Tesla demoing FSD on just a 12.2 mile loop is dishonest, yet Waymo operating only in one very specific geographic location, where there are fewer variables as it relates to "solving" FSD, is the gospel?

That is incorrect information. Waymo has done 20 million autonomous miles in 25 cities across the US. They are not limited to just one small area.

Silly me, for thinking that a company with 20 million autonomous miles in 25 US cities might be closer to solving autonomous driving than a company with only 12 autonomous miles.

 
That is incorrect information. Waymo has done 20 million autonomous miles in 25 cities across the US. They are not limited to just one small area.

Silly me, for thinking that a company with 20 million autonomous miles in 25 US cities might be closer to solving autonomous driving than a company with only 12 autonomous miles.

Yeah, we're going to ignore the majority of my response? Okay, cool... perhaps I should have bulleted them or prepended every response with a number.

So, Waymo completed those first 10 million miles over the course of 10 years, correct? How does that translate into FSD? Has Waymo been operating an FSD fleet for 10+ years? By this silly metric, we can include all miles driven by Teslas on Autopilot as "autonomous miles". Regardless, you're sidestepping my entire response and counter-points to your silly absolutisms.
 
Have you listened to the Third Row podcast with the ex-Tesla programmer? Episode 13. There is no good way to verify his level of involvement, his competence, or his insight into the big picture, but he seems to think Tesla still has a large advantage. This is despite the fact that he jumped ship for competitor Waymo or Uber.

He seems to see the code rewrite as a big deal that should usher in new features and reliability at a faster rate than what we have seen so far. He also seems to confirm that shadow mode does exist, and that small tweaks to the code can be completed without a full firmware update.

I say let's wait and see what this year brings.
 
So, Waymo completed those first 10 million miles over the course of 10 years, correct? How does that translate into FSD? Has Waymo been operating an FSD fleet for 10+ years? By this silly metric, we can include all miles driven by Teslas on Autopilot as "autonomous miles". Regardless, you're sidestepping my entire response and counter-points to your silly absolutisms.

Yes, Waymo had a FSD fleet for those 10 years. Remember, they started with an autonomous driving prototype with solved perception, left over from Google's autonomous driving project, and then worked to improve it from there. Now, Waymo has L4 FSD. Its 20 million miles were all autonomous miles because they were L4 miles. No, we cannot count AP miles as autonomous because AP is classified as L2. And if you think I am cheating and being unfair by dismissing AP miles but counting Waymo miles just to make Waymo look better, I am not. Waymo is L4, which is considered autonomous; AP is L2, which is not. I am merely going with the SAE definition of autonomous driving: according to the SAE, L0, L1 and L2 are not autonomous, while L3, L4 and L5 are. When Tesla achieves autonomous driving, whether L3, L4 or L5, I will happily count those miles as autonomous.
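The SAE convention referenced above reduces to a one-line rule (per SAE J3016, levels 0-2 are driver assistance, levels 3-5 are automated driving). The mileage figures below are invented purely to illustrate the bookkeeping:

```python
# SAE J3016 convention: only L3+ miles count as "autonomous".
# The fleet entries and mileage numbers here are made up for illustration.

def is_autonomous(sae_level: int) -> bool:
    return sae_level >= 3

fleet_log = [
    ("robotaxi ride", 4, 8.3),    # L4 -> counts as autonomous
    ("L2 highway drive", 2, 52.0) # L2 -> driver assist, does not count
]
autonomous_miles = sum(mi for _, level, mi in fleet_log if is_autonomous(level))
```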

And to your other point, yes, I am sure Tesla has worked on FSD outside of those 12 miles. But assuming that Tesla is not cheating and is truly reporting their autonomous miles to the CA DMV as they are legally required to do, then Tesla did not actually test autonomous driving on public roads. They admit as much in their letter to the CA DMV: they used other methods such as shadow mode and simulations but did not test autonomous driving on public roads, which is why they did not report any miles during those previous years. Now, it is fine if Tesla used shadow mode or simulations, but that is not the same as actual autonomous driving. Shadow mode is not autonomous driving. Waymo did 1.4 million miles of actual autonomous driving on public roads last year in CA alone.

If I missed another point you made, please repeat it without resorting to insults and I will happily answer it in good faith.

Have you listened to the Third Row podcast with the ex-Tesla programmer? Episode 13. There is no good way to verify his level of involvement, his competence, or his insight into the big picture, but he seems to think Tesla still has a large advantage. This is despite the fact that he jumped ship for competitor Waymo or Uber.

He seems to see the code rewrite as a big deal that should usher in new features and reliability at a faster rate than what we have seen so far. He also seems to confirm that shadow mode does exist, and that small tweaks to the code can be completed without a full firmware update.

I say let's wait and see what this year brings.

I have not listened to it yet. I am curious why he left Tesla to join Waymo. That seems suspicious to me. I mean, if Tesla has such a huge advantage in FSD and Waymo is such a lost cause according to Tesla fans, then why leave the "winner" to join the "loser"? Doesn't he feel dirty working for a company which Elon says is using lidar as a crutch?

I don't think Tesla has a huge advantage when it comes to developing the software for autonomous driving. Tesla has great ML engineers but Waymo has great ML engineers too. In terms of data, yes, Tesla has 2 billion miles of AP data which is huge. But Waymo has 10 billion miles of simulation data. And Waymo has 20 million miles of real autonomous driving under their belt which is a lot more than Tesla has. So while Tesla has a lot of AP data, Waymo has a lot more simulation data and a lot more real world autonomous data than Tesla has.

However, Tesla's big advantage is that they already have a large fleet on roads with the AP software. And Tesla can upload new features quickly to the entire fleet via OTA update. So Tesla does have a huge advantage when it comes to deploying software to their fleet. If Tesla does achieve autonomous driving on the current AP3 hardware, then Tesla could upgrade the entire fleet to robotaxis with a push of an OTA update, which would be huge and would certainly jump them way ahead of Waymo. But that advantage depends on Tesla achieving FSD. Tesla still needs to solve FSD. I do think that even without achieving FSD, Tesla still has a huge advantage over other automakers in terms of deploying OTA updates. Because even deploying L2 features via OTA updates still puts Tesla ahead of other makers in terms of L2 driver assist.

I do agree that the AP code rewrite is a big deal and will usher in great features. And yes, shadow mode is real.
 
I’m no programmer, I’m not in the software field. But I don’t think he sees it as winner and loser. It’s not black and white.

Anyways, give it a listen.
 
I’m no programmer, I’m not in the software field. But I don’t think he sees it as winner and loser. It’s not black and white.

I was being a bit facetious there. It was a reference to how Tesla "fanboys" are always saying that Tesla will win the race to L5 and that Waymo will lose the race to L5.

But I will listen to the interview of course. Thanks.
 
Do you know what that guy does? Let me break it to you. He's not a software engineer.
He's a data labeler. Remember the captchas you solve when you log onto sites? That's what he does.
He has absolutely zero experience in development, and in the video he sounds like a complete Tesla fanboy.
He didn't even know that Waymo has cars in Phoenix and that they are rider-only (driverless).
 
Do you know what that guy does? Let me break it to you. He's not a software engineer.
He's a data labeler. Remember the captchas you solve when you log onto sites? That's what he does.
He has absolutely zero experience in development, and in the video he sounds like a complete Tesla fanboy.
He didn't even know that Waymo has cars in Phoenix and that they are rider-only (driverless).

Listening to the interview, he is definitely a Tesla "fanboy", which makes sense since the Third Row Podcast is a fanboy hangout.

But he talks about how amazing AP is and how amazing the AP rewrite is. Don't get me wrong, I am glad that Tesla is doing this AP rewrite; I am sure it will help improve AP. But it seems like something that should have been done a long time ago. I mean, apparently, up to now, Tesla was trying to do FSD with separate camera outputs that were not "talking" to each other? What?!? And only now are they realizing that they need to stitch all the camera outputs together into a single 3D map and have a single NN run everything? Isn't that something that Waymo and others already did a long time ago? I am not a computer engineer, which is why I am asking you, but it seems pretty obvious, especially if your entire FSD depends on using cameras to "solve perception", that you would need to stitch all the camera outputs together into a single 3D map run by a single NN.

It just seems to me that in the field of autonomous driving and camera vision, Tesla is just now re-discovering what others have already done a long time ago. My understanding is that Tesla is still working on perception whereas Waymo and others have already solved perception and have moved on to solving more difficult planning and driving policy problems. But it might explain why some Tesla fans mistakenly think that Tesla is so far ahead in FSD. They don't know that this stuff with camera vision has already been done. So they think Tesla is pioneering new stuff when in reality it's already been done by Waymo, Cruise, Mobileye and others.
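The "stitching camera outputs into one representation" idea described above can be sketched as projecting each camera's feature grid into a shared top-down grid, so a single network reasons over one fused view instead of several independent per-camera outputs. All shapes, offsets, and names here are invented; real systems use learned geometric projections, not a paste-at-offset:

```python
import numpy as np

# Hedged sketch of multi-camera fusion into one bird's-eye-view (BEV) grid.
# The "projection" is a toy paste-at-offset; names and shapes are hypothetical.

def fuse_cameras(per_camera_feats, grid=(100, 100)):
    """Accumulate each camera's feature grid into one shared BEV grid."""
    bev = np.zeros(grid)
    for feats, (row, col) in per_camera_feats:
        h, w = feats.shape
        bev[row:row + h, col:col + w] += feats  # overlaps add up naturally
    return bev

# Two fake cameras, each contributing a 10x10 patch at its own offset:
front = (np.ones((10, 10)), (0, 45))
left  = (np.ones((10, 10)), (45, 0))
bev = fuse_cameras([front, left])
```

The payoff of a fused grid is exactly the one the post describes: an object straddling two cameras' fields of view appears once in `bev`, so a single NN can track it, rather than two camera pipelines each seeing half of it.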
 
Just an interesting note, I noticed stop lights in Redwood City have been updated with yellow borders like this:

Retroreflective Borders on Traffic Signal Backplates – A South Carolina Success Story - Safety | Federal Highway Administration

But some of them have an alternating black/yellow dash-like border. It could be useful in helping autonomous vehicles detect traffic lights.

We have such yellow borders in the Netherlands on traffic lights at crossings where statistically more accidents happen than at others.
 
The metric "number of autonomous miles/kilometers" really is completely useless.

I would like to see metrics like "unique situations" and "failures".

Regarding HD maps: to me, an HD map is something that describes road details in highly detailed lat/lon terms, like where the middle of each lane is.

The fact that the map states where a speed bump is, or whether a crossing has traffic lights, is definitely map enrichment. Now you could argue that once a lat/lon identifies a traffic light linked to one or more lanes, that is HD. The thing is, we humans do this too!

Ever approached a large crossing with traffic lights for the first time? How do you approach it differently from a similar crossing you drive through daily?

I reason it like this:

If the car's sensors are adequate to recognize a feature, store it, and use it later for convenience, it is NOT HD. Meaning: the system can behave without the data, but behaves better in the future thanks to the cached data.

However, if the car requires the data because it cannot reliably detect the features at distance with its sensors, then it IS HD.
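The distinction drawn above reduces to a single test: can the sensors re-derive the item on the fly, in time to act on it? If yes, storing it is just an enrichment cache the car could live without; if no, the stored item is a hard HD-map dependency. The terminology and example items below are mine, invented to illustrate the rule:

```python
# Sketch of the "enrichment cache vs. HD dependency" rule argued above.
# classify_map_item and the example labels are hypothetical terminology.

def classify_map_item(sensors_can_detect_in_time: bool) -> str:
    """Cache if sensors could re-derive it on the fly; HD dependency if not."""
    return "enrichment_cache" if sensors_can_detect_in_time else "hd_dependency"

examples = {
    # A speed bump is visible well in advance -> storing it is a convenience.
    "speed_bump_location": classify_map_item(True),
    # Centimeter-level lane centerlines beyond sensor range -> the car needs
    # the stored data to behave correctly at all.
    "lane_centerline_cm": classify_map_item(False),
}
```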

By the way, I don't agree with the earlier statement that you need very high-resolution cameras, but every pixel helps, especially for detecting small nuances like symbols on traffic light bulbs. I sincerely doubt the wide-angle camera on the Tesla can detect such symbols.

Still, that is also not really a big issue. Tesla could *easily* (partially) upgrade these in, say, 5 years if really needed.


Last but not least, if any competitor solves camera-only autonomous driving, Tesla can *always* license it and deploy it to its fleet.