Welcome to Tesla Motors Club

Elon: "Feature complete for full self driving this year"

@diplomat33 Musk can be commended for many things. So far autonomous driving is not one of them — and indeed it could turn out to be one of his rare mistakes (picking a fight with the SEC being another one).

Well, I think the jury is still out on FSD. But in terms of his fights with the SEC, yeah, I completely agree with you on that. It is super dumb and seems to be a case of his hubris going too far.
 
 
So yeah, I think it is a given that NOA will be much improved in other areas before we get no-confirmation lane changes. I am eager to test NOA on 2019.8 when I get that update, because I want to see if and how much NOA has improved. I think the level of improvement will be a telltale sign of how close we are to getting NOA with no confirmation.

I think Tesla can walk and chew gum at the same time, as they say. Tesla can work to improve NOA along with nav maps AND work to get NOA ready for no confirmation at the same time. In fact, I would guess that is exactly what Tesla has been doing. They don't have to sacrifice one to do the other. In fact, I would argue that improving NOA along with nav maps is a prerequisite for removing the stalk confirmation. So they really need to do "all of the above". So it is almost required that the new NOA will be much improved in other areas too.

I don't see the unconfirmed lane changes as something in itself that needs to be worked on. Instead it's simply improving NoA until it's ready to be unleashed in its full form.

So nothing is being sacrificed by holding off. It's hard to tell from a Musk tweet what's really going on. He could have tweeted as a delay tactic.

I love the fact that unconfirmed lane changes are simply behind a remote switch. That way we can all assess 2019.8 (or some later version) well before the remote switch is ever thrown. I haven't seen any indication that NoA behavior has significantly changed with 2019.8. I might start a thread for places where NoA fails to handle an exit properly, to try to track changes in behavior. It won't be a thread for discussion, but simply a list of locations submitted by TMC users, where either a location is added that it fails on, or a location is updated to say the problem has been fixed. I'll probably create one in the PNW section as it's going to be highly region-dependent.

I'll be a lot more excited about NoA when it picks the second lane from the right a couple of miles before the point where the rightmost lane splits off somewhere else, when that second lane is the correct lane to be in for the desired exit.
 
Well, I think the jury is still out on FSD.

Depends what question the jury is deliberating. If the question is "Has he missed every AP schedule he ever gave by huuuuuge margins?" then the jury is done deliberating and the accused is guilty as charged by unanimous agreement. I'd say it's also pretty cut and dried that the demo video which persists to this day on the website is an outright lie: "The person in the driver's seat is only there for legal reasons". That can't be anything but a lie. Neither can the implication that only "validation and regulatory approval" were required to release it at that time. It's just a plain lie.

Now, if the question is "will Tesla ever release L5 autonomy on AP2 vehicles", well, I would guess that because of the word "ever" the jury will be out on that forever. Meaning they will never deliver it, but we'll never prove that they won't ever deliver it. The goal posts will just keep moving until nobody is talking about it because it will be completely irrelevant at some point.
 
Depends what question the jury is deliberating. If the question is "Has he missed every AP schedule he ever gave by huuuuuge margins?" then the jury is done deliberating...

Now, if the question is "will Tesla ever release L5 autonomy on AP2 vehicles", well, I would guess that because of the word "ever" the jury will be out on that forever.

I agree with you on both those questions. But the question for me is "will Tesla deliver hands free driving in the reasonable future?" On that question, I think the jury is still out. That's the only question I care about.
 
My feeling is that if the solution to driving policy and path planning turns out to be purely reinforcement learning (RL) in simulation, then Waymo stands the best chance of solving autonomy first. Alphabet has DeepMind and Google Brain — a deep bench of RL researchers who are arguably the best in the world.

Google also has massive compute for running simulations. I would guess Amazon, Microsoft, and Google have the most compute of any companies in the world, since they seem to be the largest cloud compute providers. So the two things that matter most to developing RL algorithms and applying them in simulation — expertise and compute — are the two things Alphabet has the most of.

If RL in sim is indeed the solution to driving policy and path planning, I have a hard time seeing how Mobileye gets there first. Mobileye has access to less expertise and less compute. So why would it solve this RL problem before Waymo?

Maybe the answer is that Mobileye has a head start on applying RL to driving policy and path planning. But I’m skeptical of this idea because researchers at Waymo, including the head of research, have been talking and writing lately about the need for “smart agents” to accurately simulate real world driving. Two co-authors of the ChauffeurNet paper explicitly discussed using imitation learning to create smart agents that could enable RL in simulation. This suggests to me that Waymo has been thinking about how to do RL in sim, and — apparently unlike Mobileye — believes the creation of smart agents is necessary to get it to work. It doesn’t seem to me like Mobileye had the idea to use RL first, and Waymo just hadn’t considered it.
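For what it's worth, here is a toy Python sketch of what "RL in sim with smart agents" means mechanically. Everything in it is invented for illustration (the reactive lead car standing in for an imitation-learned agent, the one-parameter ego policy, the reward function, and the random-search stand-in for RL); it is not how Waymo or anyone else actually does this:

```python
import random

# The "smart agent": a lead car whose behaviour would, in a real pipeline,
# come from imitation of logged human driving. Here a fixed reactive rule
# stands in for that learned model. The point is that the simulator's other
# actors respond to the ego car instead of replaying a script.
def lead_car_step(lead_pos, ego_pos):
    gap = lead_pos - ego_pos
    return lead_pos + (1.0 if gap > 5 else 2.0)  # speeds up when tailgated

def rollout(ego_speed):
    """Reward the ego car for progress, heavily penalise closing within 2 m."""
    ego_pos, lead_pos, reward = 0.0, 10.0, 0.0
    for _ in range(20):
        ego_pos += ego_speed
        lead_pos = lead_car_step(lead_pos, ego_pos)
        gap = lead_pos - ego_pos
        reward += ego_speed - (100.0 if gap < 2.0 else 0.0)
    return reward

# Trivial stand-in for RL: random search over a one-parameter policy
# (a constant target speed), evaluated entirely in simulation.
random.seed(0)
best_speed, best_reward = 0.0, rollout(0.0)
for _ in range(200):
    candidate = random.uniform(0.0, 3.0)
    r = rollout(candidate)
    if r > best_reward:
        best_speed, best_reward = candidate, r

# The search settles on a speed that makes progress without tailgating.
print(best_speed > 0.0 and best_reward > 0.0)
```

If the simulated lead car behaved unrealistically (say, never reacting to being tailgated), the policy learned against it would be optimised for the wrong world, which is exactly the "smart agents" concern the Waymo researchers raise.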

So, that’s why I think Waymo is most likely to solve autonomy first if pure RL in sim turns out to be the solution to driving policy and path planning.

On the other hand, if the solution requires massive amounts of unlabelled driving data, then for now Tesla is the best poised to solve autonomy first. For instance, if the solution requires billions of miles’ worth of state-action pairs for imitation learning (or a sampling of rare state-action pairs that statistically only appear often enough in billions of miles of driving), then right now as far as I know, no other company is set up to collect that data on that kind of scale.
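To make the "state-action pairs" idea concrete, here is a toy behaviour-cloning sketch. The states, actions, and nearest-neighbour lookup are all made up for illustration; a real system would train a neural network over camera input, not do a table lookup:

```python
# Toy illustration of imitation learning from logged state-action pairs.
# "State" here is just (speed_mps, gap_m to the lead car); the "action" is
# an acceleration command the human driver took in that state.

# Hypothetical logged fleet data: ((speed_mps, gap_m), accel_mps2)
logged_pairs = [
    ((30.0, 80.0), 0.5),   # open road: speed up
    ((30.0, 20.0), -2.0),  # closing fast: brake
    ((15.0, 40.0), 1.0),   # slow with room: accelerate
    ((0.0, 5.0), 0.0),     # stopped behind a car: hold
]

def cloned_policy(state):
    """Copy what the human did in the most similar logged situation:
    behaviour cloning in its crudest, 1-nearest-neighbour form."""
    def dist(s):
        return (s[0] - state[0]) ** 2 + (s[1] - state[1]) ** 2
    _, action = min(logged_pairs, key=lambda pair: dist(pair[0]))
    return action

# A state close to "closing fast" imitates the logged braking action.
print(cloned_policy((29.0, 22.0)))  # -2.0
```

The reason fleet scale matters in this framing is the rare pairs: four logged situations cover almost nothing, and even millions of miles may contain only a handful of examples of, say, a mattress falling off a truck.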

This is true for Tesla whether a) large-scale imitation learning is a sufficient solution by itself, or b) large-scale imitation learning is insufficient by itself but necessary to create smart enough agents to enable RL in sim — which is a sufficient solution. Whatever the exact solution is, if it requires massive amounts of unlabelled driving data, then Tesla is the best bet.

I think this is a good, systematic way to think about the competitive landscape. It seems widely accepted that Waymo has an advantage in RL expertise and in compute, and it seems widely accepted that Tesla has an advantage in collecting unlabelled driving data. Moreover, it seems widely accepted that what’s important for pure RL in sim is primarily expertise (a proxy for algorithms) and compute. It also seems widely accepted that what’s important for pure imitation learning or for IL-bootstrapped RL in sim is unlabelled state-action pairs data. This framework for thinking therefore relies on widely accepted assumptions, and avoids relying on any highly controversial, highly uncertain, or highly subjective assumptions.

I would like to know if there is a good, systematic objection to this framework that doesn’t rely on any highly controversial, uncertain, or subjective assumptions.

The first objection I think of is that driving policy and path planning will be solved by hand-coded software, so AI expertise, compute, and unlabelled driving data won’t factor into it. But this is an idea that Waymo, Tesla, and Mobileye all seem to reject, and possibly Uber ATG as well. I would be curious to know if there is an autonomous vehicle company that believes driving policy and path planning can be solved by hand-coded software.

Another potential objection is that solving perception will require massive labelled sets of sensor data, and that not all companies can afford the cost of labelling. This might turn out to be the case in the future. But at least so far, there is no evidence I’m aware of that any company is spending more than other companies can afford. Waymo is reportedly spending about $1 billion a year, total — including non-R&D-related expenses. By comparison Tesla’s overall R&D budget is $1.4 billion — including R&D for new vehicles, batteries, electric motors, and manufacturing.

Can anyone think of another objection?
 
This framework for thinking therefore relies on widely accepted assumptions, and avoids relying on any highly controversial, highly uncertain, or highly subjective assumptions.

...and instead rely on simplified assumptions that may have no correlation with how reality unfolds.

I think most of us can indeed agree Waymo is well (best?) positioned for AI in general by virtue of its parent company, and Tesla is well (best?) positioned to collect consumer fleet data.

The problem comes with the assumptions you make. In my view we can’t assume either of those will automatically translate into a lead on autonomous driving.

The mix of technologies, techniques, collaborations and experiences that will first lead to autonomous driving at volume is unknown. All it takes for your hypothesis to fall apart is any one of the following:

1) Failure by Waymo or Tesla to make effective use of their respective leads in these fields for whatever reason (e.g. making a wrong technology choice at some juncture, causing delay)
2) The autonomous driving solution eventually relying on something other than an RL lead or a consumer fleet data lead (there is reason to believe the successful choice of technologies and their mix is not settled yet)
3) Risks posed by the regulatory or market environment treating solution providers unevenly (i.e. the "best company" does not always win)

Finally, I’m not sure even the original assumptions are watertight. Mobileye’s REM mapping fleet is collecting an increasing amount of data as we speak that could well rival Tesla’s eventually. Also, @Bladerskb suggests Mobileye got a head start on actually using RL compared to Waymo, which could be more significant than Google’s general in-house prowess in this area for a great number of reasons.

To be clear, I still consider Waymo the current leader in autonomy — based on their demonstrated progress, though, not on a demonstrated strategic advantage necessarily. It may well be that Waymo maintains this lead.
 
I'd say it's also pretty cut and dried that the demo video which persists to this day on the website is an outright lie: "The person in the driver's seat is only there for legal reasons". That can't be anything but a lie. Neither can the implication that only "validation and regulatory approval" were required to release it at that time. It's just a plain lie.
Wow, do you also write briefs for the SEC?
Your implication is a blatant misstatement of the text of the blog post accompanying the video:
We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
Before activating the features enabled by the new hardware, we will further calibrate the system using millions of miles of real-world driving to ensure significant improvements to safety and convenience.
While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control. As these features are robustly validated we will enable them over the air, together with a rapidly expanding set of entirely new features.
Way more reasons than regulatory approval.

Did the person in the seat touch the wheel? No, you say? Then why do you claim they were needed? Having a safety driver is a legal requirement under CA self-driving testing regulations. Otherwise Tesla could just as easily have put someone in the rear seat with an E-stop/override...

Being late to deliver based on year-plus estimates does not equal did not / will not deliver. The jury cannot even be dismissed for deliberations until Tesla states FSD is complete, all cars that are going to be updated have been, and all the revenue has been booked.

Edit: link to blog post
 
Way more reasons than regulatory approval.

The problem is, Tesla claimed it was about validation, calibration and regulatory approval.

In reality, it turns out, it was about Tesla not having made the system yet and the video was likely just a hardwired demo.

That’s quite a change.

Just go back to October 2016 and read any Tesla site. The expectations people had based on Tesla’s communications at the time were dramatically different from today.
 
The problem is, Tesla claimed it was about validation, calibration and regulatory approval.

In reality, it turns out, it was about Tesla not having made the system yet and the video was likely just a hardwired demo.

I'm not saying people weren't confused or didn't misinterpret Tesla's post. As we've gone over in the video thread, the post's theme was the hardware capabilities, and it specifically said that the software lacked features, and would for some time, just to reach AP1 parity. The post doesn't even mention regulations as a constraint.

Before activating the features enabled by the new hardware, we will further calibrate the system using millions of miles of real-world driving to ensure significant improvements to safety and convenience.
As these features are robustly validated we will enable them over the air, together with a rapidly expanding set of entirely new features.

It's a process flow...
 
@mongo Yes but the problem is — many believe at least — that there were no features to activate, calibrate or validate for much of what Tesla was selling. Seems they had to make them first, and probably have yet to make many of them.

You can see how that would go beyond a misunderstanding, if so?

You see the difference: ”As these features are implemented/developed...” vs. ”As these features are robustly validated...”
 
@mongo Yes but the problem is — many believe at least — that there were no features to activate, calibrate or validate for much of what Tesla was selling.

You see the difference: ”As these features are implemented/developed...” vs. ”As these features are robustly validated...”

People misunderstanding a statement does not make that statement a lie. I'm not saying the communication was effective / good / whatever, but I'm also not going to let people use double quotes to imply it said something other than what it did. That is why I always try to include links and direct quotes to the source material. Worst case, I'll use single quotes ' ' to mean an approximate statement.

Yeah there is a difference there, but you are also pulling out one sentence from a paragraph that, as a whole, shows the progression. In context:

First the system needs to be calibrated by lots of testing before it can be activated (this is the implement/develop step):
"Before activating the features enabled by the new hardware, we will further calibrate the system using millions of miles of real-world driving to ensure significant improvements to safety and convenience. "

Since it is not calibrated, previous features of AP1 will not exist:
"While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control. "

Once these, and other, features are working, they will be released (even more implementation/development):
"As these features are robustly validated we will enable them over the air, together with a rapidly expanding set of entirely new features."

For the fleet:
"As always, our over-the-air software updates will keep customers at the forefront of technology and continue to make every Tesla, including those equipped with first-generation Autopilot and earlier cars, more capable over time."
 
People misunderstanding a statement does not make that statement a lie.

Technically true, but irrelevant, as it does not prove the opposite either. Nowhere does Tesla even hint that the features do not exist yet; they only talk of validation, calibration and approval, formulated in various ways. If it turns out the features did not exist yet, it certainly can be argued Tesla lied.

Some people seem to believe Tesla lied when they claimed EAP/FSD was awaiting validation/calibration/approval (instead being in waiting to be made basically). Others believe otherwise. I have no idea what they did.
 
@mongo is right, but the thing is most people don't read or parse the fine print. Most people just saw a video of a Tesla driving completely on its own on both city streets and highways, even dropping the driver off and finding a parking space with nobody in it, with the tag line "the driver is just here for legal reasons", and they reached the very natural conclusion that Tesla had finished L4 autonomy. I don't blame them for reaching that conclusion, which is why I am on record as saying that Tesla should have been clearer in their communication. They should have said something like "this video is a simulation of what we believe FSD will do in the future on this hardware. It is not a demonstration of current FSD".
 
@mongo is wrong if the features that Tesla claimed needed activation/validation/calibration/approval did not exist yet in 2016. A feature not yet made cannot merely need those, and saying so would be extremely misleading.

If I were delivering your new car and said your Model 3 was awaiting final detailing before you could get it, but it turned out the car was not even made yet, yes, that would be a lie, even though technically it is waiting for the detailing too...

The open question of course is: did the features exist or not. On this there are different beliefs.