Elon: "Feature complete for full self driving this year"

If it requires supervision, then by definition it is not Level 5. If the software is not yet reliable enough to operate without supervision then, again, by definition it is not Level 5. "Everything necessary for Level 5" includes software mature enough to not require supervision.

"All the hardware necessary for Level 5" is a different statement (which Musk made and then had to retract, with regard to my HW2.5 car) than "Everything necessary for Level 5."

Sure, but I don’t think Elon said the car would be Level 5 by end of 2019. Just that it would be Level 5 feature complete by end of 2019. That is an important distinction because it leaves room for it to be a prototype of a Level 5 car.

I would agree though that Musk during Autonomy Investor Day did expect Tesla to have an actual Level 5 car somewhere in the U.S. within 2020, but that sounded like a limited experiment to my ears. The end-of-2019 feature-complete bit sounded more like having implemented everything for Level 5 (hence Level 5 feature complete) but without yet validating it for truly driverless driving.
 
And I agree that FC will be rough around the edges, make mistakes, require driver supervision. My point is you cannot call that true L5. There is no such thing as L5 that has poor reliability and requires driver supervision.
Here is where I disagree with both of you. Feature Complete is a software development term, not a QA or support term. So when I hear Musk refer to it, he is talking about the software itself, not the knowledge base / Neural Network. It still needs training.
That last sentence is, as I understand it, a guardrail imposed by Tesla indicating the training set for some feature is not yet at the 'reliability' level where they will assume legal responsibility for its failure to behave in some situation. There will always be edge cases; there will always be failures. That's the way Musk addresses it in the ARK interview (I'll look it up, as it's important to this conversation). The software gets released, and until regulators say take off the training wheels / guard rails, that supervision will be in place. But the software will still be feature complete.

He mockingly refers to the other technology as "if then else". That is when all the rules are explicitly coded, and if the car can't do something, or can't do it flawlessly, the issue is with the "if then else": it's a bug and goes back to development. What Tesla is doing is writing a framework in which the car can be taught to drive much like a 16-year-old is taught to drive. That 16-year-old improves through experience, creating more nuanced neurons to understand how to navigate any given scenario. This is the NN, or knowledge base, shipped with the software. Over time the car learns to be a better driver by importing the additional neurons that the 16-year-old acquired because it is living.
Where the disconnect is, is the distinction between the NN and the software.
 
@wcorey

That could fit if Tesla was using anything close to end-to-end neural networks, but as far as we can tell they are using a lot of software algorithms complemented by an increasing range of task-specific neural networks.

It would thus not be just about training the networks after FC but on FC they would need to have all the features in place within this structure (and then of course continue training their range of networks on top of that).

It is not like they can simply implement a simple traditional software framework around a single NN, complete that, call it FC, and then just keep on training the NN. Tesla’s system — as far as we can tell so far — is far more a mix of components that need traditional implementing for it to be FC... as is everyone’s, really, so Tesla is not special or different in this regard at all.
 
Sure, FC software can be deployed in a limited fashion; I didn't think that was contested. I obviously assumed release would mean GA. I would assume that is the general meaning for most people?

So yeah Musk's FC for "Level 5 no geofence" at end of 2019 can certainly mean deployment to testing fleets, but I did not take his words to mean a general release (unless I missed some other statement from him, which is possible as I acknowledged in #1672).

To be clear, how widely do you think Musk's "Level 5 no geofence" software will be deployed at the end of 2019?

I claim no authority over this. These are my views as a Tesla car owner.
1629 and 1623 are the specific interviews I referenced that I'll look up for you.
 
@wcorey

That could fit if Tesla was using anything close to end-to-end neural networks, but as far as we can tell they are using a lot of software algorithms complemented by an increasing range of task-specific neural networks.

It would thus not be just about training the networks after FC but on FC they would need to have all the features in place within this structure (and then of course continue training their range of networks on top of that).

It is not like they can simply implement a simple traditional software framework around a single NN, complete that, call it FC, and then just keep on training the NN. Tesla’s system — as far as we can tell so far — is far more a mix of components that need traditional implementing for it to be FC... as is everyone’s, really, so Tesla is not special or different in this regard at all.
Again... feature complete means development is done: initial coding is finished, the story is DONE. It doesn't mean it's bug-free, nor does it mean it's GA or even beta.
 
There is no such thing as L5 that has poor reliability and requires driver supervision.
Not to be argumentative but I would disagree with that around the edges.
Requiring driver supervision seems to me to be largely a regulatory mandate.
As for no such thing as Level 5 with poor reliability... how about a 16-year-old driving from his parents' home to the local video arcade? Next, drop him in Manhattan, NY and tell him to drive to NJ across the George Washington Bridge. Poor reliability, but over time he will handle it with relative ease. The kid is self-learning or supervised-learning; I guess they are largely one and the same. The Tesla is neither: it downloads its wisdom by way of the NN.
To what degree then is L5 a learned or acquired level of competence?
When I started out I was titled 'trainee'; some years later I reached Principal status: two ends of the learning spectrum. L5 in what scenario, going to the video arcade or navigating inner-city traffic? But the development org is done in either case. It's just experience, not coding.
 
Here is where I disagree with both of you. Feature Complete is a software development term, not a QA or support term. So when I hear Musk refer to it, he is talking about the software itself, not the knowledge base / Neural Network. It still needs training.
You can't distinguish between the NN & heuristics that way. They work together. After all, the NN is software (what Karpathy calls software 2.0).

End to end, it has to work, and FC also means it meets some kind of quality bar. The bar is not high enough for release, but good enough to pass a set of test suites, usually unit tests and some integration tests. Even in the old waterfall world, the test team wouldn't accept "code complete" software to test unless agreed-upon test cases were passing.

To put it another way: just finishing the "driving policy" is not FC. If the code says "if red light then stop" but the NN can hardly ever recognize red lights, it is not FC. But if the NN recognizes red lights only 90% of the time, it could be FC. In fact, I'd say Musk thinks of this as FC. He has tweeted and talked about how the car is getting better at left turns, recognizing curbs, etc.
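The 90%-red-light example can be sketched as two cooperating pieces: a hand-written policy rule (software 1.0) gated on an imperfect learned detector (software 2.0). This is a hypothetical illustration, not Tesla's actual code; `detect_red_light` just simulates a classifier with roughly 90% recall.

```python
import random

def detect_red_light(frame, recall=0.90):
    """Stand-in for an NN classifier that only fires ~90% of the time
    when a red light is actually present in the frame."""
    truly_red = frame.get("red_light", False)
    return truly_red and random.random() < recall

def driving_policy(frame):
    """Software 1.0: the explicit rule is feature complete even while
    the detector feeding it is still being trained toward more 9s."""
    if detect_red_light(frame):
        return "stop"
    return "proceed"
```

The point of the sketch: `driving_policy` needs no new feature code as the detector improves; closing the remaining 10% is training, not development.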

To what degree then is L5 a learned or acquired level of competence?
L5 is when the error rate is x times better than humans. Say, seven 9s.
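For scale, a quick back-of-envelope on what "seven 9s" would mean numerically (the event count is just an illustration, not a real fleet figure):

```python
# "Seven 9s" = a 99.99999% success rate, i.e. roughly one failure
# in every ten million events.
success_rate = 0.9999999
failures = (1 - success_rate) * 10_000_000  # failures per 10M events
print(round(failures))  # rounds to 1
```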

Now, solving an edge case would require both the NN to be trained and possibly a driving-policy exception. Let's say double parking is an edge case. It needs the NN to recognize double parking, and driving-policy changes to go around the parked cars, possibly crossing the center line. So, in most cases, solving an edge case (the "march of 9s") requires both software 1.0 and software 2.0 enhancements.

BTW, I tried to figure out whether "exponential" improvement is actually possible for FSD, like EM claims; his optimistic timelines for achieving FSD are based on it. NNs have the property of not being good enough for a long time (i.e. below the average human) and then suddenly getting much better than the best human. There are a lot of posts (and calculations) on this in the investor thread.

Tesla, TSLA & the Investment World: the 2019 Investors' Roundtable
 
Feature complete... To be honest, I am starting to lose faith in Elon. Where is Advanced Summon? Is it still coming? At the last quarterly call he mentioned that Tesla insurance would be introduced next month. We are still waiting. I asked Elon directly but got no reply.
 
You say that very authoritatively. Is it an authoritative statement or world according to EVNow?
This is, in my experience, the industry standard practice.

We are not talking about "code complete" from the 90s. We are talking about feature stories being complete. To complete a user story you need to satisfy the acceptance criteria. If the user story is competently written, it needs to be an end-to-end story (note the word user), not a "task" (I've seen my share of poorly written stories, though). So, if the story says stop at red light, it needs to include both the NN training for red lights and the driving policy of stopping the car when a red light is detected by the NN. Of course, some companies may have a higher-level feature called "stop at red light" and break that down into two separate stories that can be owned by two separate people.
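As a sketch of what an end-to-end acceptance test for such a story could look like (all names hypothetical; `perceive` and `plan` stand in for the NN and the driving policy):

```python
# Hypothetical acceptance test for the user story "car stops at red light".
# It exercises the feature end to end: perception output feeding policy,
# not just the hand-written rule in isolation.

def perceive(scene):
    """Stand-in for the trained NN: returns detected objects."""
    return {"red_light": scene.get("has_red_light", False)}

def plan(detections):
    """Stand-in for driving policy consuming NN detections."""
    return "stop" if detections["red_light"] else "proceed"

def test_stops_at_red_light():
    # Acceptance criterion: given a scene with a red light,
    # the end-to-end pipeline decides to stop.
    assert plan(perceive({"has_red_light": True})) == "stop"

def test_proceeds_on_green():
    assert plan(perceive({"has_red_light": False})) == "proceed"
```

A story-level test like this fails if either half is missing, which is what makes the story end to end rather than a task.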

BTW, there is a reason I'm saying Tesla does it this way too. It is clear some of these features are in Musk's (and the dev team members') cars, and he is actually testing them himself. He can't test the features if they are not end to end and not aiming for a reasonable quality standard.
 
Feature complete... To be honest, I am starting to lose faith in Elon. Where is Advanced Summon? Is it still coming? At the last quarterly call he mentioned that Tesla insurance would be introduced next month. We are still waiting. I asked Elon directly but got no reply.
Enhanced Summon coming next week!
I'm not convinced Enhanced Summon will come out before FSD is out of beta. I'm skeptical that someone 150 feet away from their car can ensure that it won't hit anything. People have trouble with that when they're in the car! And of course with regular summon people also manage to hit stuff...
 
So from some of the above I gather that "feature-complete" means to some folks that all the hardware is in place, and the coding that will teach the car how to drive is complete, but the teaching comes after.

So my question: How long will it take to teach the car how to drive? One year? Two years? Five years? An intelligent 16-year-old can learn to drive in a few hours behind the wheel with an instructor, and may reach average driving ability in a few years.

However, I do not have as much faith in neural networks and deep learning as some folks have. Learning a game with a very small number of precise rules is a far different task from any sort of interaction in the real world, where unexpected situations come up.

I think we'll have self-driving cars, but not next year. I think we'll see our cars getting new features gradually, and then getting better at those actions while we remain alert. I think we won't move from Level 2 to Level 3 for another year at least, and then only on limited-access roads; and NoA in the city will probably go from Level 2 to Level 3 only 4 or 5 years after it is introduced.
 
LOL. I've coded in assembly too.

(though I should note we were using code complete at Microsoft in early part of this decade too)
My experience is when people hide, it's generally because there is something there to hide. Yeah, Microsoft... I could tell you stories, but that would be before your time. You're referring to the Microsoft Press book, I assume. Nobody is or has been talking about that. You've answered everything I needed to know. We're done.
 
My experience is when people hide, it's generally because there is something there to hide. Yeah, Microsoft... I could tell you stories, but that would be before your time. You're referring to the Microsoft Press book, I assume. Nobody is or has been talking about that. You've answered everything I needed to know. We're done.
Dude, I worked at MS for over a decade. You assume too much.

People "hide" because we don't want to get harassed in real life by people who don't like our online comments.
 
I've completely lost track of what we're even arguing about. Here's a gem from Elon Musk's interview with Lex Fridman though.
Lex Fridman:
"Do you see Tesla's full self-driving as still for a time to come requiring supervision of the human being. So, its capabilities are powerful enough to drive but nevertheless requires a human to still be supervising, just like a safety driver is in other fully autonomous vehicles?"
Elon Musk:
"I think it will require detecting hands on wheel for at least six months or something like that from here. Really it's a question of, from a regulatory standpoint, how much safer than a person does Autopilot need to be for it to be okay to not monitor the car."
I'm looking forward to being able to nap while on Autopilot later this year.:p
I guess he did say "at least" six months, which could also mean never.
 
My experience is when people hide, it's generally because there is something there to hide. Yeah, Microsoft... I could tell you stories, but that would be before your time. You're referring to the Microsoft Press book, I assume. Nobody is or has been talking about that. You've answered everything I needed to know. We're done.
My experience is that when people switch from actual content to ad hominem attacks, it's because they've become desperate and can't come up with cogent arguments. BTW, I was programming a lot earlier than the '90s.
 