Elon: "Feature complete for full self driving this year"

Going to be interesting to see how the "training" model goes for city driving on vision alone. On the highway it's straightforward: review video from a driver takeover and refine. In the city that's a lot tougher, since decisions have to be quicker and takeovers may be unnecessary (a false positive) or too late (the driver didn't react in time), and both will cause the model to train poorly.
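To make the labeling problem concrete, here is a minimal sketch of what a takeover-triage step could look like. The event fields, thresholds, and label names are assumptions for illustration only, not anything Tesla has described:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TakeoverEvent:
    """One driver takeover as it might be logged by the fleet (hypothetical schema)."""
    time_to_hazard_s: Optional[float]  # seconds until the nearest detected hazard; None if none was found
    planner_confidence: float          # 0..1, how confident the planner was in its own trajectory

def label_takeover(event: TakeoverEvent) -> str:
    """Rough triage of a takeover into a training label.

    'false_positive'  - driver intervened although no hazard was near and the planner was confident,
                        so the clip should not be used to penalize the planner.
    'missed_reaction' - driver intervened late relative to a real hazard; a genuine failure example.
    'ambiguous'       - everything else; city driving produces many of these, which is what
                        makes the training signal noisy.
    """
    if event.time_to_hazard_s is None and event.planner_confidence > 0.8:
        return "false_positive"
    if event.time_to_hazard_s is not None and event.time_to_hazard_s < 1.5:
        return "missed_reaction"
    return "ambiguous"

# A takeover with no detected hazard and a confident planner is most likely a false positive.
print(label_takeover(TakeoverEvent(time_to_hazard_s=None, planner_confidence=0.9)))
```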
 
Elon specifically said the entire AP code base was getting a fundamental rewrite (which he claimed was "close" to being done).
In the software business it's well known that the last 10% of the code takes 90% of the time.
Closer to home, whenever I tell my wife that I've just got to fix this last bug, she laughs and goes to bed.
 
Thus the driver-supervised beta releases.
However, you can't just ship something when it is safety critical.

That is an old industry proverb. It perhaps has its origins with Voltaire, who quoted the Italian saying "Il meglio è l'inimico del bene" ("The best is the enemy of the good"), better known in English as "Perfect is the enemy of good".

Modern society sometimes forgets this. Nothing made by humans is perfect or can be. There are three fundamental reasons for it:
1) You can always improve something. Don't take my word for it; look at the mousetrap industry.
2) Not everybody agrees on what "perfect" really should be.
3) The systems surrounding a project are changing so the perfect list of features is dynamic. Some become unnecessary, and new ones must be added.

FSD is a project, so it cannot be perfect. So we need to define what 'good enough' is by law. Tesla is a company that seems to understand this more than most. People complain about the imperfection of Autopilot when it is the best driver's aid for all conditions on sale today.

The Autopilot crashes are unfortunate, but they are to be expected. The fact that these are generating lawsuits is detrimental to the goal of better automotive safety. The lawsuits are based on the premise that AP is good but not perfect, that Tesla knew it, and that Tesla should therefore have sold only something perfect, which doesn't exist and can never exist.
 

These are great points. There is an excellent RAND study called "The Enemy of Good" that makes the case that waiting until self-driving or other highly automated vehicles are much better than an average human driver, instead of just "better", could cost hundreds of thousands of lives. Deploying Autonomous Vehicles Before They're Perfect Will Save More Lives

Ironically, we seem to be heading down a path of killing tens or hundreds of thousands of people in the name of safety.
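To see why the timing matters, here is a toy version of that cumulative-fatalities argument. Every number in it (the human fatality rate, fleet mileage, improvement speed, and waiting period) is an illustrative assumption, not a figure from the RAND study:

```python
# Toy model of the "enemy of good" trade-off; all numbers below are made up for illustration.
HUMAN_RATE = 1.1          # human-driven fatalities per 100 million miles (rough U.S. order of magnitude)
MILES_PER_YEAR = 1.0e11   # hypothetical annual miles the automated fleet would cover
HORIZON_YEARS = 30

def total_fatalities(deploy_year: int, av_rate_at_deploy: float, improvement_per_year: float) -> float:
    """Cumulative fatalities over the horizon: human rate until deployment, an improving AV rate after."""
    total, av_rate = 0.0, av_rate_at_deploy
    for year in range(HORIZON_YEARS):
        if year < deploy_year:
            rate = HUMAN_RATE
        else:
            rate = av_rate
            av_rate *= (1 - improvement_per_year)  # assume fleet learning only happens once deployed
        total += rate * MILES_PER_YEAR / 1e8
    return total

# Deploy immediately at merely 10% better than a human, vs. wait 25 years for 75% better.
print(f"deploy slightly better now: {total_fatalities(0, 1.0, 0.20):,.0f} fatalities over 30 years")
print(f"wait for much better:       {total_fatalities(25, 0.275, 0.20):,.0f} fatalities over 30 years")
```

Under these made-up assumptions, deploying the merely-better system early ends up with far fewer cumulative fatalities, because the years spent waiting are all driven at the human rate.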
 
That is an old industry proverb. It perhaps has its origins with Voltaire: "Perfect is the enemy of good". ...
Indeed, I was thinking of the recent issues at Boeing, which is a different realm of software than FSD. I dropped my reference to them, which I now realize totally changed the tone of my statement about just releasing it.
Further AP features / FSD with human override can't come soon enough. FSD without a steering wheel needs to be right before release.
 
These are great points. There is an excellent RAND study called "The Enemy of Good" ...
They still advocate waiting until self-driving cars are better than humans! Does anyone seriously think that Tesla is close to having FSD drive better than a human?
 

Three things I'm confident of:

(1) I'm confident Tesla will not release FSD features until they have data showing that those features make driving safer with human supervision, which is what they clearly did with Autopilot.

(2) I'm confident they won't relieve the need for supervision until there is data showing unsupervised FSD is safer than a human driver.

(3) I'm confident Tesla critics will go crazy over every accident or mishap, while ignoring the much higher rate of accidents, injuries and fatalities in human-driven cars.

If people actually cared about reducing accidents and saving lives, they should be talking about how to get life-saving driver assistance technology like this into every car, instead of fearmongering and attempting to slow down its adoption.
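As a concrete picture of what "data showing it is safer" would have to look like, here is a minimal sketch of the underlying rate comparison, treating crash counts as Poisson-distributed. The mileage and crash numbers are invented for illustration; the hard part in practice is making the two exposures comparable (road types, weather, driver demographics):

```python
import math

def crash_rate_ci(crashes: int, miles: float, z: float = 1.96):
    """Approximate 95% confidence interval for crashes per million miles,
    treating the crash count as a Poisson-distributed variable."""
    rate = crashes / miles * 1e6
    half_width = z * math.sqrt(crashes) / miles * 1e6
    return rate - half_width, rate + half_width

# Invented numbers: 300 crashes over 4B supervised miles vs. 2,000 crashes over 10B human-driven miles.
fsd_low, fsd_high = crash_rate_ci(300, 4e9)
human_low, human_high = crash_rate_ci(2000, 10e9)
print(f"supervised FSD: {fsd_low:.3f}-{fsd_high:.3f} crashes per million miles")
print(f"human baseline: {human_low:.3f}-{human_high:.3f} crashes per million miles")
# A "safer" claim is only convincing if the intervals don't overlap AND the exposures are comparable.
print("non-overlapping intervals:", fsd_high < human_low)
```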
 
That is an old industry proverb. It perhaps has its origins with Voltaire: "Perfect is the enemy of good". ...

While I do agree that we should not be so perfectionist that we delay FSD too long, I do think that we should expect "reasonable" reliability and safety. There are issues that are outside of Tesla's control that we cannot reasonably expect Tesla to fix perfectly. But there are issues that are within Tesla's control, that I think we can reasonably expect Tesla to get right. For example, in the case of autonomous driving, we cannot reasonably expect the car to have zero accidents. But we can reasonably expect it not to hit something obvious (like a stopped truck in the middle of the road on a clear day) or not to change lanes when it is obviously unsafe or not to run red lights all the time. We can also expect Tesla to take reasonable steps to ensure driver attention or to warn the driver when there is a problem.

Now, I realize that people will disagree on what is "reasonable". It might be more of a legal question. It might be something for government regulators to figure out and define. But once we have a clear minimum standard then we can expect autonomous cars to meet that minimum standard.
 
(1) I'm confident Tesla will not release FSD features until they have data showing that those features make driving safer with human supervision, which is what they clearly did with Autopilot.
This I'm not so confident about. Their attitude is basically "trust us, it's safer". I'd like to see something much more rigorous than their safety report.
(2) I'm confident they won't relieve the need for supervision until there is data showing unsupervised FSD is safer than a human driver.
I agree. There is no way to fake this.
(3) I'm confident Tesla critics will go crazy over every accident or mishap, while ignoring the much higher rate of accidents, injuries and fatalities in human supervised cars.
I agree. That's why they need to release a more rigorous study than their safety report. I don't believe city NoA will improve safety but I'd be happy to be proven wrong.
 
Right now, based on my limited experience, before even considering bumping up to the next SAE level, I think they need to work on:

1) Predictive course steering input. The car starts to turn in too late. I could not figure out why AP kept shutting off Autosteer, and then I realized I was giving the wheel input before AP sees the curve. Just like the advice to human drivers: "Look farther ahead." (A steering sketch follows below.)
2) Adding a seat shaker so bad drivers will also be warned of danger.
3) Addressing gaps in the lane markers, such as those found at lane-marking transitions. Again: "Look far ahead."

I think just these 3 would have prevented the Mountain View fatality and perhaps others. If the car were human, I'd say it has object fixation. But in reality, they aren't processing predictive motion, and they aren't doing enough to alert distracted drivers.
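The "look farther ahead" idea in point 1 is essentially what a pure-pursuit style steering law does: aim at a point a fixed distance ahead on the planned path, so a longer look-ahead begins the turn-in sooner. A minimal sketch of the textbook geometry (this is not Tesla's controller; the wheelbase and path are made up):

```python
import math

def pure_pursuit_steering(path_xy, lookahead_m: float, wheelbase_m: float = 2.9) -> float:
    """Steering angle (radians) toward a point roughly `lookahead_m` ahead on the planned path.

    Vehicle frame: origin at the rear axle, x forward, y to the left.
    A larger lookahead starts the turn-in earlier and smooths the response,
    which is the "look farther ahead" advice expressed as a controller.
    """
    # Pick the first path point at least `lookahead_m` away (or the last point if none is).
    target = next((p for p in path_xy if math.hypot(p[0], p[1]) >= lookahead_m), path_xy[-1])
    x, y = target
    ld = math.hypot(x, y)
    # Pure pursuit: curvature = 2*y / ld^2, steering angle = atan(wheelbase * curvature)
    curvature = 2.0 * y / (ld * ld)
    return math.atan(wheelbase_m * curvature)

# Example: the road is straight for 20 m, then curves left; path points sampled every 2 m.
path = [(d, 0.0 if d < 20 else 0.01 * (d - 20) ** 2) for d in range(0, 60, 2)]
print(f"lookahead 10 m -> {math.degrees(pure_pursuit_steering(path, 10.0)):.2f} deg (hasn't seen the curve yet)")
print(f"lookahead 30 m -> {math.degrees(pure_pursuit_steering(path, 30.0)):.2f} deg (already turning in)")
```

With the short look-ahead the controller is still commanding zero steering when the curve is only 20 m away, which is the "turns in too late" behavior described above.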
 
This I'm not so confident about. Their attitude is basically "trust us, it's safer". I'd like to see something much more rigorous than their safety report.

I agree. There is no way to fake this.

I agree. That's why they need to release a more rigorous study than their safety report. I don't believe city NoA will improve safety but I'd be happy to be proven wrong.


Demanding a higher level of proof for automated driving technology than we require for cars that don't have it is another form of putting in place hurdles that slow down progress. The data that Tesla has released shows that Teslas with and without Autopilot get in far fewer accidents than average cars. If you want more data, don't buy the car. If you want safer highways, force other car brands whose cars get in accidents at far higher rates to adopt and improve their automated driver assistance, so the fools driving them stop cutting me off and drifting into my lane. And force them to get cracking on stopping for red lights, as running red lights is a major cause of serious accidents.
 