Welcome to Tesla Motors Club

Elon: "Feature complete for full self driving this year"

No, but FSD needs to be so close to 100% reliability that in practice, you can't tell the difference.

Although obviously the closer to 100% reliability, the better, it really only needs to surpass the reliability of human drivers before it becomes a net-safety benefit. At that point it's still probably best that the system runs supervised, but even unsupervised, it's as good, or better than human drivers, and thus saving lives. The question isn't just how many nines until it's indistinguishable from 100%, but how many nines until it's past the reliability of humans. (How many nines of reliability is the average human driver rated for? What about the average teen fresh off the driving test?)
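To put the "how many nines" question in concrete terms, here is a rough back-of-envelope sketch. The crash rate and trip length are illustrative assumptions, not official statistics:

```python
import math

def nines(failure_rate):
    """Number of "nines" of reliability implied by a failure probability."""
    return -math.log10(failure_rate)

# Illustrative assumptions, not official statistics:
# ~1 police-reported crash per 500,000 miles for an average human driver,
# and a nominal 10-mile trip, so we can talk about per-trip reliability.
crash_per_mile = 1 / 500_000
miles_per_trip = 10
failure_per_trip = crash_per_mile * miles_per_trip  # ~2e-5 per trip

print(f"average human driver: ~{nines(failure_per_trip):.1f} nines per trip")
```

With those made-up numbers, a typical human driver sits around 4.7 "nines" per trip, which gives a sense of the bar an unsupervised system would have to clear.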
 
I don't think the issue is whether Tesla can self-drive a car to a customer and then sell it to them. It's whether it's legal for Tesla to sell self-driving cars to customers.
The issue is that the manufacturer of the self driving system is responsible for at-fault accidents while the system is in use. I believe this is the regulation that Tesla would like to change and I don’t think that’s ever going to happen.
 
Hummm, so you are thinking they (OEM/ Tesla) would be subrogated as a party by the insured/ injured in any accident?
Yes, that's what current state regulations say. It would be ridiculous to hold a passenger liable if the car is driving itself!
It sounds like Tesla would like to have customers be responsible for monitoring the "self driving" system and be responsible for accidents. That seems horribly unsafe in an urban environment where you may have to react very quickly if the system makes a mistake. I think that if Tesla were to release such a system it would quickly be banned by regulators (the NHTSA).
 
Although obviously the closer to 100% reliability, the better, it really only needs to surpass the reliability of human drivers before it becomes a net-safety benefit. At that point it's still probably best that the system runs supervised, but even unsupervised, it's as good, or better than human drivers, and thus saving lives. The question isn't just how many nines until it's indistinguishable from 100%, but how many nines until it's past the reliability of humans. (How many nines of reliability is the average human driver rated for? What about the average teen fresh off the driving test?)
EAP is nowhere near as reliable as a human driver. The paradox is that as it gets more reliable, people will pay attention less, which could actually make it less safe. If they do release "FSD" it might be so terrifyingly bad that people will keep their hands on the wheel and pay attention. As you improve it, how do you get people to not become complacent?
 
The issue is that the manufacturer of the self driving system is responsible for at-fault accidents while the system is in use. I believe this is the regulation that Tesla would like to change and I don’t think that’s ever going to happen.

Having the car self-drive to the customer's home would streamline deliveries but it would need to be super solid. It would be incredibly costly, not to mention super embarrassing, if the car crashed before arriving at the customer's home.
 
Yes, that's what current state regulations say. It would be ridiculous to hold a passenger liable if the car is driving itself!
It sounds like Tesla would like to have customers be responsible for monitoring the "self driving" system and be responsible for accidents. That seems horribly unsafe in an urban environment where you may have to react very quickly if the system makes a mistake. I think that if Tesla were to release such a system it would quickly be banned by regulators (the NHTSA).

I hear what you are saying. No points on your license if you were not driving (and were operating within the bounds of the system). If the insurance covers injury and property damage, then the only reason to go after the OEM would be gross negligence or profit. So the insurance market will come up with new actuarial tables for each FSD system, likely starting at a high cost, then tapering down as more data becomes available.

Now, if we could ensure everyone was driving a Tesla, the injuries should be minor...
 
To me, the most interesting thing Elon said in the whole interview is (at 14:25):

“And we’re really starting to get quite good at not even requiring human labelling. Basically the person, say, drives the intersection and is thereby training Autopilot what to do.”

This hints that Tesla is using some form of imitation learning (also known as apprenticeship learning, or learning from demonstration).

It’s been reported that Tesla is taking a supervised learning approach to imitation learning. This approach is sometimes called behavioural cloning.

I’m starting to second guess whether Elon was talking about imitation learning for path planning and driving policy, or whether he was talking about weakly supervised learning for object recognition and other computer vision tasks. What he said in the interview is consistent with either interpretation. (See here for details.)
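For readers unfamiliar with the term: behavioural cloning treats logged (observation, driver action) pairs as an ordinary supervised-learning dataset. A minimal sketch with synthetic data follows; the two-feature observation and the linear "policy" are purely illustrative, not anything Tesla has disclosed:

```python
import numpy as np

# Synthetic "driving log": observations (e.g. lane offset, heading error)
# paired with the human driver's steering command. Both the features and
# the linear "policy" are made up for illustration.
rng = np.random.default_rng(0)
obs = rng.normal(size=(1000, 2))               # [lane_offset, heading_error]
true_policy = np.array([-0.8, -1.5])           # behaviour we want to clone
actions = obs @ true_policy + rng.normal(scale=0.01, size=1000)

# Behavioural cloning is just supervised regression on (obs, action) pairs:
# fit a model that maps what the car saw to what the human did.
weights, *_ = np.linalg.lstsq(obs, actions, rcond=None)
print(weights)  # recovers something close to true_policy
```

A real system would use a deep network over camera frames rather than a linear model over two features, but the training signal is the same: the human demonstration is the label.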
 
It completely amazes me how people can be so gullible. It's absolutely baffling how shifty Elon can be. Here's a guy who says he's embedded with the Autopilot team, that they report directly to him, that he meets with them every week and understands the work they are doing completely. Even if we do him a favor and disregard his vows in 2015 and 2016, this same guy has been saying since 2017, and again on multiple occasions in 2018, that sleeping in a Tesla, the Tesla taxi network, and Tesla self-deliveries would be ready in 2019, and that the march of 9s would be done by the end of 2019, repeating it as recently as last month's Q4 earnings call. Now, less than 30 days after the call, he is saying it's the end of 2020 and that 2019 is only for "feature complete".

Yet everyone believes him totally, completely.

Astonishing!

Elon Musk says Tesla's ridesharing network could be ready by 2019 | Daily Mail Online

Where are you getting this everyone bit from?

Elon told everyone who had HW2 that they'd get Sentry Mode, but that was bullcrap. So do you think any HW2 owner is actually going to believe word for word what Elon says?

Or HW2.5 owners that are aware of it? No

Do you think I believe it? I don't totally disbelieve it, but it's more like the march of .001's.

.001
.002
.003

Okay, that might be a slight exaggeration. But geesh, you really need to learn how to read a crowd, not just a few pets that keep you entertained.

We can still be excited by something Elon says, but skeptical. No one has pulled off FSD, and no one is really close except within limited whitelisted areas. Waymo still has safety drivers even over a small whitelisted area.

Everyone who predicted "any time soon" is way off. But estimates more than 2 years out are pointless. No one cares about an estimate that says 5+ years. I think that's why Elon always says 2-3 years. It keeps it fresh even if it's nonsense.

If he said 5-10 years no one would say anything about it.

Lots of people said we'd have L3 by now, but do we? Nope. Nothing here except L2+ and the plus is really being generous.

So now we say "in 2 years" yet again, and you'll post something about MobileEye to show it's 2 years for real this time. It's real because MobileEye is more grounded in their estimates. But who knows what crap will happen over those 2 years. Intel has bought and destroyed companies in less than 2 years. It gives Uber 2 more years to kill more people and ruin it for all of us.

I imagine it's going to be 2 years for Tesla as well (to get to real L3). Things do tend to converge on a point as the world is too competitive for one to break free. Like all the Tesla employees that jumped ship to start their own thing.

There are also regulatory elements to consider.
There are also infrastructure needs to make FSD safe and efficient, like car-to-car communication and infrastructure-to-car communication. Audi has this for stop lights, and I imagine it's pretty friggen sweet where it's supported. It allows a person driving an Audi to hit all the greens. That's like nirvana even without FSD.
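The kind of advisory such a traffic-light system could compute can be sketched as follows. This is a hypothetical illustration, not Audi's actual algorithm; real V2I signal-timing messages (e.g. SAE J2735 SPaT) carry far richer phase data than this:

```python
def advisory_speed_mph(distance_m, green_start_s, green_end_s,
                       min_mph=15, max_mph=45):
    """Suggest a speed (mph) that reaches the light while it is green.

    Hypothetical trigger logic for illustration only: try arriving at the
    start, middle, and end of the green window, and return the first
    candidate speed that falls inside the legal range.
    """
    mph_to_mps = 0.44704
    for arrive_at in (green_start_s, (green_start_s + green_end_s) / 2,
                      green_end_s):
        if arrive_at <= 0:
            continue
        mph = distance_m / arrive_at / mph_to_mps
        if min_mph <= mph <= max_mph:
            return round(mph, 1)
    return None  # no legal speed hits this green window

# 300 m to the light; green from 20 s to 40 s from now.
print(advisory_speed_mph(300, 20, 40))  # 33.6
```

The driver just sees a suggested speed on the dash; holding it means arriving as the light turns green.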
 
Like others, I like to walk and ride sometimes, so bumper cars are not really an option.
Correct, I have not forgotten pedestrians:
Yes, people will likely die anyway. And it may be different people than would have died without AP. The goal is for fewer people overall to die or be injured. The 0.0001% error rate does not directly correlate to fatalities. Tesla makes 3 of the safest cars you can buy, and AEB (automatic emergency braking) operates at a lower level than AP. Pedestrians are at the greatest risk for injury.

I guess I should have said, all those driving are driving a Tesla. Due to the frunk, Teslas are one of the best cars to be hit by, if you are going to get hit by a car.

Pedestrian density also correlates with lower speeds and sidewalks, which increase the error required for FSD to deviate into the walking path. Uncontrolled crosswalks (or jaywalking) are the most difficult scenarios, especially in places like Ann Arbor, where you must also stop for someone waiting to cross, not just someone in the process of crossing. They are adding more user-activated crossing signals, though, so that is good for both human and machine.

Bicyclists will be another big case to deal with. I think the current law is 5 feet of space when overtaking...
 
But I also want Tesla to make progress and eventually make FSD a reality.

Of course Tesla will make progress and of course somebody will eventually make FSD a reality -- my bet is it's not going to be Tesla btw, or at least they won't be anywhere near first. The question is whether the promises Tesla has made about current generation cars are going to be kept. And they won't be. And then the question for the courts to decide will be whether Tesla knowingly or negligently made false representations about a product they were offering for sale.
 
What’s interesting to me is that only Tesla has the hardware in place to take a machine learning approach to action.

We just need to rename this thread "wut".

Many companies have vehicles with way better cameras and associated hardware on them than Tesla. Anybody can take a machine learning approach to this problem, and everybody who's doing it seriously is employing machine learning in various ways. The ones trying end-to-end machine learning are well behind in performance compared to the hybrid approaches, so far. Seriously, you think Tesla understands machine learning better than Waymo somehow? wut.

According to Drago Anguelov, Waymo can’t collect enough data.

Drago Anguelov may be correct -- who knows -- but I'd be willing to bet that Waymo actually collects (and has stored) more data (in terms of actual camera frames) than Tesla has. Tesla throws away the vast majority of their data because it's simply too expensive to store it, and even more expensive to actually train DL models on vast amounts of data. Tesla doesn't have that kind of cash to burn. It's not about how much in principle you could have collected if you had bothered to -- the question is how much has Tesla actually collected and stored from their gazillions of "fleet miles"? If you listen to the people who have actually rooted the cars and looked at what's going on, the answer is very very little as a percentage of the total amount that they could have, in principle, collected.

Have you actually done the math on how expensive it is to collect, transmit, store, retrieve, and train (repeatedly during development/testing) DL models on exabytes of video frames? Aren't you some kind of financial analyst? Shouldn't you do the math on that?

Irrespective of timelines, Tesla appears to be taking a fundamentally different approach than all other companies — because only Tesla can.

No, Tesla's approach is in no way innovative and they are not the only ones trying it. Nvidia is also trying it, as just one example. It is quite truly a rather old idea which most people who actually know what they're talking about already concluded was not the best path forward in the near term.
 
Of course Tesla will make progress and of course somebody will eventually make FSD a reality -- my bet is it's not going to be Tesla btw, or at least they won't be anywhere near first. The question is whether the promises Tesla has made about current generation cars are going to be kept. And they won't be. And then the question for the courts to decide will be whether Tesla knowingly or negligently made false representations about a product they were offering for sale.

Congratulations on solving the Halting Problem! Might I borrow your time machine at some point? ;)


Many companies have vehicles with way better cameras and associated hardware on them than Tesla. Anybody can take a machine learning approach to this problem, and everybody who's doing it seriously is employing machine learning in various ways. The ones trying end-to-end machine learning are well behind in performance compared to the hybrid approaches, so far. Seriously, you think Tesla understands machine learning better than Waymo somehow? wut.

No one else sells hundreds of thousands of instrumented cars.

Have you actually done the math on how expensive it is to collect, transmit, store, retrieve, and train (repeatedly during development/testing) DL models on exabytes of video frames? Aren't you some kind of financial analyst? Shouldn't you do the math on that?

Collection: free due to car purchase. Transmission: free via the owner's Wi-Fi or the Internet connection at Fremont, negligible. Storage: negligible, since redundant data is eliminated. Training: wish I knew; either AWS or custom, but likely much lower than the amortized cost of EAP.

No, Tesla's approach is in no way innovative and they are not the only ones trying it. Nvidia is also trying it, as just one example. It is quite truly a rather old idea which most people who actually know what they're talking about already concluded was not the best path forward in the near term.

Again, who else has hundreds of thousands of instrumented cars on the road with people driving them for free?
 
Of course Tesla will make progress and of course somebody will eventually make FSD a reality -- my bet is it's not going to be Tesla btw, or at least they won't be anywhere near first. The question is whether the promises Telsa has made about current generation cars are going to be kept. And they won't be. And then the question for the courts to decide will be whether Tesla knowingly or negligently made false representations about a product they were offering for sale.

Well, if Tesla can deliver this "feature complete" version of autopilot and if it can really handle traffic lights, intersections etc and finding a parking space with no driver input, then I think that will go a long way to quieting the complaints. I think most owners will just be happy to have a car that can finally do what we saw in the infamous "FSD video".
 
Collection: free due to car purchase. Transmission: free via the owner's Wi-Fi or the Internet connection at Fremont, negligible. Storage: negligible, since redundant data is eliminated. Training: wish I knew; either AWS or custom, but likely much lower than the amortized cost of EAP.

Show me the numbers please. How much "redundant" data is eliminated and how is it identified as "redundant"? What does "redundant" even mean? How much data are they starting with? How much does a petabyte of storage cost on S3? How many GPU-hours does it take to train a DL net on that many camera frames, and what's the cost per GPU-hour on AWS? How much does it cost to label the data for training?

(Yes, I know, Elon is spewing some BS about auto-labeling but that is not really a thing. Imitation learning sure, but to get those bounding boxes around objects requires bounding boxes to imitate, i.e., labels. Imitation learning may be applicable to many parts of the problem but not all of it, unless you go end-to-end imitation learning somehow, which would be quite... something.)
 
Show me the numbers please. How much "redundant" data is eliminated and how is it identified as "redundant"? What does "redundant" even mean? How much data are they starting with? How much does a petabyte of storage cost on S3? How many GPU-hours does it take to train a DL net on that many camera frames, and what's the cost per GPU-hour on AWS? How much does it cost to label the data for training?

(Yes, I know, Elon is spewing some BS about auto-labeling but that is not really a thing. Imitation learning sure, but to get those bounding boxes around objects requires bounding boxes to imitate, i.e., labels. Imitation learning may be applicable to many parts of the problem but not all of it, unless you go end-to-end imitation learning somehow, which would be quite... something.)

If the data does not contain sudden braking, acceleration, or steering input, ignore it. If it does, check it out. If it is interesting, add it to the training/test set.
AWS storage cost (over 500 TB/month tier): $0.021/GB-month.
Commercial hard drive cost: $50 for 1 TB = $0.05/GB total.
(The rest you can Google.)
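Plugging the quoted S3 rate into a quick back-of-envelope gives a feel for the scale; the retained-data volume below is purely an assumption for illustration:

```python
# Back-of-envelope storage cost using the S3 tier quoted above
# ($0.021 per GB-month for the >500 TB tier). The retained-data
# volume (10 PB) is an assumption, not a known Tesla figure.
s3_per_gb_month = 0.021
stored_pb = 10
stored_gb = stored_pb * 1_000_000        # 1 PB = 1e6 GB (decimal units)
monthly = stored_gb * s3_per_gb_month
print(f"${monthly:,.0f}/month, ${monthly * 12:,.0f}/year")
# $210,000/month, $2,520,000/year
```

So even a relatively modest 10 PB of retained clips runs about $2.5M/year at list price, before any labeling or training costs.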
 
If the data does not contain sudden braking, acceleration, or steering input, ignore it. If it does, check it out. If it is interesting, add it to the training/test set.

You're missing a vast quantity of really important data here. Driving is more than just sudden reactions. In fact sudden reactions are a small slice of the problem. Not overreacting is a huge problem. If you only have training data from when things are going wrong, your deep learning model will learn that things are always going wrong and it will always overreact. You have to balance the slam-on-the-brakes events with the "don't worry that pedestrian is patiently waiting for the traffic light to change before crossing the street" events.
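One standard remedy for the imbalance described above is to reweight (or resample) so that rare event clips and uneventful clips contribute equally to the training loss. A minimal sketch with synthetic labels; the 2% event rate is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic clip labels: 1 = "driver reacted suddenly", 0 = "uneventful".
labels = (rng.random(10_000) < 0.02).astype(int)  # ~2% sudden events

# Reweight so both classes contribute equally to the training loss,
# rather than keeping only the event clips (which would teach the model
# that something is always going wrong).
counts = np.bincount(labels, minlength=2)
weights = np.where(labels == 1, 0.5 / counts[1], 0.5 / counts[0])

# Each class now carries exactly half of the total weight.
print(weights[labels == 0].sum(), weights[labels == 1].sum())
```

The same idea works by sampling: draw event and non-event clips at equal rates when building training batches.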

AWS storage cost (over 500 TB/month tier): $0.021/GB-month.
Commercial hard drive cost: $50 for 1 TB = $0.05/GB total.
(The rest you can Google.)

I'm sorry, you haven't added any of this up and you haven't addressed labeling and training costs.

I will give you a hint: The number is at least in the tens of millions per year, and probably in the hundreds of millions annually -- if they are actually using more than a tiny fraction of the pixels that come out of their cameras in the "fleet". Which is why they're only using a tiny fraction of those pixels for training and testing their models. Which means they don't really have any more data than Waymo does; and I suspect the quality of their data is quite a bit worse than Waymo's.
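To see why only a tiny fraction of fleet video could plausibly be kept, here is a rough estimate of raw camera output. Every figure below (camera count, frame rate, resolution, hours driven, fleet size) is an assumption for illustration, not a Tesla spec:

```python
# Rough estimate of raw camera output per car. Every figure below
# is an assumption for illustration, not a known specification.
cameras = 8
fps = 36
bytes_per_frame = 1280 * 960 * 1.5          # ~1.2 MP at ~12 bits/pixel
hours_per_day = 1
fleet_size = 500_000

bytes_per_day = cameras * fps * bytes_per_frame * 3600 * hours_per_day
tb_per_car_year = bytes_per_day * 365 / 1e12
fleet_pb_year = tb_per_car_year * fleet_size / 1000
print(f"~{tb_per_car_year:.0f} TB/car/year raw; "
      f"~{fleet_pb_year:,.0f} PB/year across the fleet")
```

Under these assumptions a single car emits hundreds of terabytes of raw video per year, and the fleet total lands in the hundreds of exabytes, which is why aggressive triage at the car is unavoidable.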
 
You're missing a vast quantity of really important data here. Driving is more than just sudden reactions. In fact sudden reactions are a small slice of the problem. Not overreacting is a huge problem. If you only have training data from when things are going wrong, your deep learning model will learn that things are always going wrong and it will always overreact. You have to balance the slam-on-the-brakes events with the "don't worry that pedestrian is patiently waiting for the traffic light to change before crossing the street" events.

Sure, but the NN can also compare the action it would take against the driver's and trigger an upload if they differ. I was merely giving a simple view of why they do not need all data from all cameras at all times.
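That disagreement trigger could be sketched like this; the function name, the single-scalar steering comparison, and the threshold are all hypothetical simplifications:

```python
def should_upload(model_steering, driver_steering, threshold=0.2):
    """Flag a clip when the model's proposed steering disagrees with the
    human driver's by more than `threshold` (hypothetical trigger logic).
    """
    return abs(model_steering - driver_steering) > threshold

# Model wanted to go straight, driver swerved: interesting, upload it.
print(should_upload(0.0, 0.5))   # True
# Model and driver roughly agree: redundant, discard.
print(should_upload(0.1, 0.15))  # False
```

The attraction of this scheme is that it automatically surfaces exactly the situations the current model handles worst, without needing a human to decide what counts as "interesting".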

I'm sorry, you haven't added any of this up and you haven't addressed labeling and training costs.

And you have not paid me to do so.

I will give you a hint: The number is at least in the tens of millions per year, and probably in the hundreds of millions annually -- if they are actually using more than a tiny fraction of the pixels that come out of their cameras in the "fleet". Which is why they're only using a tiny fraction of those pixels for training and testing their models. Which means they don't really have any more data than Waymo does; and I suspect the quality of their data is quite a bit worse than Waymo's.

200k Model 3s sold in the last year+; say a 50% take rate on $5k EAP = $500 million for AP development over 2 years (ignoring S/X).

The reason for downsampling is that HW2 can't handle the data rate. That doesn't mean the uploads were sub-sampled or cropped. The FSD development is using full-size, full-frame-rate video from all cameras.