
Elon: "Feature complete for full self driving this year"

Elon would like to have a word with you about that claim. On the Road to Full Autonomy With Elon Musk — FYI Podcast (rewind to 13:40–14:30)

I find it very hard to take anything said in that "interview" seriously, as the two breathless sycophants without technical understanding spent most of their time pumping words of self-praise into Musk's mouth. There was not a single follow-up to pin anything down.

e.g. @12:20
ARK: What makes you think this amazing technical feat [L4 sleep-in-the-back Autonomous Driving] is a solvable problem and why should Tesla be the one to solve it?
EM: I am an Engineer, I wrote software for like 15 or 20 years, I mean like I understand software at quite a fundamental level, I know what we need to solve to make FSD feature-complete, I think we've got an extremely good technical team, I think we really have the best people, it's an honour to work with them, and I am certain that we will get this done this year.

To which the obvious follow-up would have been "What has caused you to gain a better understanding now of what is needed for FSD than you had in 2016, 17 and 18, when you also made hugely optimistic predictions of imminent success?"
 
He said those words, unprompted, at the times I outlined, if I remember correctly. They are very precise words (which is sort of unusual).

That's true, so are they to be taken as likely more accurate than the usual sales puffery then?

To clarify, I'm not saying I disbelieve everything or any particular thing he said but that the exchange left much to be desired.

I think Elon actively avoids putting himself in front of anyone likely to conduct a competent interview, so he can easily control the flow around uncomfortable areas.

Hence the appearances with Marques Brownlee, Joe Rogan, etc.
 
Certainly I agree they do not send back all data, as streaming 8 or 9 cameras would instantly saturate the LTE upstream and create a storage nightmare anyhow. So yes, it has to be a carefully selected subset, but what the selection criteria are, and how (or if) that data is ultimately fed into training the next release, is currently open to speculation; e.g. the spare processor on HW2.5 could have been dedicated throughout to filtering for the most significant/useful data to transmit.

You misunderstand. I would expect them to use strict criteria to limit when they send back data AND sub-sample it on top of that. :)
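Purely to make the "strict criteria plus sub-sampling" idea concrete, here is a rough sketch of what such a filter could look like; the trigger names, percentages and frame strides below are invented for illustration, not anything known about Tesla's actual pipeline.

```python
# Hypothetical sketch of trigger-based selection plus sub-sampling of fleet data.
# None of these trigger names or thresholds come from Tesla; they are invented
# to illustrate the "strict criteria AND sub-sample" idea.
import random

TRIGGERS = {
    "driver_disengagement",   # human overrode Autopilot
    "hard_brake",             # deceleration above some threshold
    "vision_radar_disagree",  # sensor fusion conflict
    "rare_object_class",      # detector flagged a low-confidence object
}

UPLOAD_FRACTION = 0.02        # keep only ~2% of triggered clips
CLIP_FRAME_STRIDE = 5         # and only every 5th frame within a kept clip

def select_for_upload(clip):
    """Return the frames worth uploading from a recorded clip, if any."""
    # Criterion 1: only clips that fired at least one interesting trigger.
    if not (clip["events"] & TRIGGERS):
        return []
    # Criterion 2: even interesting clips are randomly sub-sampled to respect
    # the LTE upstream and the server-side storage budget.
    if random.random() > UPLOAD_FRACTION:
        return []
    # Criterion 3: within a kept clip, thin the frames temporally.
    return clip["frames"][::CLIP_FRAME_STRIDE]

clip = {"events": {"hard_brake"}, "frames": list(range(300))}  # 10 s at 30 fps
print(len(select_for_upload(clip)), "frames queued (or 0 if sub-sampled away)")
```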
 
So Tesla actually mentions in the Model 3 lease agreement that you cannot buy the vehicle at the end of the lease because Tesla will use the vehicle for the Tesla Network:

"Please note, customers who choose leasing over owning will not have the option to purchase their car at the end of the lease, because with full autonomy coming in the future via an over-the-air software update, we plan to use those vehicles in the Tesla ride-hailing network"
Tesla launches Model 3 leases, will keep cars for autonomous Uber-like service after term

If Tesla is bluffing about the Tesla Network that is one heck of a bluff! Tesla is certainly acting like FSD is truly coming soon.
 

@Bladerskb has a nice theory about a Level 2 Tesla Network in the Model S forum, check it out.

But beyond all that speculation what does it actually mean when Tesla says that? Does it actually commit Tesla to anything?

No, other than not giving out a residual or after-lease pricing commitment, which actually gives them more flexibility... not less. They can still change their mind and offer it to the lessee after the lease. They can put those cars into the CPO fleet. They can sell them to a partner for offloading...

In reality, it could be just a PR move with nothing more to back it up than AP2 had in 2016 (boost the stock price now, worry about delivering later). Or it could be a move that gives them more favourable leasing terms money-wise. Or it could be a Level 2 move in the nature of @Bladerskb's idea. Or it could be aspirational, a stretch goal. Or it could be your optimistic interpretation... We don't really know, do we?
 

No, we don't really know. Yes, there could be other explanations for it. But the fact that Tesla would actually say "we can't let you buy the car after the lease because we have plans to use it in the Tesla Network", and mention full autonomy while doing so, I just find interesting. It makes me go "hmm?!" That's all.
 

I bet Tesla wants you (and the market) to go "hmm?!"

In reality, all Tesla is saying is that you may not have a chance to buy the car, so you must agree to that. The rest is PR at this point.

After all, in 2016 Tesla sold AP2 cars by saying that with FSD you could not make money outside of the Tesla Network. Still, we know what the status of those cars was back then. Most of the cars sold then will have ended their leases by the time the Tesla Network appears even in theory. :)
 
Did anyone consider the possibility that when Elon / Tesla say they need "billions / millions" of miles of validation that what they really are speaking to is probability? In other words, the only data Tesla cares about right now would be "edge cases", and in particular, extremely rare ones.
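For a back-of-the-envelope sense of why rare-event validation pushes the mileage numbers so high (the target rates below are assumptions, not Tesla or regulator figures): with zero observed failures, the statistical "rule of three" says you need roughly 3 divided by the target rate of exposure to claim that rate with ~95% confidence.

```python
# Back-of-the-envelope: how many miles of failure-free driving are needed to
# bound a rare failure rate with ~95% confidence, using the "rule of three"
# (n ~ 3 / rate when zero events are observed). The target rates below are
# illustrative assumptions, not Tesla or NHTSA figures.
targets = {
    "1 incident per 1 million miles":   1e-6,
    "1 incident per 10 million miles":  1e-7,
    "1 incident per 100 million miles": 1e-8,   # order of the US fatality rate
}

for label, rate_per_mile in targets.items():
    miles_needed = 3 / rate_per_mile
    print(f"{label}: ~{miles_needed:,.0f} failure-free miles for 95% confidence")
```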

This has been discussed many times before. When you train an ML algorithm, you are teaching it about a statistical distribution of probabilities. If your training set mostly contains the edge cases -- pedestrians unexpectedly jumping out in front of the car for example -- the ML algorithm will learn that it is very likely that pedestrians will jump out in front of the car, because in the training data this has a high probability. The car will then brake hard for every pedestrian walking along the sidewalk when you put it in the real world.

The really, really, really hard part about L3+ autonomy is not handling the edge cases on their own. It's handling the edge cases while also behaving reasonably in the normal cases. You need a lot of "normal" data in addition to edge cases if you are going to rely on deep learning to learn to drive, so that the ML algorithm can learn the subtle cues that differentiate normal from abnormal.

Better yet, get some lidar, more radar, better cameras, and much faster compute hardware if you're serious about L3+ autonomy.
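To illustrate the prior-mismatch point with a toy example (made-up data, scikit-learn, nothing to do with Tesla's actual networks): a classifier trained on a set where half the pedestrians step into the road assigns an ordinary sidewalk pedestrian a far higher probability of doing so than the real-world base rate would justify.

```python
# Toy illustration (invented data) of the prior-mismatch problem: a classifier
# trained on an edge-case-heavy set learns that "pedestrian enters road" is
# common, and over-predicts it on realistically distributed scenes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_pedestrians(n, frac_entering):
    """One feature: lateral distance from the curb (m). Entering pedestrians are closer."""
    n_enter = int(n * frac_entering)
    enter = rng.normal(0.3, 0.4, n_enter)        # about to step into the road
    stay  = rng.normal(1.5, 0.4, n - n_enter)    # walking along the sidewalk
    X = np.concatenate([enter, stay]).reshape(-1, 1)
    y = np.concatenate([np.ones(n_enter), np.zeros(n - n_enter)])
    return X, y

# Training set dominated by edge cases (50% entering); the street is closer to ~1%.
X_train, y_train = make_pedestrians(10_000, frac_entering=0.5)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

typical_sidewalk_ped = np.array([[1.0]])   # 1.0 m from the curb
p = clf.predict_proba(typical_sidewalk_ped)[0, 1]
print(f"Predicted P(enters road) for an ordinary sidewalk pedestrian: {p:.2f}")
# Trained on a 50/50 set, this comes out around 0.3, far above what a ~1%
# real-world base rate would justify -- the "brake for every pedestrian" failure mode.
```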
 

Disagree, action is not based on probability, it is based on the current data (camera frames). Pedestrian on sidewalk != pedestrian in road. However, a pedestrian at the side might increase the chances of the vehicle shifting in the lane or slowing, and a pedestrian moving toward the road even more so.

You need both the normal and edge cases in your training data and in your testing (validation) data, in sufficient quantities and variation, to create a model that is neither overfitted nor underfitted.
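As a generic illustration of keeping both case types represented (not anything specific to any AV stack), a stratified split preserves the normal/edge-case ratio in training and validation alike:

```python
# Generic sketch: keep the normal/edge-case mix identical in the training and
# validation splits, so evaluation reflects both behaviours. Nothing here is
# specific to any particular AV pipeline; the counts are invented.
import numpy as np
from sklearn.model_selection import train_test_split

n_normal, n_edge = 95_000, 5_000                  # illustrative counts
X = np.random.rand(n_normal + n_edge, 16)         # stand-in feature vectors
y = np.array([0] * n_normal + [1] * n_edge)       # 0 = normal scene, 1 = edge case

X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
print("edge-case share: train %.3f, val %.3f" % (y_tr.mean(), y_val.mean()))
# Both splits keep the 5% edge-case share, so a model cannot look good simply
# because the validation set happens to lack (or over-contain) the rare cases.
```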
 
Disagree, action is not based on probability, it is based on the current data (camera frames).

A lot of your post made sense, but this is as far from making sense as I've seen from you. Driving is absolutely based on probabilities and any autonomous driving system must have a way of predicting probable future trajectories of other actors in some way. This may be part of an end-to-end big black box deep NN, or it may include some "SW1.0" doing it the old fashioned way, or more likely some of each, but if you cannot predict what other actors are going to do you are dead in the water. Realistic prediction is necessarily probabilistic. If all you do is extrapolate ballistic trajectories then you will both brake hard constantly for no reason and fail to brake when you should, because other cars, cyclists, pedestrians, etc. do not follow ballistic trajectories.
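A minimal sketch of that difference, with invented numbers and manoeuvre priors: ballistic extrapolation commits to a single future position, while even a crude probabilistic predictor assigns some mass to the rare-but-dangerous manoeuvre and lets the planner respond to the risk.

```python
# Toy contrast between ballistic extrapolation and a very simple probabilistic
# prediction for a pedestrian near the curb. Numbers and manoeuvre priors are
# invented for illustration only.
import numpy as np

horizon = 2.0                                # seconds of look-ahead
pos = np.array([1.5, 0.0])                   # x = lateral offset from ego corridor (m)
vel = np.array([0.0, 1.4])                   # currently walking parallel to the road

# 1) Ballistic: a single deterministic future position.
ballistic_end = pos + vel * horizon

# 2) Probabilistic: a handful of hypothesised manoeuvres with prior weights.
maneuvers = {
    "keep_walking":   (np.array([0.0, 1.4]), 0.90),
    "stop":           (np.array([0.0, 0.0]), 0.07),
    "step_into_lane": (np.array([-1.0, 1.0]), 0.03),   # rare but dangerous
}

p_conflict = 0.0
for name, (v, prior) in maneuvers.items():
    end = pos + v * horizon
    enters_lane = end[0] < 0.5               # crude "inside ego corridor" test
    p_conflict += prior * float(enters_lane)

print("ballistic endpoint:", ballistic_end, "-> never predicts a conflict")
print(f"probabilistic P(conflict within {horizon}s): {p_conflict:.2f}")
# Even a 3% chance of the pedestrian stepping out may justify easing off early,
# something a single ballistic extrapolation cannot express.
```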
 
Seriously!?

>"Car tried to turn left into oncoming traffic, driver had to avoid collision," wrote a customer of Waymo, the unit of Alphabet, in early March after taking a ride in one of the company's experimental self-driving taxis in suburban Phoenix. The customer was referring to the human backup driver who sits behind the wheel and is supposed to take over if a potential safety problem arises.

With Waymo Robotaxis, Customer Satisfaction Is Far From Guaranteed

As Tim Cook said, this is the mother of all AI problems. Even Krafcik admits that it will take a long time to solve all the problems. Remove your lidar tin foil hat; AVs are hard, and no, Aptiv/Lyft "L5" are still L2 systems on steroids. Waymo's guardian angels might be the closest thing to "a lidar car driving" out there.
 
A lot of your post made sense, but this is as far from making sense as I've seen from you. Driving is absolutely based on probabilities and any autonomous driving system must have a way of predicting probable future trajectories of other actors in some way. This may be part of an end-to-end big black box deep NN, or it may include some "SW1.0" doing it the old fashioned way, or more likely some of each, but if you cannot predict what other actors are going to do you are dead in the water. Realistic prediction is necessarily probabilistic. If all you do is extrapolate ballistic trajectories then you will both brake hard constantly for no reason and fail to brake when you should, because other cars, cyclists, pedestrians, etc. do not follow ballistic trajectories.

There is sense in there, really :)
They may have longer-duration object tracking baked in outside the NN recognition side of things. It seems like that would be necessary for path tracking with occlusions (e.g. is there a car blocked by the one I can see, when making a left turn across two lanes?).

For 100% safety, you need to compute the maximum acceleration and velocity of each movable object in the scene, just in case it does that (a pedestrian jumping in front of the car). However, that is not a feasible solution, nor is it how people drive now. In the general image case, a situation either requires action or it doesn't. A car moving toward the intersection from a cross street is only a known threat once its velocity and position make it so. The issue with using trends is that they only help if the trend continues. A stopped car that suddenly accelerates can be just as much a threat as a decelerating car that resumes a constant speed.

The advantage a computer has is that it can track all the potential threats and react in real time, whereas people can only look in one direction at a time and so need to classify the potential threats and non-threats to reduce the workload.

So, using the two-frame NN approach, the system can determine current collision threats, and also classify potential threats (the probability side of things) and adjust accordingly.
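To make the worst-case-versus-trend point concrete (toy numbers only, nothing from a real stack): a pure reachability check asks whether the object could possibly reach the path at maximum effort, while a two-frame velocity estimate asks whether it is currently on course to; acting on the first alone brakes for everything, and the second has to be re-evaluated every frame in case the trend changes.

```python
# Toy comparison of (a) worst-case reachability -- could this object possibly
# reach my path in time? -- and (b) a threat estimate from two frames' worth of
# position, i.e. current velocity. All numbers are invented for illustration.

def worst_case_reach(dist_to_path, v_now, a_max, t):
    """True if the object could cover dist_to_path within t at maximal effort."""
    return v_now * t + 0.5 * a_max * t * t >= dist_to_path

def two_frame_threat(p_prev, p_now, dt, dist_to_path, t):
    """Estimate velocity toward the path from two frames and extrapolate it."""
    v_toward = (p_prev - p_now) / dt          # positive = closing on our path
    return v_toward * t >= dist_to_path

dt, horizon = 0.1, 1.5                        # s between frames, look-ahead in s

# Worst case: a pedestrian 2 m away *could* sprint into the path within 1.5 s...
print(worst_case_reach(2.0, v_now=0.5, a_max=3.0, t=horizon))      # True -> brake?
# ...but the two-frame estimate says they are closing at only 0.5 m/s right now.
print(two_frame_threat(2.05, 2.0, dt, 2.0, horizon))                # False -> monitor

# Acting on the worst case alone would brake for every pedestrian near the road;
# the velocity-based estimate must be re-checked each frame in case the trend
# changes (the "stopped car that suddenly accelerates" problem above).
```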
 
Seriously!?

>"Car tried to turn left into oncoming traffic, driver had to avoid collision," wrote a customer of Waymo, the unit of Alphabet, in early March after taking a ride in one of the company's experimental self-driving taxis in suburban Phoenix. The customer was referring to the human backup driver who sits behind the wheel and is supposed to take over if a potential safety problem arises.

With Waymo Robotaxis, Customer Satisfaction Is Far From Guaranteed

As Tim Cook said, this is the mother of all AI problems. Even Krafcik admits that it will take a long time to solve all the problems. Remove your lidar tin foil hat; AVs are hard, and no, Aptiv/Lyft "L5" are still L2 systems on steroids. Waymo's guardian angels might be the closest thing to "a lidar car driving" out there.

You've moved the goal posts. Your claim was that Lidar vehicles were having "similar issues." I think the evidence suggests that the issues aren't similar to Tesla's at all. For openers, discounting the trumped-up video, Tesla doesn't have ANY vehicle that can drive itself today. Both the Phoenix and Las Vegas projects involve Lidar vehicles actually driving paying passengers around. Are there issues? Of course. Are they similar to Tesla's inability to avoid hitting stationary fire trucks, or its slamming on the brakes for imaginary tree limbs hanging over a road? Not by a long shot.
 
This is the funniest and saddest thing I've seen on these forums recently.

I'm laughing so hard I'm crying.

Nice to take a comment out of context. Tesla will be fully accountable for everything, but people like you will bear zero consequence when you're wrong. Well, you could at least cry.

It's very hard to do things, especially very challenging things that no one has ever done before. It's very easy to say things will not work. One does not even need a brain to do that.

Yup, exactly as they have been acting for the past ~3 years. When they started talking about the car driving itself, they were still selling AP1.

When Tesla was selling AP1, which used Mobileye's vision chip and Tesla's own software, it already had its own self-driving project going on in the background. That's also the reason for its eventual falling-out with Mobileye. We now know why Tesla recruited Jim Keller and Peter Bannon in 2015 to design the AI chip we see today, and why it was trying to recruit George Hotz to do the machine learning, so it could get rid of Mobileye. Tesla did not say things it had no plan to do.

This has been discussed many times before. When you train an ML algorithm, you are teaching it about a statistical distribution of probabilities. If your training set mostly contains the edge cases -- pedestrians unexpectedly jumping out in front of the car for example -- the ML algorithm will learn that it is very likely that pedestrians will jump out in front of the car, because in the training data this has a high probability. The car will then brake hard for every pedestrian walking along the sidewalk when you put it in the real world.

The really, really, really hard part about L3+ autonomy is not handling the edge cases on their own. It's handling the edge cases while also behaving reasonably in the normal cases. You need a lot of "normal" data in addition to edge cases if you are going to rely on deep learning to learn to drive, so that the ML algorithm can learn the subtle cues that differentiate normal from abnormal.

Better yet, get some lidar, more radar, better cameras, and much faster compute hardware if you're serious about L3+ autonomy.

You need to watch the Andrej Karpathy presentation that I also mentioned in an earlier post. He has clearly said the normal cases are all solved; it's the edge cases that remain the challenge. Of course, if you know how to do things better, you could always go to Tesla or any of those companies and ask for "some lidar, more radar, better cameras, and much faster compute hardware" to get things done.
 