Welcome to Tesla Motors Club

Tesla’s next major milestone: no driver input required on the highway

Elon said at the Tesla Shareholder Meeting that he was personally testing an Enhanced Autopilot build that drove autonomously from highway onramp to offramp with no input from him. He said the feature set could be released in a few months (i.e. 2069 A.D.). This is still Level 2 autonomy since it requires driver monitoring and occasional intervention when the system is doing something wrong. But requiring no driver input (including no interventions) on most highway trips would be a huge milestone in autonomy development. The SAE levels of automation don’t capture this axis of improvement.

The feature set for full autonomy (Level 4) on the highway would be complete, but not yet at a level of safety or reliability where the human can stop babysitting the system. From there, it’s just a matter of time until the safety and reliability improve to the point that fully autonomous driving on the highway is better than the average human driver. Progress on autonomous driving seems to be happening at a rapid exponential rate, at least on some metrics. For example, Cruise Automation reduced its disengagement rate by 14x in one year. Compare that to Moore’s law, under which the number of transistors on an integrated circuit has historically doubled every 1.5 to 2 years.
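To put those two rates on the same footing, here is a purely illustrative back-of-the-envelope calculation. The 14x-per-year figure for Cruise and the 1.5 to 2 year Moore's law doubling period come from the paragraph above; everything else is just arithmetic:

```python
import math

# Cruise: disengagement rate improved 14x in one year.
# Implied halving time of the disengagement rate, in years:
cruise_halving_years = 1 / math.log2(14)          # ~0.26 years, i.e. ~3 months

# Moore's law: transistor count doubles every 1.5-2 years (midpoint used here).
moore_doubling_years = 1.75

# How many times faster the 14x/year trend compounds than Moore's law:
speedup = moore_doubling_years / cruise_halving_years

print(f"Cruise halving time: {cruise_halving_years:.2f} years")  # 0.26
print(f"Relative pace vs Moore's law: {speedup:.1f}x")           # 6.7x
```

In other words, if that one data point were a sustained trend (a big if), the disengagement rate would halve roughly every three months, several times faster than transistor counts ever doubled.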

So, releasing an Enhanced Autopilot build that can do highway onramp to offramp with no driver input (most of the time) starts the clock on fully autonomous highway driving. It also starts the clock on Level 2 driving with no driver input (most of the time) in other settings, like suburban streets, city streets, and rural roads. Highways are the easiest setting to crack. But once highways are cracked, it seems like it’s only a series of incremental steps beyond that to do rural roads, which are sparsely trafficked and seldom have pedestrians. And from rural roads, it seems like only another series of incremental steps to suburban streets, like those that Waymo is driving on in the Phoenix metro area. Those streets tend to be wide and simple, vehicles and pedestrians are relatively sparse, and speed limits are low.

That’s why the Enhanced Autopilot build that Tesla is testing seems like a really important milestone to me. I feel it is perhaps unknowable (at least to me, a layperson), or at least highly uncertain, which of two scenarios is more likely: 1) Elon’s hyperaggressive view that superhuman full autonomy in all settings and conditions will be commercially ready by the end of 2019, or 2) full autonomy won’t materialize anytime within the next 10 years. An update that enables Teslas to drive with (mostly) no driver input on highways would tilt things, in my mind, sharply toward Scenario 1. Even if Elon is being characteristically premature in his timeline, I could buy the idea that he’s only a year or so off. I could even see it as possible that he’s exactly right.

The same is true, by the way, if Waymo or Cruise can launch a genuine, commercial, public ride-hailing service in a limited area. One caveat there is how much the service relies on remote human operators.

With Enhanced Autopilot, there appears to be a virtuous cycle: as soon as the highway babysitter mode can be deployed at a level of development where it improves safety, there will be a deluge of data from production cars that allows the babysitter system to be improved more rapidly than before. (This is on the assumption that the factor limiting improvement is collecting and labelling data.) The improvement in smoothness, safety, and reliability will embolden more Tesla owners to enable it and contribute more data. Presumably, some of that data will generalize to other driving settings. If a babysitter mode can be activated for other settings, the same virtuous cycle will begin there too.

Based on this reasoning, an update enabling what Elon described is the event I’m most eagerly anticipating for Tesla, beyond even profitability or a smooth Model 3 ramp. It would be a historic achievement of computer science. I think it would also be one of two possible indicators that full autonomy is for real and coming soon (a limited autonomous ride-hailing service being the other). If this really can happen in 2018, I’ll be so insanely excited.
 
...safety...

I think the highway onramp-to-offramp ability is just a matter of synchronizing routing with navigation.

Current routing code does not solve the issue of safety: how not to fatally slam under a tractor trailer in Florida or into a concrete median in Mountain View.

The current system relies on the human driver's skill to avoid obstacles and to slow down for a sharp 25 MPH exit ramp.

Self-routing capability without solving the safety issue is a nice convenience feature, but I wouldn't consider it a milestone toward Level 4.

I think we might want to see how Tesla Vision will solve the problem of collisions and compare that to Waymo's LIDAR system.
 
As I see it, the hard part is the perception software: extracting as much info as possible from the data that come from the sensors. A human looks at a scene and quickly understands it; plus, we are trained to spot potential dangers. The car identifies very few things, and everything else does not exist for the car. Tesla right now tries to find the lane you are in and the moving cars, and that's about it. Yes, we know they are already doing quite a bit more, but it seems it's only in shadow mode for now.
Navigation is easy in comparison; the struggle there is to do it with as little compute as possible. Perception is the really hard part, with lots of room to improve.
On/off ramp depends on how it is done. If they use high-resolution maps, that's a crutch and I wouldn't get excited about it. If they detect the road, then the lanes, and reconcile the data, then yeah, that would be a big step forward, as it would lead to huge improvements without relying on external data (data that can at times become outdated). We don't get fooled by lane markings because we see the road and the flow, but the car doesn't; so if it starts to find the road, it really becomes a lot smarter. Detecting the road is not easy at all, though.
 
Since we have little data, this is purely my opinion, nothing else. My read is that even with the supposedly coming feature set (L2, as you say), this will be VERY far from a true L4 that requires NO intervention. The devil is in the millions of details. From that standpoint, it seems Waymo (judging from the ~60k Chryslers they just bought) is way more advanced than anybody. Years ahead. And local traffic is way more complex than highway driving. I also remain convinced that the time Tesla will spend implementing solutions to “see” through cameras vs. getting data from LIDAR will be a net competitive loss. Can cameras do it? Maybe, but what’s the engineering cost associated with compensating for this rather than investing in actual self-driving behavior? As a geek, I enjoy the discovery process. As an investor, I’m worried. And as usual: “I’m not holding my breath, I’d rather hold my wheel...”
 
Since we have little data, this is purely my opinion, nothing else. My read is that even with the supposedly coming feature set (L2, as you say), this will be VERY far from a true L4 that requires NO intervention. The devil is in the millions of details. From that standpoint, it seems Waymo (judging from the ~60k Chryslers they just bought) is way more advanced than anybody. Years ahead. And local traffic is way more complex than highway driving. I also remain convinced that the time Tesla will spend implementing solutions to “see” through cameras vs. getting data from LIDAR will be a net competitive loss. Can cameras do it? Maybe, but what’s the engineering cost associated with compensating for this rather than investing in actual self-driving behavior? As a geek, I enjoy the discovery process. As an investor, I’m worried. And as usual: “I’m not holding my breath, I’d rather hold my wheel...”

You see in 3D with your eyes, so yes, cameras can do it. We use stereo vision, but you can also use structure from motion, as well as learn the size of things, to derive distance.
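For what it's worth, the stereo-vision point boils down to simple triangulation. Here is a minimal sketch of the standard pinhole stereo model; the focal length, baseline, and disparity numbers are made up for illustration and are not any particular car's specs:

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from binocular disparity: Z = f * B / d (pinhole stereo model).

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras in meters
    disparity_px -- horizontal pixel shift of the same point between the two images
    """
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 1000 px focal length, 12 cm baseline.
# An object with 8 px of disparity sits at 1000 * 0.12 / 8 = 15 m.
print(stereo_depth_m(1000, 0.12, 8))   # 15.0
```

The model also shows the weakness: depth is inversely proportional to disparity, so a one-pixel matching error hurts far more at long range, which is one reason structure from motion and learned object sizes get used alongside plain stereo.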

LIDAR is low resolution (even the ones they call high resolution), is very costly, and has issues with certain weather conditions. It can't replace a camera by any means; it just does 3D mapping, while the cameras do lots of things.
Radar is super low resolution today; high-res radars (as they are called) will somewhat catch up with LIDAR, and radar doesn't have issues with rain/fog/snow while being much, much cheaper.
At the end of the day the camera is the highest resolution sensor, and it's gonna be the one that needs to be the smartest, by far.
People tend to think that LIDAR is very capable, but it's low resolution; it's not even easy to identify a cyclist, and just that is a lot of work.

In the short term LIDAR is not viable from a cost perspective, except for taxis. Tesla already charges way too much for FSD today; add LIDAR and it's gonna be even more. Tesla can and might include FSD in the base price in a few years for a key competitive advantage (they will have their own silicon soon, too). LIDAR is actually not a good thing, because nobody has solved the weather issues, so it's a suboptimal solution; it was just the only thing one could use to make dumb autonomous robots work. Dumb cameras + LIDAR will not result in a very safe car, so everybody will need much smarter cameras, and 3D mapping is just a fraction of what has to be done. No LIDAR means a bit more work and a bit more compute, but it's way cheaper and the weather does not kill you. The weather issues might be solvable, but only in time and at a cost. In the long run radar is better positioned, as the issue there is just resolution.

 
Since we have little data, this is purely my opinion, nothing else

Same here.

From there, it’s just a matter of time
But once highways are cracked, it seems like it’s only a series of incremental steps
And from rural roads, it seems like only another series of incremental steps to suburban streets,

That's a lot of incremental steps... and they're not actually incremental; they're massive leaps and steep challenges. In highway driving, you don't have stop signs, yield signs, stop lights, bicycles, pedestrians, and a hundred other potential hazards you have in rural and city driving.

And even for highway driving, I'd love to see how any near-future version of FSD handles a crowded highway toll booth. Will the car know, down to the inch, which lanes are toll-tag lanes, without inching forward slowly and using the ultrasonics to avoid the lane dividers, like Summon does now? And at many interchanges around here, the lanes change frequently based on traffic or lane closures, so you can't really rely on GPS or navigation data to tell the car which lane is open and which lane takes a toll tag. So I guess "on-ramp to off-ramp" means on no-toll roads. The disclaimers begin.

2) full autonomy won’t materialize anytime within the next 10 years.

Totally this.

I think it would also be one of two possible indicators that full autonomy is for real and coming soon (a limited autonomous ride-hailing service being the other).

Don't hold your breath.

If this really can happen in 2018, I’ll be so insanely excited.

Not gonna happen. Not in 2018 or 2019... or, I'd say, not even 2020. On-ramp to off-ramp L2, perhaps. Anything beyond that, nope.

Remember that LA to NY fully autonomous road trip Elon promised in 2017? We won't see that for at least a few more years, and it's still going to be just a more advanced L2. And they still haven't moved forward with autonomous supercharging other than in the lab with the snake. So even then, the fully autonomous cross country trip will be "except for supercharging"... and then all the disclaimers continue. No different from the fully autonomous video demo they released, which was later proven to be several different takes just spliced together to give the appearance of autonomous surface street driving.
 
Why I’m skeptical of definitive pronouncements about the timeline of autonomy deployment when those pronouncements are based on a subjective impression of the difficulty of the task and the rate of progress: the illusion of explanatory depth. We tend to understand how things work much more poorly than we feel we do. And even experts don’t know, or at least don’t agree on, how deep learning works. So whether full autonomy will be deployed in 2 years or 12 years is not something I trust myself or other laypeople to have the capacity to know. Even with experts, if their opinion is based on gut feeling, I’m skeptical. I’m more persuaded by attempts to extrapolate past trends in data forward, but even this is fraught with irreducible uncertainty.
 
No different from the fully autonomous video demo they released, which was later proven to be several different takes just spliced together to give the appearance of autonomous surface street driving.

At what time codes in the video are different takes edited together? It looks like one long take that probably took multiple attempts. Where is the proof?

 
Why I’m skeptical of definitive pronouncements about the timeline of autonomy deployment when those pronouncements are based on a subjective impression of the difficulty of the task and the rate of progress: the illusion of explanatory depth. We tend to understand how things work much more poorly than we feel we do. And even experts don’t know, or at least don’t agree on, how deep learning works. So whether full autonomy will be deployed in 2 years or 12 years is not something I trust myself or other laypeople to have the capacity to know. Even with experts, if their opinion is based on gut feeling, I’m skeptical. I’m more persuaded by attempts to extrapolate past trends in data forward, but even this is fraught with irreducible uncertainty.

I'm confused. This statement seems to contradict your entire first post, which is largely based on comments/promises that Elon Musk has made about the progress of AD based solely on his 'gut'.

You're skeptical of any pronouncements based on a subjective impression (everything EM says/promises), but you think autonomous driving is just a few easy incremental steps away from where we are now, and might happen in 2018?
 
There was a long thread that analyzed the video "Zapruder film"-style. I'll see if I can find it. I'm pretty sure that eventually Tesla admitted it was stitched together.

If I remember correctly, the debate was about whether or not the three "computer vision" renderings on the right side of the video frame were created in real-time or if the NN vision "overlays" were generated and added later to make it seem Tesla already had vision software that could tag and react to a live environment (which it seems they did not when this video was made).

I'm not sure how they could have "spliced" a video like this together to make it look like a single run.
 
Tesla EAP is still in beta. It's nice how they are rolling out features one at a time.

FSD is not even in beta yet. I hope it will be equally nice to see its features roll out as well.

If it were up to me... I'd stick with beta for the rest of my life. I don't need FSD... just its beta features.
 
I hate to be a downer but isn't this just "we finally got AP working almost reliably"?

People really want level 3, hands off. And wasn't the original goal to be launching FSD about now?

It just seems like trying to hype up what little progress is being made.
 
You see in 3D with your eyes, so yes, cameras can do it. We use stereo vision, but you can also use structure from motion, as well as learn the size of things, to derive distance.

LIDAR is low resolution (even the ones they call high resolution), is very costly, and has issues with certain weather conditions. It can't replace a camera by any means; it just does 3D mapping, while the cameras do lots of things.
Radar is super low resolution today; high-res radars (as they are called) will somewhat catch up with LIDAR, and radar doesn't have issues with rain/fog/snow while being much, much cheaper.
At the end of the day the camera is the highest resolution sensor, and it's gonna be the one that needs to be the smartest, by far.
People tend to think that LIDAR is very capable, but it's low resolution; it's not even easy to identify a cyclist, and just that is a lot of work.

I know all of this, and you kinda make my point. Waymo doesn’t “just” have LIDAR; it ALSO has cameras and other sensors. The point is that the more you can “cheaply” (in terms of algorithms) know about your environment and the dynamics far around you, and overlay those sources of data, the better you can focus on actually building an FSD algorithm that works. As I said, cameras might be possible, but at what cost? Humans do more than just watch; that was the point of the Twitter thread I pasted. All kinds of very smart inference take place, sophisticated assumptions are made by our brain, etc. We have past experiences where something exceptional is now wired into our brain as fear, and we watch out for that situation (try getting this with NN machine learning... what’s exceptional is, by definition, part of the noise). Can you get there with kinda shitty cameras? Theoretically, I guess; practically, it depends on whether you want to win in a highly competitive market or just win an argument. Just watch those recordings from the various Tesla cameras (see the evergreen thread) and tell me you would feel at ease driving a car from within a bunker with just those video feeds! It is a half-blind car! But yeah, I’m sure half-blind people have driven on roads in the past...

Then, to your point of LIDAR being expensive: yes, so what? Waymo is pretty much ready, and Tesla just got automatic wipers months ago. I bet a number of Tesla owners (maybe not all, but, say, 80%) who bought EAP+FSD (the two combined are NOT cheap, and who cares about the mythical (...) EAP if you want FSD) would have preferred to pay $5k or $10k more and get Waymo-like FSD, today. LIDAR will only get cheaper with mass production.

It’s an FSD war out there, and declining to win it because the proper tools would cost too much means they’d rather not try than license something that works.

My bet? A delayed launch of a first FSD-branded feature after summer (delayed...), with lots of warnings from Tesla that this is very alpha and “you must hold your wheel and pray at all times”; then a few accidents by people who stupidly believed that FSD meant FSD (crazy media!); and... tadam!... an announcement in Q1 next year from Elon Musk himself saying he changed his mind, he always knew LIDAR was needed but it was too expensive, but not anymore, and here is... AP version 4! Oh, and if you paid for AP 2.x, here is a $10k token of appreciation toward your next Tesla.

That would piss me off as an owner, reassure me as an engineer and get me excited as an investor.
 
I know all of this, and you kinda make my point. Waymo doesn’t “just” have LIDAR; it ALSO has cameras and other sensors. The point is that the more you can “cheaply” (in terms of algorithms) know about your environment and the dynamics far around you, and overlay those sources of data, the better you can focus on actually building an FSD algorithm that works. As I said, cameras might be possible, but at what cost? Humans do more than just watch; that was the point of the Twitter thread I pasted. All kinds of very smart inference take place, sophisticated assumptions are made by our brain, etc. We have past experiences where something exceptional is now wired into our brain as fear, and we watch out for that situation (try getting this with NN machine learning... what’s exceptional is, by definition, part of the noise). Can you get there with kinda shitty cameras? Theoretically, I guess; practically, it depends on whether you want to win in a highly competitive market or just win an argument. Just watch those recordings from the various Tesla cameras (see the evergreen thread) and tell me you would feel at ease driving a car from within a bunker with just those video feeds! It is a half-blind car! But yeah, I’m sure half-blind people have driven on roads in the past...

Then, to your point of LIDAR being expensive: yes, so what? Waymo is pretty much ready, and Tesla just got automatic wipers months ago. I bet a number of Tesla owners (maybe not all, but, say, 80%) who bought EAP+FSD (the two combined are NOT cheap, and who cares about the mythical (...) EAP if you want FSD) would have preferred to pay $5k or $10k more and get Waymo-like FSD, today. LIDAR will only get cheaper with mass production.

It’s an FSD war out there, and declining to win it because the proper tools would cost too much means they’d rather not try than license something that works.

My bet? A delayed launch of a first FSD-branded feature after summer (delayed...), with lots of warnings from Tesla that this is very alpha and “you must hold your wheel and pray at all times”; then a few accidents by people who stupidly believed that FSD meant FSD (crazy media!); and... tadam!... an announcement in Q1 next year from Elon Musk himself saying he changed his mind, he always knew LIDAR was needed but it was too expensive, but not anymore, and here is... AP version 4! Oh, and if you paid for AP 2.x, here is a $10k token of appreciation toward your next Tesla.

That would piss me off as an owner, reassure me as an engineer and get me excited as an investor.

I think you need to take a huge step back and try to learn more while remaining neutral.

LIDAR gives you distance and reflectivity, and you get a somewhat vague 3D map. You know that something 3D is there, and you get a vague shape.
All the tough things are done by the camera, even if you have LIDAR.
3D mapping and using "AI" to identify objects, interpret certain situations, and so on are very, very different things.

If you look at cost estimates for Google's cars, many analysts will say $200k. You can't sell a Model 3 at $200k; that doesn't work.
How good Waymo is, we don't actually know. You can fake it with maps and LIDAR as a crutch, but that doesn't mean they are all that advanced if the camera is dumb. Of course, Google started working on this a long time ago and everybody else is trying to catch up. What Tesla has is the fleet, which is growing and helps with data.
And again, LIDAR is a suboptimal solution, as it can't handle all weather, so it is NOT the right tool for FSD, not today and not anytime soon, since those issues are not solved. That's before factoring in cost; cost-wise it is simply mad to use LIDAR today, and no customer would pay the price. Don't get fooled by marketing.

3D mapping can be done with LIDAR, cameras (multiple ways with cameras), and radar. LIDAR is very costly and has issues in unfavorable weather. Radar sees through rain and is reasonably priced, but resolution is low; higher-res radars that are not far behind LIDAR are showing up now, though. The camera is the only sensor that can do everything on its own. You can have FSD with just cameras and add other sensors for redundancy. I usually avoid making this point, but you can even argue that due to its much, much higher resolution, the camera is better for 3D mapping.

This is a guy sitting on a chair and waving at the LIDAR: a single frame from a Velodyne VLP-16, and this is at short range; the greater the distance, the fewer details you get. https://cdn-images-1.medium.com/max/800/1*VFFQSGRC18kV5VGKi_FO5A.png
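To put a rough number on the resolution gap that VLP-16 frame illustrates: Velodyne's published spec for the VLP-16 is on the order of 300,000 points per second. Here is a sketch comparing one 10 Hz sweep against a single ~1.2 MP camera frame; the camera resolution is an assumption for illustration, not a quoted Tesla spec:

```python
# VLP-16 spec sheet: ~300,000 points/s (single return), 16 laser channels.
points_per_second = 300_000
sweep_hz = 10                      # a common rotation rate for this unit
points_per_sweep = points_per_second // sweep_hz   # 3D points per full sweep

# Assumed ~1.2 MP camera frame for comparison (illustrative only).
camera_pixels = 1280 * 960         # 1,228,800 pixels per frame

print(points_per_sweep)                    # 30000
print(camera_pixels / points_per_sweep)    # 40.96
```

So under these assumptions a single camera frame carries roughly 40x more samples than a full LIDAR sweep, which is the "low resolution" point made above, though each LIDAR point comes with a direct range measurement that a camera pixel does not.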
 
I think you need to take a huge step back and try to learn more while remaining neutral.

LIDAR gives you distance and reflectivity, and you get a somewhat vague 3D map. You know that something 3D is there, and you get a vague shape.
All the tough things are done by the camera, even if you have LIDAR.
3D mapping and using "AI" to identify objects, interpret certain situations, and so on are very, very different things.

If you look at cost estimates for Google's cars, many analysts will say $200k.
Yeah, I took a big step back, thanks for the advice. Waymo's system is estimated at $8,500 or less, and Velodyne's latest VLS-128 is at $12,000 or less in volume. Not sure exaggerating prices by 2,500% makes you look neutral...