Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

What will happen within the next 6 1/2 weeks?

Which new FSD features will be released by end of year and to whom?

  • None - on Jan 1 'later this year' will simply become end of 2020!
    Votes: 106 (55.5%)
  • One or more major features (stop lights and/or turns) to small number of EAP HW 3.0 vehicles.
    Votes: 55 (28.8%)
  • One or more major features (stop lights and/or turns) to small number of EAP HW 2.x/3.0 vehicles.
    Votes: 7 (3.7%)
  • One or more major features (stop lights and/or turns) to all HW 3.0 FSD owners!
    Votes: 8 (4.2%)
  • One or more major features (stop lights and/or turns) to all FSD owners!
    Votes: 15 (7.9%)
  • Total voters: 191
Waymo reads traffic signs; it even reads the arm of a police officer waving traffic ahead when the traffic lights have been turned off. They can even read pedestrians' body movements to anticipate where they will go.

Your lack of knowledge on Waymo makes it easy to dismiss how far they’ve actually come in truly cracking the Level 4+ nut. The simple things Tesla has talked about like roundabouts and stop signs are child’s play compared to what Waymo does.

It really is very hard to complete the full list of tasks needed for autonomous driving. It involves much more than reading lanes, handling stop signs and taking turns.
 
Your mistake is that you think driver supervision is the end goal when it is only an intermediary goal. FSD still means no driver supervision once the software is fully validated. That is still Tesla's end goal, i.e. robotaxis. Remember that Tesla is working on full autonomy. AP is only a driver assist for now, while Tesla finishes full autonomy, so the requirement for the driver to pay attention is just a temporary intermediary step. When Tesla first releases automatic city driving, the software will still be beta and unable to handle some edge cases, so the driver will need to pay attention. When Tesla has data proving that it is safe to remove the driver, it will remove the requirement for the driver to pay attention. If you are wondering why Tesla doesn't make the software good enough to remove the driver BEFORE releasing it to the public, the answer is that Tesla needs fleet data to make the software good enough. So Tesla has to release the software with driver supervision required in the beginning.

I agree with all of the above. What I disagree on is how long it will take and, as I keep saying, the implied timeline in the promise of true autonomous driving. When Tesla began offering advance purchase of FSD, Elon Musk seemed to truly believe that software ready to submit to regulators was less than a couple of years away, maybe less than a year. I believe it is ten to fifteen years away. Few people keep their car that long. If I am right, then most of the people who paid for FSD will never get to use the feature they paid for. Tesla tacitly acknowledged this when it stopped promising true autonomous driving to people who now pay in advance for FSD. Now, autonomous driving is "just a goal" and the promise is just a set of driver-assist features that will require full driver attention for an indefinite time.

How many people would have paid for FSD if Tesla had said, "Autonomous driving is our ultimate goal. We expect to reach that goal in fifteen years. Until then, if you pay for FSD you'll get some driver-assist features that will still require your full attention"?

... I'm skeptical that there won't be a significant portion of people who are looking at their phone instead of pressing the brake ...

Sadly, people are doing that now: texting on their phone, eating, applying makeup, talking on the phone, etc. I think most Tesla drivers understand that AP (and city NoA when it arrives) requires their full attention. News of the occasional rare accident will keep that at the front of people's minds.

And when the system gets so good that it hardly ever needs driver intervention, it will be saving more lives than are lost by drivers who ignore the warning and treat it as though it were level 4.

Autonomous driving is a challenge but it is not impossible. After all, Waymo already has unsupervised safe city self-driving.

Apples and oranges. The kind of self-driving Waymo is doing is very different and much easier than what Tesla is promising.

I am not sure. Look at AP now. Sure, there are some drivers who don't pay attention on AP, but it is a relatively small number. It depends on how reliable people think traffic light response is. The less reliable it is, the more they will pay attention; the more reliable it is, the less they will want to pay attention. As others have said, the real challenge comes when traffic light response reaches something like 99.9% reliability, because that's not reliable enough to actually stop paying attention, but it is reliable enough to lull people into a false sense of security.
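To see why ~99.9% is the dangerous zone, here is a quick back-of-the-envelope calculation. All the figures (20 lights a day, independent failures) are hypothetical, just to illustrate the lull effect:

```python
# Hypothetical figures: a commuter passes 20 traffic lights per day and the
# system handles each one correctly 99.9% of the time, independently.
reliability = 0.999
lights_per_day = 20

# Probability of at least one mishandled light in a day of driving.
p_fail_day = 1 - reliability ** lights_per_day

# Expected number of days between failures.
days_between_failures = 1 / p_fail_day

print(f"daily failure chance: {p_fail_day:.3f}")
print(f"days between failures: {days_between_failures:.0f}")
```

Roughly a month and a half of flawless stops between mistakes: long enough for drivers to stop watching, short enough that the mistake will eventually come.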

This is true. But as I noted above, by that point the car will already be saving far more lives than are lost to inattentive drivers. This is a hard issue to face, like the trolley problem: do you make a decision that will save ten lives but take one? I'd hate to make that choice as a trolley driver, but it is the road that autonomous cars will take. There will be a point where the system is saving many lives that would otherwise have been lost while killing a few people who would not otherwise have died.

And in 50 years, if our economy has not completely collapsed under the weight of climate change, pollution, overpopulation, and resource depletion, people will think you are a dangerous maniac if you want to drive a car yourself. We condemned great numbers of people to die prematurely when we decided to switch from horses, which go at 3 mph, to cars, which could go at 15 or 20 mph. That number increased exponentially as cars got faster. And autonomous cars will bring the number way back down, but there will always be deaths in auto accidents as long as we have automobiles.

FSD is coming and that's a good thing. But probably not in what little remains of my lifetime.
 
I think they're fundraising. Just like Cruise, Aptiv, nuTonomy, VW, Baidu, Mobileye/Intel, Bosch, Continental, Daimler, Uber, Tesla, and the four dozen other companies working on autonomous driving. Don't you find it odd that some of these companies have been working for literally decades, and we're still no closer?

So again I say, we do not know if this is a solvable problem or not.
What do you think of this video?
Are they close?
 
According to this article, @verygreen found stop sign and traffic light 3D assets. So it looks like Tesla will be adding stop sign and traffic light visualizations soon. Another good sign that Tesla is moving closer to adding traffic light and stop sign response.
Tesla adds stop sign and traffic light 3D renders in move to city driving visualization - Electrek
Software update 40.2.1 says it will show the visualizations when using TACC and warns not to rely on them for safe driving. I tested it ONCE today (won't do that again tho), and it did indeed show a red traffic light symbol when I was already a bit too far into the intersection for comfort. I was only using TACC at 12 mph, so anything faster would definitely be unsafe to test.
 
Software update 40.2.1 says it will show the visualizations when using TACC and warns not to rely on them for safe driving. I tested it ONCE today (won't do that again tho), and it did indeed show a red traffic light symbol when I was already a bit too far into the intersection for comfort. I was only using TACC at 12 mph, so anything faster would definitely be unsafe to test.

Thanks! I don't have the .40 update yet.
 
Your lack of knowledge on Waymo

I presume this was directed at me? If it was, all I can say is that I've been watching Waymo since they were a Google 20% project.

Are they close?

No. Waymo is only driving routes that they've mapped with extreme precision, and only in perfect conditions. Even with all of that advantage, they still have failures. Level 4 driving means the car can be anywhere, at any time, in any condition and only needs a human in the most extreme condition. Level 5 means all of that, but zero intervention at all ever. We are not nearing Level 4 or Level 5 anytime soon.
 
...
No. Waymo is only driving routes that they've mapped with extreme precision, and only in perfect conditions. Even with all of that advantage, they still have failures. Level 4 driving means the car can be anywhere, at any time, in any condition and only needs a human in the most extreme condition. Level 5 means all of that, but zero intervention at all ever. We are not nearing Level 4 or Level 5 anytime soon.
Level 4 definition includes restriction for limited geofenced region and other limitations.
Self-driving car - Wikipedia
It is NOT anywhere, anytime, any condition as you have stated.
 
No. Waymo is only driving routes that they've mapped with extreme precision, and only in perfect conditions. Even with all of that advantage, they still have failures. Level 4 driving means the car can be anywhere, at any time, in any condition and only needs a human in the most extreme condition.

No, it's not. Level 4 is geo-location limited, weather limited and ODD limited.
 
Level 4 definition includes restriction for limited geofenced region and other limitations.
Self-driving car - Wikipedia
It is NOT anywhere, anytime, any condition as you have stated.

No, it's not. Level 4 is geo-location limited, weather limited and ODD limited.

My reading of J3016-201806 is that the references to geographic restriction for Level 4 are about a business/service restricting operational parameters. Like I don't want my city taxi driving you from SFO to LAX, so I limit you to the Bay Area. I'm willing to be wrong here, but I literally just reviewed the document right now before typing this response. If you've read it, we can certainly discuss more.

Regardless, Waymo is not Level 4, because they restrict your routes, you have to get approval for the route, and they have remote human operators monitoring the vehicles at all times. So, sure, there's no monitor in the car like there has been for years. They simply bought enough LTE bandwidth to allow live streaming. That driver is still there, watching the whole thing, which makes it at best Level 3.
 
My reading of J3016-201806 is that the references to geographic restriction for Level 4 are about a business/service restricting operational parameters. Like I don't want my city taxi driving you from SFO to LAX, so I limit you to the Bay Area. I'm willing to be wrong here, but I literally just reviewed the document right now before typing this response. If you've read it, we can certainly discuss more.

Regardless, Waymo is not Level 4, because they restrict your routes, you have to get approval for the route, and they have remote human operators monitoring the vehicles at all times. So, sure, there's no monitor in the car like there has been for years. They simply bought enough LTE bandwidth to allow live streaming. That driver is still there, watching the whole thing, which makes it at best Level 3.

Put simply, L4 is a car that can self-drive completely, but only under certain driving conditions (i.e. restricted to certain routes or limited to certain times of day). Waymo cars can self-drive, as seen in the videos of the car driving with no driver inside, but are restricted to certain areas (geofenced). So Waymo is most definitely L4. Restricting the operating area is called geofencing and is a key criterion for L4.

Here is a useful table with the official narrative definitions from the SAE.

[Image: SAE International's table of the levels of automated driving for road vehicles]
 
That table was updated last year, BTW. Also, I hold in my hand the latest Recommended Practice that produced the new chart. You're working from old data from 2014 that was revised in June of 2018.

The fundamental definitions of the levels of autonomy have not changed.

But if you have an updated chart that supports your claims, show me.
 
The fundamental definitions of the levels of autonomy have not changed.

But if you have an updated chart that supports your claims, show me.

As I said above, I have J3016-201806. You can find it freely on the SAE's website.

But there's also SAE International Releases Updated Visual Chart for Its “Levels of Driving Automation” Standard for Self-Driving Vehicles and SAE Levels of Driving Automation - ANSI Blog

Of particular note, I think it's important to review the sections on the role of the remote agent, what happens when the ODD changes while the vehicle is in autonomous mode (summed up in Table 2), and how an ODD is defined. Again, based on reading Section 3, it seems that the purpose of a geo boundary is to limit where the vehicle is allowed to operate not for the purpose of making it reliable but rather for the purpose of limiting service areas and distances. Like a delivery vehicle that's only allowed to service a certain area (one of SAE's examples).

Based on Table 3 in Section 4, Waymo's remote operators specifically invalidate these vehicles as Level 4. They are not working as dispatch, they are working as remote operators.
 
Of particular note, I think it's important to review the sections on the role of the remote agent, what happens when the ODD changes while the vehicle is in autonomous mode (summed up in Table 2), and how an ODD is defined. Again, based on reading Section 3, it seems that the purpose of a geo boundary is to limit where the vehicle is allowed to operate not for the purpose of making it reliable but rather for the purpose of limiting service areas and distances. Like a delivery vehicle that's only allowed to service a certain area (one of SAE's examples).

I think you might be nitpicking. If Waymo chooses to only operate their ride sharing service in a specific geofenced area, how is that not what SAE describes?

Based on Table 3 in Section 4, Waymo's remote operators specifically invalidate these vehicles as Level 4. They are not working as dispatch, they are working as remote operators.

I assume you are referring to this table?

[Attached image: SAE J3016 Table 3]


If Waymo cars were not really self-driving but instead were being remote-controlled by operators at a distance, then you would be correct. But that is not what Waymo is doing. When you see the car self-driving with nobody in the driver's seat, that is the car driving, not a remote operator. The car is operating at L4. Waymo's remote operators are merely monitoring the cars, and I believe they can give instructions to the car. So it would still be L4 based on that table.
 
Based on Table 3 in Section 4, Waymo's remote operators specifically invalidate these vehicles as Level 4. They are not working as dispatch, they are working as remote operators.
”The level of driving automation system feature corresponds to the feature’s production design intent ... As such, it is incorrect to classify a level 4 design-intended ADS feature equipped on a test vehicle as level 2 simply because on-road testing requires a test driver to supervise the feature while engaged, and to intervene if necessary to maintain safe operation.” - SAE J3016
Do you really think Waymo is making remote realtime interventions on their test vehicles? That would be inane and unsafe. I'm sure they are only intervening when the system itself detects a problem or gets into a situation it can't deal with.
ODD can include location, weather, road type, time of day, etc.
 
If Waymo cars were not really self-driving but instead there were remote operators somewhere remote controlling the cars at a distance, then you would be correct.

Literally in the Verge video above, the Waymo rep describes how they have remote operators for fallback. In his own words. You can't really argue with Waymo about what Waymo says they do, can you? Nobody here is claiming those operators are driving the cars full time. I'm claiming that, by the chart you posted, they meet the Level 3 criteria and specifically not the Level 4 criteria. They are not doing "dispatch" work (also defined in the document), they are doing remote operator work. That's Level 3.

Do you really think Waymo is making remote realtime interventions on their test vehicles?

Yes. Their representative literally said as much in the Verge video posted today.

That would be inane and unsafe.

Perhaps. But considering the area they're operating in is fairly flat and likely has high quality mobile data reception, they can probably mitigate any risks reasonably well. And presumably a loss of signal means the vehicle degrades to its safest state and pulls over.

or gets into a situation it can't deal with

From what we've seen repeatedly, unprotected left turns are their biggest failure domain right now. And at a guess, that's what those people are still mainly focusing on. The video showed a lot of behaviors that exist in a Tesla when driving on surface streets, for what that's worth. Not least of which is the wheel rapidly jerking back and forth as lane lines disappear and control confidence goes out the window.

ODD can include location, weather, road type, time of day, etc.

Yep. But based on the examples given, and the path for degrading into safer failure modes, it seems like these ODD choices are based on preferences rather than technical requirements. So, like I said above, a delivery company or taxi service not operating outside of a region, or a patrol vehicle not operating at night, etc.

I say this because if we look at how Level 3 is supposed to degrade, it can pass control back to an operator when the parameters of the ODD are no longer met. But with Level 4, there's no such call-out. To me, this would mean that if ODD parameters were about being capable of operating, then a Level 4 vehicle would need to transition to an operator when, for example, it starts raining. But only Level 3's definition has that language included.

If we were to use the overly generous definitions being presented by people here, then Tesla has a Level 4 capable vehicle right now. I hope nobody would agree with that assessment. But, in fact, the vehicle degrades autonomy functions when inclement weather or poor visibility is detected, the vehicle operates autonomously within geo-fenced areas of highways and freeways as well as on and off ramps. And, they're only engaging the human while they test their beta software. Once it's not in Beta, you can totally take your hands off the wheel.

That would be a truly dangerous interpretation of those autonomy levels.
 
If we were to use the overly generous definitions being presented by people here, then Tesla has a Level 4 capable vehicle right now. I hope nobody would agree with that assessment.

I have not seen anyone argue that Tesla has L4 autonomy now.

In fact, we don't need to guess. The SAE document that you are using has a very nice logic flow diagram (Fig 9 on page 20) for determining the level of autonomy of a vehicle:

[Attached image: SAE J3016 Fig 9 flow diagram for determining the level of driving automation]


Does Tesla's Autopilot perform the entire DDT? No. Does it perform both longitudinal and lateral motion but not complete OEDR? Yes. So clearly a Tesla car is L2 right now.

Does a Waymo car perform the complete DDT and DDT fallback within a limited ODD? Yes. So Waymo cars are L4.
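For those without the document handy, the Fig 9 flow can be sketched roughly as follows. This is a simplified paraphrase of the diagram, not the SAE's exact wording, and the flag names are my own:

```python
def sae_level(lateral: bool, longitudinal: bool,
              complete_oedr: bool, automated_fallback: bool,
              unlimited_odd: bool) -> int:
    """Simplified paraphrase of the J3016 Fig 9 decision flow."""
    if not (lateral or longitudinal):
        return 0  # no sustained motion control at all
    if not (lateral and longitudinal):
        return 1  # steering OR speed control, but not both
    if not complete_oedr:
        return 2  # driver still performs object/event detection and response
    if not automated_fallback:
        return 3  # system drives, but a human is the DDT fallback
    return 5 if unlimited_odd else 4  # system is its own fallback; ODD decides

# The claims above, expressed through the flow:
print(sae_level(True, True, False, False, False))  # Tesla AP today -> 2
print(sae_level(True, True, True, True, False))    # Waymo, limited ODD -> 4
```

Plugging in the claims above: Autopilot does both axes of motion control but not complete OEDR, so it comes out Level 2; a car that performs the complete DDT plus its own fallback within a limited ODD comes out Level 4.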
 
Yes. Their representative literally said as much in the Verge video posted today.
That's not my interpretation of what the representative said.
"These folks don't joystick the car or anything like that, but they can help answer specific questions that a car might have about an ambiguous situation. And that's where human intuition and human understanding of the entire context is super important. Like that moving van, is it really staying there? Or is it about to start driving? Well, if the door's down and they're unloading a lamp out of the back, it's gonna be there for a while. That's not something we've gotten around to making the car smart enough to understand, but a human sees it in a moment and can send that signal. So it's not really a command to the car, it's just adding information."
I say this because if we look at how Level 3 is supposed to degrade, it can pass control back to an operator when the parameters of the ODD are no longer met. But with Level 4, there's no such call-out. To me, this would mean that if ODD parameters were about being capable of operating, then a Level 4 vehicle would need to transition to an operator when, for example, it starts raining. But only Level 3's definition has that language included.
If it starts raining and rain is not in the ODD then the car must automatically perform a fallback to a minimal risk condition (i.e. pull into a parking spot).
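That difference in fallback behavior between L3 and L4 can be sketched like this. It's my own toy model, not anything from the SAE document; the `ODD` fields and action strings are made up for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ODD:
    """Toy operational design domain: the conditions a feature is designed for."""
    weather: frozenset
    regions: frozenset

def within_odd(odd: ODD, weather: str, region: str) -> bool:
    # The feature may only engage/remain engaged inside its ODD.
    return weather in odd.weather and region in odd.regions

def on_odd_exit(level: int) -> str:
    # L3: the system may hand the DDT back to a receptive human fallback user.
    # L4: the system itself must achieve a minimal risk condition
    #     (e.g. pull over and park); it cannot rely on a human taking over.
    if level == 3:
        return "request human takeover"
    if level == 4:
        return "achieve minimal risk condition (pull over)"
    raise ValueError("only L3/L4 modeled here")

odd = ODD(weather=frozenset({"clear", "cloudy"}), regions=frozenset({"chandler"}))
if not within_odd(odd, weather="rain", region="chandler"):
    print(on_odd_exit(4))  # rain is outside this toy ODD
```

The point of the sketch: in both cases the ODD limits where the feature operates, but only the L3 path depends on a human being available when the ODD is exited.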
If we were to use the overly generous definitions being presented by people here, then Tesla has a Level 4 capable vehicle right now. I hope nobody would agree with that assessment. But, in fact, the vehicle degrades autonomy functions when inclement weather or poor visibility is detected, the vehicle operates autonomously within geo-fenced areas of highways and freeways as well as on and off ramps. And, they're only engaging the human while they test their beta software. Once it's not in Beta, you can totally take your hands off the wheel.
This has been an ongoing argument. I think that city NoA will be a beta Level 5 system because that is clearly the design intent. That doesn't mean it can be used without a safety driver! Just like every other self-driving prototype (other than Waymo's) needs a safety driver.
 