FSD email from Tesla (April 5, 2019)

He DOES say it NEVER does 100% of the dynamic driving task. Because the human is required to MONITOR THE DRIVING ENVIRONMENT AT ALL TIMES.

Which is part of the dynamic driving task
I don't have a copy of the SAE level definition document (you have to pay for it). Can you post where it says that monitoring the driving environment is part of the dynamic driving task?

That depends on what info is available to the observer. It'd certainly be POSSIBLE to tell if you had read any system specs or documents, or potentially if you could see any messages from the vehicle when engaging the system or during the drive.

If your outside observer is just "random guy standing on the sidewalk as the car goes past" it's possible that guy couldn't tell. But I'm not sure why you'd care. CA sure doesn't.

CA regs don't mention, or care about, the answer to your question in any way. Again something you'd be aware of if you bother to read them.


They care about the actual system, not "how it might look to a random other person".
The vehicle has an interface just like the NoA interface. How would the DMV be able to tell the difference then?

Anyway, I've emailed Miguel Acosta, the chief of the Autonomous Vehicle Branch, to get us an answer.
 
I don't have a copy of the SAE level definition document (you have to pay for it). Can you post where it says that monitoring the driving environment is part of the dynamic driving task?

Wayback Machine

that's the SAE doc. Free.

SAE specs said:
Dynamic driving task includes the operational (steering, braking, accelerating, monitoring the vehicle and roadway) and tactical (responding to events, determining when to change lanes, turn, use signals, etc.) aspects of the driving task, but not the strategic (determining destinations and waypoints) aspect of the driving task.

I added the bold for the part we are talking about ("monitoring the vehicle and roadway") - though I've already told you all of this in previous posts anyway.

That item is the fundamental difference between levels 2 and 3.

For both 2 and 3 the vehicle can handle the steering/acceleration/braking/etc... but it's only when the monitoring of the driving environment can be entirely done by the system, no human needed, that it becomes a level 3 (or higher) system.


And under CA regs, only L3 or higher systems require the additional regulation/permits.
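
To put that distinction in concrete terms, here's a toy Python sketch - the names and the decision order are my own framing of the discussion above, not official SAE J3016 text:

```python
# Toy model of the SAE level boundary discussed above. The structure is
# my own illustrative framing, not an official SAE J3016 definition.
from dataclasses import dataclass

@dataclass
class DrivingSystem:
    controls_steering_and_speed: bool  # sustained lateral + longitudinal control
    monitors_environment: bool         # system does ALL the object/event detection
    handles_fallback: bool             # system copes on its own at its limits

def sae_level(system: DrivingSystem) -> int:
    if not system.controls_steering_and_speed:
        return 1  # single-axis driver aids (or 0); ignored in this sketch
    if not system.monitors_environment:
        return 2  # human monitors -> human still owns part of the dynamic driving task
    if not system.handles_fallback:
        return 3  # system monitors; human is only the fallback on request
    return 4      # 4 or 5, depending on the operational design domain

# Autopilot/NoA as described in this thread: the car steers and brakes,
# but the human monitors the driving environment at all times.
print(sae_level(DrivingSystem(True, False, False)))  # -> 2
# Only when monitoring moves entirely to the system does it cross to 3+:
print(sae_level(DrivingSystem(True, True, False)))   # -> 3
```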
 
Again - Tesla's current (and end-of-this-year) systems continue to require the driver to monitor the driving environment to deal with things the car cannot.

Which means the human is always doing/responsible for part of the dynamic driving task. Which means the car is not, ever, doing/responsible for 100% of that task.

It'll be interesting to see how this plays out. There are some pretty obvious pitfalls here, so bad things are going to happen. So it'll be interesting to see how regulatory agencies and even the SAE respond, whether by changing their definitions of level 2 or by modifying the regulations. As I've said elsewhere, even if the level 2 cars are safer than human drivers, people will not be happy when they are involved in accidents that kill or maim people other than the driver. From a practical standpoint, though not a logical one, the "real" requirement is that the vehicle be "much, much, much safer" than human drivers (and there's more to it even than that).

The danger is that this monitoring of the driving environment does not need to happen consistently (or even at all) for a highly capable vehicle to complete a dangerously high percentage of daily trips autonomously. The higher the percentage completed without monitoring of the driving environment, the more dangerous it becomes, obviously (until the percentage completed in arbitrarily complex environments is extremely close to 100%, anyway).

To me it seems like a mistake to leave monitoring of the driving environment in the hands of the human for a high-capability level 2 system. As the level 2 system gets closer and closer to full automation, it gets more and more dangerous, which could not possibly have been the intent of the standards organization. It's almost like the standards organization expected manufacturers to just jump to level 3 once they were "close" (but in reality, what stops that from happening are the reporting requirements and additional liability). I think the standards organization and regulators are going to have a tough time calibrating this and refining the definitions so as to maintain adequate public safety. I don't have a magic proposal for changing the definitions or anything. It just seems like we are on a quite unsafe trajectory.
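
To illustrate the shape of the problem, here is a deliberately crude toy model (a sketch of my own - the functional form and the constants are invented, not taken from any study) of what happens if the driver's chance of missing a failure grows faster than the failure rate shrinks:

```python
# Toy "automation complacency" model - purely illustrative. ASSUMPTION:
# the probability that a supervising driver MISSES a failure rises
# super-linearly as failures get rarer. The form and the 10,000-mile
# "complacency scale" below are invented for this sketch.

M = 10_000.0  # invented complacency scale, in miles

def unmitigated_per_mile(miles_between_failures: float) -> float:
    m = miles_between_failures
    p_miss = m**2 / (M**2 + m**2)  # vigilance decays as failures get rarer
    return (1.0 / m) * p_miss      # failures per mile that nobody catches

for m in (9.2, 100.0, 1_000.0, 10_000.0, 100_000.0, 1_000_000.0):
    print(f"1 failure per {m:>11,.1f} mi -> "
          f"{unmitigated_per_mile(m):.1e} unmitigated failures/mi")
```

Under those made-up assumptions the uncaught-failure rate rises as the system improves, peaks around the complacency scale (here, one failure every 10,000 miles), and only then falls again - which is exactly the "maximum danger point" shape I'm worried about.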

It's true that the liability will still rest with the human driver in these awful situations. But I think it's likely to lead to people who wouldn't ordinarily be criminals ending up being brought up on manslaughter charges.
 
Those are all valid concerns, but what little data we have so far suggests that Tesla drivers at least do not over-trust the system on the whole.

Tesla MIT study concludes that drivers maintain vigilance when using Autopilot

“The two main results of this work are that (1) drivers elect to use Autopilot for a significant percent of their driven miles and (2) drivers do not appear to over-trust the system to a degree that results in significant functional vigilance degradation in their supervisory role of system operation,” the MIT scientists concluded.
 
Those are all valid concerns, but what little data we have so far suggests that Tesla drivers at least do not over-trust the system on the whole.

Tesla MIT study concludes that drivers maintain vigilance when using Autopilot

I'm not surprised they don't overtrust the system since it's AP HW 1 & 2, and they only go 9 miles between tricky disengagements on average.

It sounds like you understand that this study really isn't that relevant to this conversation...but for further detail:

There is VERY little data in this study, BTW, and I don't think it's particularly relevant to the obvious concern: What happens when the system becomes highly reliable and highly capable? THAT'S the key, obvious problem with these systems, which I've been harping on above and which will become the concern of regulators. That is the maximum danger point (which I guess is counter-intuitive to some people?).

Anyway, I would have preferred you link directly to the PDF rather than via the sickening Teslarati site! I followed them on Twitter for a bit then I had to stop due to the garbage headlines and fawning Tesla coverage. ;) Click-bait regurgitation "journalism" makes my blood boil. ;)

For example:

Teslarati said:
The data used in the study was generated from the over 1 billion miles driven by Tesla owners since its [Autopilot's] activation in 2015, about 35% of which were determined to be assisted by Autopilot

A casual reader would conclude 350 million AP miles for the study, amirite? Actually, this statement is just wrong. It is disgraceful "journalism". Some would call it fake news. At best, it is misleading. There is no way to determine from that statement that the dataset is 112,427 AP miles.

The paper (in the introduction - perhaps that's as far as Teslarati got) actually talks about "1 billion miles driven on Autopilot" since 2015...and mentions their dataset contains 35% Autopilot miles out of the total number of miles in the dataset...
Specifically:

The MIT Study said:
...the Autopilot dataset includes 323,384 total miles and 112,427 miles under Autopilot control. Of the 21 vehicles in the dataset, 16 are HW1 vehicles and 5 are HW2 vehicles.
The Autopilot dataset contains a total of 26,638 epochs of Autopilot utilization.

So as you say, it is VERY little data. From 21 vehicles (HW1 & 2, which is arguably less capable, so likely safer), with possible selection bias - since these cars also had cameras installed in them specifically for this study. (Do these drivers participating in a study, affiliated loosely with MIT, really represent the average Tesla driver of the near future?) The authors of the study SPECIFICALLY understand this, and they are very clear it is a limited study. It's a study!
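
As a quick sanity check on where that misleading "35%" comes from (using only the numbers quoted above):

```python
# All inputs are quoted from the study excerpt above.
ap_miles = 112_427             # miles under Autopilot control in the dataset
total_dataset_miles = 323_384  # total miles in the dataset

print(ap_miles / total_dataset_miles)  # ~0.348, i.e. the "about 35%"

# What the Teslarati phrasing invites a casual reader to compute instead:
print(0.35 * 1_000_000_000)  # 350 million "AP miles" - off by a factor of ~3,000
```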

Some further excerpts I found to be mostly self-evident, but still worth posting here (emphasis in bold added by me):

The MIT Study said:
…these findings (1) cannot be directly used to infer safety as a much larger dataset would be required for crash-based statistical analysis of risk, (2) may not be generalizable to a population of drivers nor Autopilot versions outside our dataset, (3) do not include challenging scenarios that did not lead to Autopilot disengagement, (4) are based on human-annotation of critical signals, and (5) do not imply that driver attention management systems are not potentially highly beneficial additions to the functional vigilance framework for the purpose of encouraging the driver to remain appropriately attentive to the road…

…Research in the scientific literature has shown that highly reliable automation systems can lead to a state of “automation complacency” in which the human operator becomes satisfied that the automation is competent and is controlling the vehicle satisfactorily. And under such a circumstance, the human operator’s belief about system competence may lead them to become complacent about their own supervisory responsibilities and may, in fact, lead them to believe that their supervision of the system or environment is not necessary….The corollary to increased complacency with highly reliable automation systems is that decreases in automation reliability should reduce automation complacency, that is, increase the detection rate of automation failures….

…Wickens & Dixon hypothesized that when the reliability level of an automated system falls below some limit (which they suggested lies at approximately 70% with a standard error of 14%) most human operators would no longer be inclined to rely on it. However, they reported that some humans do continue to rely on such automated systems. Further, May [23] also found that participants continued to show complacency effects even at low automation reliability. This type of research has led to the recognition that additional factors like first failure, the temporal sequence of failures, and the time between failures may all be important in addition to the basic rate of failure….

….We filtered out a set of epochs that were difficult to annotate accurately. This set consisted of disengagements … [when] the sun was below the horizon computed based on the location of the vehicles and the current date. [So all miles are daytime miles]

Normalizing to the number of Autopilot miles driven during the day in our dataset, it is possible to determine the rate of tricky disengagements. This rate is, on average, one tricky disengagement every 9.2 miles of Autopilot driving. Recall that, in the research literature (see§II-A), rates of automation anomalies that are studied in the lab or simulator are often artificially increased in order to obtain more data faster [19] such as “1 anomaly every 3.5 minutes” or “1 anomaly every 30 minutes.” This contrasts with rates of “real systems in the world” where anomalies and failures can occur at much lower rates (once every 2 weeks, or even much more rare than that). The rate of disengagement observed thus far in our study suggests that the current Autopilot system is still in an early state, where it still has imperfections and this level of reliability plays a role in determining trust and human operator levels of functional vigilance...

...We hypothesize two explanations for the results as detailed below: (1) exploration and (2) imperfection. The latter may very well be the critical contributor to the observed behavior. Drivers in our dataset were addressing tricky situations at the rate of 1 every 9.2 miles. This rate led to a level of functional vigilance in which drivers were anticipating when and where a tricky situation would arise or a disengagement was necessary 90.6% of the time…..

…In other words, perfect may be the enemy of good when the human factor is considered. A successful AI-assisted system may not be one that is 99.99...% perfect but one that is far from perfect and effectively communicates its imperfections….

...It is also recognized that we are talking about behavior observed in this substantive but still limited naturalistic sample. This does not ignore the likelihood that there are some individuals in the population as a whole who may over-trust a technology or otherwise become complacent about monitoring system behavior no matter the functional design characteristics of the system. The minority of drivers who use the system incorrectly may be large enough to significantly offset the functional vigilance characteristics of the majority of the drivers when considered statistically at the fleet level.
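
One aside on the disengagement-rate comparison in those excerpts: putting "1 anomaly every 3.5 minutes" and "1 every 30 minutes" on the same per-mile footing as "1 per 9.2 miles" requires assuming a speed. At an assumed 50 mph (my assumption, not the study's):

```python
# Converting the quoted lab anomaly rates to per-mile terms.
# ASSUMPTION (mine, not the study's): 50 mph average speed while engaged.
speed_mph = 50.0

def anomalies_per_mile(minutes_per_anomaly: float) -> float:
    miles_per_anomaly = speed_mph * minutes_per_anomaly / 60.0
    return 1.0 / miles_per_anomaly

print(anomalies_per_mile(3.5))  # lab, "1 every 3.5 min": ~0.34/mi (1 per ~2.9 mi)
print(anomalies_per_mile(30))   # lab, "1 every 30 min":  ~0.04/mi (1 per 25 mi)
print(1 / 9.2)                  # this study:             ~0.11/mi
```

So at that assumed speed, the observed tricky-disengagement rate sits between the two artificially elevated lab rates, and far more frequent than the "once every 2 weeks" of real-world systems - consistent with the authors' point that the system is still early.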
 
Did you guys catch Lex Fridman's post today on the subject?


Thanks. I saw Elon retweet this earlier today but this post made me listen. Sounds like HW3 will be awesome - "too bad" I didn't buy it. Elon really does not believe in driver monitoring! He even said it might go to "no more hands required" (presumably only with HW3) in 6 months. :confused: It's going to be an interesting, dangerous journey...
 
So, I got an email back from the Chief of the Autonomous Vehicle Branch of the DMV and he said that "California Autonomous Vehicles Regulations govern the testing and deployment of SAE level 3, 4, and 5 Automated Driving Systems." so @Knightshade is right and I am wrong (bold added in honor of @Knightshade :p).

Seems like complete madness to me but if you read the rules literally that is what they say. It seems like a system can have any feature set and as long as you say that the driver is responsible for monitoring the driving environment then the system is an SAE level 2 system and not subject to CA Autonomous Vehicles Regulations. I still can't reconcile this with their decision not to allow the Uber system to be used in CA when it clearly required a driver to be responsible for monitoring the driving environment both in design and in practice.

I still believe that Tesla will not release what they have described as a level 2 system in California. But for now @Knightshade is correct and anything Tesla wants to release is allowed as long as the driver is responsible for monitoring the driving environment. I'll just have to not do any road biking later this year until they work out the bugs.

P.S. I did ask a followup question but I suspect the DMV will not answer hypotheticals.
 
So, I got an email back from the Chief of the Autonomous Vehicle Branch of the DMV and he said that "California Autonomous Vehicles Regulations govern the testing and deployment of SAE level 3, 4, and 5 Automated Driving Systems." so @Knightshade is right and I am wrong (bold added in honor of @Knightshade :p).

Seems like complete madness to me but if you read the rules literally that is what they say.


Man, if only someone had told you that 30 or 40 times previously (or you'd just read them yourself)!

In all seriousness though, I do appreciate your coming back to admit this - usually folks just go silent when this kinda thing happens - so it at least settles things for anybody else who wasn't sure for some reason.


It seems like a system can have any feature set and as long as you say that the driver is responsible for monitoring the driving environment then the system is an SAE level 2 system and not subject to CA Autonomous Vehicles Regulations.


Well, yes, that's kind of the point.

If the driver is ultimately responsible for being in charge then it's not autonomous by definition. It's just a normal car with some "driver aids"... so in theory it should be safer than a "normal" car without aids (and Tesla's own data suggests that's true - though many don't wish to accept that data)

It's only when you take the human away and leave ALL responsibility with the computer that we're into totally "unknown" safety territory and thus added regulation is needed right now.

I still can't reconcile this with their decision not to allow the Uber system to be used in CA when it clearly required a driver to be responsible for monitoring the driving environment both in design and in practice.

Except it did not clearly require a driver. They just said it did.

See again the reports from your own links, from those who actually rode in Uber's cars, that the system didn't care at all about a driver being there for long periods of time in any way. It not only wasn't checking, it didn't require them to be there AT ALL.

Uber only put them there so they could "claim" they were needed, but they didn't actually need to be.


I still believe that Tesla will not release what they have described as a level 2 system in California. But for now @Knightshade is correct and anything Tesla wants to release is allowed as long as the driver is responsible for monitoring the driving environment. I'll just have to not do any road biking later this year until they work out the bugs.

If you are using marked bike lanes I expect you'd be fine... if you're riding in "regular" car lanes I honestly don't know how well Tesla's system does at detecting bicycles - it seems to do pretty well with motorcycles, but they may offer a better radar signature than a bicycle does.
 
Depends what Elon means exactly here
I literally laughed out loud when he said he expects hands off wheel in six months. It does sound like he’s talking about level 3-5 autonomy and not just level 2 with your hands off the wheel. He’s the most optimistic engineer I’ve ever seen and I would say that engineers are an optimistic bunch.
The interviewer is making the same point that I’ve been trying to make that the experience of using any of the current level 3-5 systems is a level 2 experience because there is still a human monitoring the system.
 
I literally laughed out loud when he said he expects hands off wheel in six months. It does sound like he’s talking about level 3-5 autonomy and not just level 2 with your hands off the wheel. He’s the most optimistic engineer I’ve ever seen and I would say that engineers are an optimistic bunch.
The interviewer is making the same point that I’ve been trying to make that the experience of using any of the current level 3-5 systems is a level 2 experience because there is still a human monitoring the system.


the list of stuff Elon is "expecting" in 6 months that we're still waiting for years later is getting pretty long :)

Even shorter periods he's terrible at - remember, advanced summon was coming in 6 weeks on November 1st last year...
 
If the driver is ultimately responsible for being in charge then it's not autonomous by definition. It's just a normal car with some "driver aids"... so in theory it should be safer than a "normal" car without aids (and Tesla's own data suggests that's true - though many don't wish to accept that data)

I do not think Lex Fridman would agree with you on the italicized part of this one! Even if there are some situations that are safer, whether safety improves overall for a given set of aids may also depend on the ODD. And it's possible it will be less safe the more driver aids you add and the more reliable they get. Even that cherry-picked Tesla data might soon start to show an increase in accident rate as EAP gets better and better...
 
the list of stuff Elon is "expecting" in 6 months that we're still waiting for years later is getting pretty long :)

Even shorter periods he's terrible at - remember, advanced summon was coming in 6 weeks on November 1st last year...

I hope he is right on this, this time - that stock I own could use some pumping. There is a first time for everything; maybe this will turn out to be one of those leaps forward in technology that happen from time to time. ;)
 
Can't read all posts... should we get FSD or not? Wait for the 22nd announcement? Buy before the May 1 cost increase? Thanks. Peace

I think it depends on your expectation of what it is you are purchasing. I recommend not paying much attention to what Elon and Tesla say you will get. Try to set your own realistic expectations. Personally, if I were buying it, I would not expect it to do any driving for me.

If you have the correct expectations and you have $4k or whatever it is burning a hole in your pocket, and it seems worth it to you, it might well be a fun toy to have access to.

I’m not particularly convinced FSD will actually become more expensive with time, even if it is very good. But it might. Depends on what other manufacturers can provide to compete. Tesla might also have a sale in the future, if the capability is there, and they needed a cash infusion for whatever reason.
 
I think it depends on your expectation of what it is you are purchasing. I recommend not paying much attention to what Elon and Tesla say you will get. Try to set your own realistic expectations. Personally, if I were buying it, I would not expect it to do any driving for me.

If you have the correct expectations and you have $4k or whatever it is burning a hole in your pocket, and it seems worth it to you, it might well be a fun toy to have access to.

I’m not particularly convinced FSD will actually become more expensive with time, even if it is very good. But it might. Depends on what other manufacturers can provide to compete. Tesla might also have a sale in the future, if the capability is there, and they needed a cash infusion for whatever reason.

Hey man, thanks for the insightful opinion. At the same time, who gets Performance to get FSD? hahaha... I don't think any manufacturer is close on that, I have to agree. And again, what the hell does FSD mean? I mean, which level is Elon talking about? Level 5?

My AP has AP lane, never tried it... since I mostly drive in the carpool lane... not sure if it knows to differentiate that I am in a carpool lane. As for FSD, it's just opportunity cost, and you are right... so clear he needs $1B NOW, considering 50% of Teslas get the FSD.

Let's see on the 22nd. Are you planning to get it anytime? Just want to get this over with haha... I decided to mostly not dabble with any decisions. But to me the question is ultimately: are we going to get ANY FSD (like no driver input) in the next 10 years?? I just feel that the hardware upgrade alone is worth it... but again, I didn't get my CF spoiler nor the DM emblem yet... let alone an AI MB Chip module installed haha.
 
Hey man, thanks for the insightful opinion. At the same time, who gets Performance to get FSD? hahaha... I don't think any manufacturer is close on that, I have to agree. And again, what the hell does FSD mean? I mean, which level is Elon talking about? Level 5?

My AP has AP lane, never tried it... since I mostly drive in the carpool lane... not sure if it knows to differentiate that I am in a carpool lane. As for FSD, it's just opportunity cost, and you are right... so clear he needs $1B NOW, considering 50% of Teslas get the FSD.

Let's see on the 22nd. Are you planning to get it anytime? Just want to get this over with haha... I decided to mostly not dabble with any decisions. But to me the question is ultimately: are we going to get ANY FSD (like no driver input) in the next 10 years?? I just feel that the hardware upgrade alone is worth it... but again, I didn't get my CF spoiler nor the DM emblem yet... let alone an AI MB Chip module installed haha.

Not planning to buy anytime soon. When the price comes down/when it is on sale, and also actually exists, maybe I will buy then.

I definitely think seeing whether Tesla can at some point master CF spoiler installation is a good litmus test to use to assess their overall competence. That could be the turning point we are all waiting for.

Elon says I bought a horse though, so now I have to reconsider my life choices.

Elon Musk on Twitter
 
Not planning to buy anytime soon. When the price comes down/when it is on sale, and also actually exists, maybe I will buy then.

I definitely think seeing whether Tesla can at some point master CF spoiler installation is a good litmus test to use to assess their overall competence. That could be the turning point we are all waiting for.

Elon says I bought a horse though, so now I have to reconsider my life choices.

Elon Musk on Twitter

I have already overpaid $10k on the 3P+, so I will try to nail a discount for FSD after what I saw. I mean, it might be pointless, as I'm not sure when we will trust EAP and FSD... just today I had a phantom brake on the freeway... glad I was watching, for sure.
 