
HW3 soon moving from emulation to native mode

I think you are not getting how L3 works.... L3 systems must be and are designed so that if the driver doesn't take over there won't be an accident (with the exception of the possibility that eventually the vehicle slowly comes to a stop in lane and then gets rear ended).

L3 needs all of the redundancy that L4 requires.... vehicle actuation, power, sensors, computation, software
I believe that L3 requires the user to take over control of the vehicle in a reasonable amount of time when alerted to do so. It just needs to get into fewer accidents than humans (and I don't think that is even specified by the SAE). So the question is what is the probability of a sensor failure and what is the probability of that failure causing an accident? All sorts of things can fail on a regular car and cause an accident but the brake master cylinder is the only redundant system that I'm aware of.
The steering rack of the Model 3 and the FSD computer do have redundancy so in the event of a sensor failure the car could safely come to a stop while maintaining directional control. Obviously there is a risk of getting rear ended during the time it takes the person behind the wheel to take over. I just think that risk is small enough.
 
It just needs to get into fewer accidents than humans (and I don't think that is even specified by the SAE).

Right

I believe that L3 requires the user to take over control of the vehicle in a reasonable amount of time when alerted to do so.

Yes, that is the expectation of the human and that is the human's responsibility. This is what will be stated in user manuals and the various other terms and conditions the user will need to agree to. And this will be the legal requirement for human drivers using such a system as well.

However, L3 systems still must be, and I am positive every one will be, designed so that an accident won't happen even if the human never takes over... (with the one exception already mentioned)

So the question is what is the probability of a sensor failure and what is the probability of that failure causing an accident? All sorts of things can fail on a regular car and cause an accident but the brake master cylinder is the only redundant system that I'm aware of.

Components have different levels of reliability and ways they can fail.... a sensor, for example, can fail internally, get covered in a splash of mud, get hit by a rock, or suffer a power failure, power connection failure, data connection failure, and more.

Of course it is impossible to make everything in a car redundant, like the tires and axles and windshield.

The steering rack of the Model 3 and the FSD computer do have redundancy so in the event of a sensor failure the car could safely come to a stop while maintaining directional control.

Yes it does.... but it would need to come to a stop in a matter of a few seconds (and even then there is risk of an accident)... coming to a stop on the highway within a few seconds and hoping the driver who is watching TV will safely take over is NOT at all within the acceptable risk range.

But even further... a sensor or other component could fail in the middle of a complex situation... or another car could cut in or suddenly brake shortly after the sensor fails.... Furthermore, even if a sensor doesn't fail, it could for some reason or other be sending faulty data.

Obviously there is a risk of getting rear ended during the time it takes the person behind the wheel to take over. I just think that risk is small enough.

Yes... especially since these systems are designed not to pull over until after 30 seconds or more and may not come to a complete stop until after a minute... and during the time from, say, 10 seconds to 30 seconds the car will be alerting the driver like crazy... loud alerts and vibrations and flashers. (This would really only happen if someone is passed out drunk or has some other medical condition... though SAE L0-L2 is of course no better in these cases.)

^^ This is in the range of acceptable risk for an OEM.
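Just for illustration, here is a rough sketch of the kind of escalation timeline described above; the specific thresholds and actions are my own assumptions, not anything Tesla or another OEM has published.

```python
# Rough sketch of an L3 takeover-request escalation timeline.
# All thresholds/actions below are illustrative assumptions, not real OEM behavior.

def fallback_action(seconds_since_request: float) -> str:
    """What a hypothetical L3 system might do as a takeover request goes unanswered."""
    if seconds_since_request < 10:
        return "visual takeover request on the cluster"
    if seconds_since_request < 30:
        return "escalating alerts: loud chimes, seat/wheel vibration, hazard flashers"
    if seconds_since_request < 60:
        return "minimal-risk maneuver: slow down and pull over or stop in lane"
    return "stopped with hazards on; notify roadside assistance"

if __name__ == "__main__":
    for t in (5, 15, 45, 90):
        print(f"t = {t:>2}s: {fallback_action(t)}")
```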

All of this said... I do not see hardware redundancy as a main factor stopping Tesla from releasing L3.... I also don't think Tesla is currently working towards that goal.
 
But this doesn't have anything to do with L3... Tesla has never made any public statements about making Autopilot L3 or anything about L3. And right now they are not even working towards that.

I realize that. In fact, Tesla has only talked about L5. But I did not think asking how close the rewrite will get to L5 would be a realistic question, especially considering that Tesla has not demonstrated any autonomous driving yet. So I asked about L3 since it is the first level of true autonomous driving and it still requires the driver as fallback. So I thought L3 would be a more realistic benchmark.
 

Reasonable thought process. Though I don't think that benchmark makes sense.

Tesla will definitely be releasing door to door autonomy and iteratively improving that for a long time, before even attempting to develop something they can release as L3 in any situation
 

True. I realize that is what Tesla is aiming for. We shall see what they do. Again, considering that Tesla has not given us "feature complete" with driver supervision yet, it seems like a rather lofty goal to be talking about door to door autonomy. Let's see if the new rewrite can get to "feature complete" with driver supervision first.
 

Entirely agree... I expect that to take a long, long time for it to be fully released.... but I still see it as a closer goal than any kind of L3.
 
I have a completely unsubstantiated hunch that traffic cones, trash cans, lights, road markings and stop signs are already using the 3D labeling. They just all behave differently on the IC than moving and stationary vehicles do. Like I said, it's just a hunch and I have no other insight, but there is something about them, and the handoff between cameras, that is much smoother and maybe more reliable than the display of stationary vehicles, and oncoming vehicles for that matter. I may be wrong but there is certainly something different to me...
 

3D labeling? You mean the video labeling Elon was talking about?

Well, I agree; I think the trash cans and traffic cones etc. do behave differently than the cars and other moving objects, but I think that is just because it is a different network.
 
Yes it does.... but it would need to come to a stop in a matter of a few seconds (and even then there is risk of an accident)... coming to a stop on the highway within a few seconds and hoping the driver who is watching TV will safely take over is NOT at all within the acceptable risk range.
Why would it need to come to a stop in a few seconds? That would probably be dangerous on a highway. Keep in mind that computers have perfect memory. The car would already have a path planned out from before the camera failure. Obviously something about the scene could change while the car is blind but it just seems acceptably improbable to me.
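To illustrate the "path already planned out" idea, here is a hypothetical sketch; the names and structure are my own invention, not Tesla's actual planner.

```python
# Hypothetical sketch: cache the last valid trajectory so the car can follow it
# briefly, at reduced speed, if a camera drops out. Illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Waypoint:
    x: float      # meters ahead of the car
    y: float      # meters of lateral offset
    speed: float  # target speed in m/s

class DegradedModePlanner:
    def __init__(self) -> None:
        self._last_plan: Optional[List[Waypoint]] = None

    def update(self, plan: List[Waypoint]) -> None:
        """Called every cycle while perception is healthy."""
        self._last_plan = plan

    def on_camera_failure(self) -> List[Waypoint]:
        """Reuse the cached path, bleeding off speed, until the driver takes over."""
        if not self._last_plan:
            return [Waypoint(0.0, 0.0, 0.0)]  # nothing cached: stop in place
        return [Waypoint(w.x, w.y, max(w.speed - 2.0, 0.0)) for w in self._last_plan]
```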
But even further... a sensor or other component could fail in the middle of a complex situation... or another car could cut in or suddenly brake shortly after the sensor fails.... Furthermore, even if a sensor doesn't fail, it could for some reason or other be sending faulty data.
Complex situations are rare and sensor failures are also rare. The chances of both happening simultaneously are infinitesimally small. Think about a random ten-second window of a normal drive. I bet 99.9% of them could be safely handled after losing a camera.
All of this said... I do not see hardware redundancy as a main factor stopping Tesla from releasing L3.... I also don't think Tesla is currently working towards that goal.
I agree. This discussion is all sort of silly since I don't think Tesla will achieve L3-5 with the current hardware even without sensor failures.
 
Why would it need to come to a stop in a few seconds? Obviously something about the scene could change while the car is blind but it just seems acceptably improbable to me.

Complex situations are rare and sensor failures are also rare. The chances of both happening simultaneously are infinitesimally small. Think about a random ten-second window of a normal drive. I bet 99.9% of them could be safely handled after losing a camera.

It is not likely, and it is rare... and most of the time it would be fine.... but 99.9% is not nearly good enough and certainly is not in the range of acceptable risk for an automaker. Imagine this.... say every human driver on the road, everywhere, for 1 out of every 1000 minutes on the road kept their eyes closed continuously... or looked down at their phone continuously without looking up.

Even on the highway or in a traffic jam.... it's still well within the realm of possibility that the car in front slows down or brakes for one reason or another.
 
for 1 out of every 1000 minutes on the road kept their eyes closed continuously... or looked down at their phone continuously without looking up.
I know we're talking about Tesla here, but I expect that their cameras have a failure rate of far less than once per thousand minutes. I've seen far more reports here of drive unit failures (which could also cause an accident!) than I have of camera failures. Say the cameras have a failure rate of 1 per 100,000 hours and that failure results in an accident 0.1% of the time; at an average speed of 30 mph, that would be an accident every 3 billion miles, which is perfectly acceptable.
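For what it's worth, here are those same back-of-envelope numbers written out; the failure rate and the 0.1% figure are the assumptions from the paragraph above, not measured data.

```python
# Back-of-envelope check of the numbers above. Inputs are assumptions, not data.
camera_mtbf_hours = 100_000          # assumed: one camera failure per 100,000 hours
p_accident_given_failure = 0.001     # assumed: 0.1% of failures lead to an accident
avg_speed_mph = 30

miles_per_failure = camera_mtbf_hours * avg_speed_mph              # 3,000,000 miles
miles_per_accident = miles_per_failure / p_accident_given_failure  # 3,000,000,000 miles
print(f"{miles_per_accident:,.0f} miles per camera-failure-induced accident")
```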
 

You make good points.

I guess I still feel both of those numbers are a little more optimistic than what I would guess. A camera could fail for a variety of reasons, including things like temporary occlusion... and I feel the accident rate would be closer to 1% when there is a critical camera failure. However, of course I am just guessing, and even if the numbers are shifted in my favor that is still a very low potential accident rate.
 

The biggest problem is we don't know when the cameras fail. One of the reasons I pointed out the rear camera and its failures is because we know about it. We know about it since I'm sure we've all seen it fail from time to time: the whole black-screen thing that might take a few seconds to clear up. Sure, we don't know where in the pipeline it failed, but we know it failed.

It doesn't fail in a way that requires removal, but it does fail in temporary ways: usually just a brief glitch, sometimes longer.

I tend to be pretty against an all-camera solution regardless of the manufacturer, because my entire history with digital cameras is filled with them not working even 99.9% of the time.

The latest example for me is my iPhone camera, which will occasionally be all black. Nothing short of rebooting the phone gets it to work again.

In my Tesla I look forward to being able to use the other cameras in live mode, like Elon said we would be able to in a recent Twitter reply to that request. That way I'd at least know how solid the other cameras really are. One might say to use Sentry Mode video recording to judge, but I've seen numerous failures with that, where people blame the write speed of the USB drive.
 
No, that was your conclusion. I don't see why the system can't be L3 and provide 5 seconds of notification to facilitate handover. If the driver is asleep, well, that's their problem, and that case isn't really supported by L3 anyway.

I was asking that specifically of diplomat to check my own recollection of a previous conversation where the driver monitoring aspect of L3 was discussed. I thought that he specifically was on board with those of us who feel that Tesla not only can't do L3, but that they don't even want to do L3. Apparently I was mistaken, because he was blissfully unaware of it.

This whole conversation seems a bit like we're trying to force something we want on a system that simply isn't intended to do it.

I'm not an exception; months ago I was hoping that FSD would accomplish at least L3, and I really felt it could. That was until Blader ruined that with the driver monitoring stuff. I haven't been able to find a way around it that would really work.

The simple fact is that the view "if the driver is asleep, well, that's their problem" is completely irresponsible. We know the psychology of humans, and what happens to a significant percentage of them when they're not in control.

If you and I were designing an L3 system, I'm sure you'd allow me to put in an effective driver monitoring system. They're not that expensive, and they're useful well beyond L3 driving.

Tesla doesn't have an effective driver monitoring system because it's not intended to do L3 driving. It's intended to go straight from L2 to L4, and you don't need it with L4.
 
Lmao... that could be a reason, but far from the only reason

Could be a reason? Ha

The lack of driver monitoring is the easiest limitation to explain to non-technical people, or people who aren't intimately familiar with the entire sensor imaging pipeline in a system like a Tesla.

Most of us know that person who almost instantly falls asleep in a Tesla. We've also all seen videos of people sleeping while driving a Tesla even with the current torque-based sensor, where they either defeated it with a steering wheel weight or just happened to apply enough torque to satisfy it.

There have been plenty of studies done showing that people do tend to fall asleep when they're not asked to do something.

So it's not a question of will people fall asleep at the wheel, but what will happen when they do.

An L3 system is allowed to simply come to a stop in the lane with the hazards on if the driver doesn't take over. This would be rather disastrous for Tesla if 1 in 100 people fell asleep on a journey somewhere and the car simply stopped in the lane because they failed to take over.

That itself would be all over the news simply for the annoyance of it. We're talking about millions of vehicles that it would operate on (assuming it happens a couple of years from now).
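To put that scale argument in rough numbers (all figures below are hypothetical, just to show the order of magnitude):

```python
# Purely hypothetical figures to show why even a 1-in-100 sleep rate is a problem at fleet scale.
fleet_size = 2_000_000              # assumed vehicles with the feature enabled
trips_per_vehicle_per_day = 1       # assumed one L3-eligible trip per day
p_never_takes_over = 0.01           # the 1-in-100 figure from the post above

in_lane_stops_per_day = fleet_size * trips_per_vehicle_per_day * p_never_takes_over
print(f"~{in_lane_stops_per_day:,.0f} cars stopped in a live lane every day")  # ~20,000
```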

But the bigger problem might be the lack of situational awareness upon waking up. It takes people quite a few seconds to regain their composure and alertness. They simply can't transition in 5 seconds or less.
 
Here is the SAE doc if you want to check yourself.

Thanks.

I'm not seeing any specific mention of driver monitoring in that document. So I'm not sure what Blader is referring to.

The document consistently mentions "DDT fallback-ready user", and to me the only way to achieve that is through driver monitoring. It's not that driver monitoring always guarantees a fallback-ready user, but it can detect drowsiness and use that to nag the user before they fall asleep. So the car essentially won't let the driver sleep.
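As a sketch of what I mean, here is how camera-based monitoring might escalate nags to keep a fallback-ready user; the thresholds and the drowsiness score are invented purely for illustration.

```python
# Illustrative only: a drowsiness score (0 = alert, 1 = asleep) and an eyes-off-road
# timer drive escalating nags so the driver never actually falls asleep.
def monitoring_action(drowsiness: float, eyes_off_road_s: float) -> str:
    if drowsiness > 0.8 or eyes_off_road_s > 10:
        return "request takeover and prepare fallback maneuver"
    if drowsiness > 0.5 or eyes_off_road_s > 5:
        return "loud chime plus seat vibration"
    if drowsiness > 0.3:
        return "visual 'stay alert' prompt"
    return "no action"

print(monitoring_action(drowsiness=0.6, eyes_off_road_s=2.0))  # loud chime plus seat vibration
```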

It's also important to point out that the document is simply a framework, not a requirement. Actual requirements will come from regulators, and it looks like the European Union will require driver monitoring pretty soon regardless of whether a system is L1/L2/L3.

Europe Is Poised For A Revolution In Road Safety While The U.S. Lags Behind
 
It's also important to point out that the document is simply a framework, not a requirement. Actual requirements will come from regulators, and it looks like the European Union will require driver monitoring pretty soon regardless of whether a system is L1/L2/L3.
It is a requirement to register an AV in California. There are of course additional requirements. I wouldn't be surprised if the NHTSA eventually requires driver monitoring on L2 and L3 systems.
(b) “Autonomous vehicle” means any vehicle equipped with technology that is a combination of both hardware and software that has the capability of performing the dynamic driving task without the active physical control or monitoring of a natural person, excluding vehicles equipped with one or more systems that enhance safety or provide driver assistance but are not capable of driving or operating the vehicle without the active physical control or monitoring of a human. For the purposes of this article an “autonomous vehicle” meets the definition of levels 3, 4, or 5 of the SAE International’s Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, Standard J3016 (SEP2016), which is hereby incorporated by reference.
https://www.dmv.ca.gov/portal/wcm/c...essAV_Adopted_Regulatory_Text.pdf?MOD=AJPERES