
Impossibility of L5 using any level of new hardware

If it only has to be better than an average human driver then all the extremely rare corner cases are not so important.
The law makes great allowances for the fallibilities of human beings, because...well it has to, that's all there is for now. The law will not make such allowances for AI systems operated by corporations worth tens of billions. I saw Elon talk about the system needing to be two or three hundred times safer than a human. He thinks that will suffice for regulatory approval I surmise. I tend to agree. My own view was that Tesla will need to demonstrate, very conclusively indeed, that their system is two orders of magnitude safer than a human. I take that to mean an average human. If so, that looks very achievable to me, but it will take time to gather the evidence from millions of miles driven by FSD operating as a "driver aid" legally (which is how it will initially operate). Assuming it really is that safe out of the box, that will take a minimum of two years, before they can even enter the law change process. That can take a very long time, even without political and partisan objections. Imagine the objections by unions representing professional drivers, supported by Big Oil and the legacy car companies. I reckon European countries without car industries (Switzerland?) might go first, not the US.
 
Current traffic codes prohibit human drivers from being uninvolved in the driving process, so to speak.

This is true in many states- but not all of them.

In fact some states already allow the car to drive on its own, so long as it is capable of obeying all existing traffic laws and rules of the road.

In other states it can only drive itself under very specific conditions and/or with prior state approval

In other states there's no laws on the books at all to cover self driving.


It is the random, patchwork nature of state laws that is the problem in the US: Tesla would need to turn individual driving features/levels on or off every time you cross a state line, which would be a nightmare (and potentially dangerous).

The ideal is the NHTSA coming up with federal rules that cover the whole US (and similar wider bodies in regions, like the EU, doing so) so you have far fewer bodies needing convincing.



Mind you, virtually none of these issues prevents Tesla from rolling out most FSD features.

It would just require them to keep those features Level 2 (that is, it still REQUIRES the human to be paying attention and in control of the vehicle at all times). That's legal today in all 50 states.
 
They have an (economic) incentive not to tell the complete truth. Not blaming them.
No, that's not how it works. Unless you are legitimately seeing something that the people working on it do not, they are pursuing what they see as the best way to get where they need to go. Sure, they may be downplaying difficulties, but if they honestly thought the same way you do and didn't think they'd be able to make it work, they would have changed direction as soon as they realized this.
 
The law makes great allowances for the fallibilities of human beings, because...well it has to, that's all there is for now. The law will not make such allowances for AI systems operated by corporations worth tens of billions.

If a human kills someone there can be jail time for negligence. Since corporations can't go to jail and liability is shared among staff, the equivalent is big fines, and lawsuits worth hundreds of millions if negligence can be shown.

Tesla need to be extremely careful to make sure they have dotted every I and crossed every T, because when someone does inevitably sue them they will have to show that they made every reasonable effort to avoid it. They won't have the "driver must be paying attention, it's their fault" get-out.

That's one reason why Waymo has gone so slowly. They were demonstrating tech way beyond what Tesla has today back in 2015, but getting regulatory approval and getting the legal safeguards in place has taken this long.
 
The law makes great allowances for the fallibilities of human beings, because...well it has to, that's all there is for now. The law will not make such allowances for AI systems operated by corporations worth tens of billions. I saw Elon talk about the system needing to be two or three hundred times safer than a human. He thinks that will suffice for regulatory approval I surmise. I tend to agree. My own view was that Tesla will need to demonstrate, very conclusively indeed, that their system is two orders of magnitude safer than a human. I take that to mean an average human. If so, that looks very achievable to me, but it will take time to gather the evidence from millions of miles driven by FSD operating as a "driver aid" legally (which is how it will initially operate). Assuming it really is that safe out of the box, that will take a minimum of two years, before they can even enter the law change process. That can take a very long time, even without political and partisan objections. Imagine the objections by unions representing professional drivers, supported by Big Oil and the legacy car companies. I reckon European countries without car industries (Switzerland?) might go first, not the US.
Things are different here in the US. Waymo could start operating their robo taxis in Arizona today without a safety driver if they felt they were safe enough. In California it does not appear that the regulations specify how safe an autonomous vehicle must be. Personally I think the public will accept safety only a few times greater than a human. Tesla's fleet is so large it seems like it would take very little time to prove that.
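As a rough sanity check on how much driving that proof would take: with zero (or very few) observed events, the miles needed scale with the claimed safety factor divided by the baseline event rate. Here is a minimal back-of-envelope sketch in Python; the baseline rates are illustrative assumptions rather than official figures, and the math is just a standard zero-event Poisson bound, not anything Tesla or regulators actually use.

import math

def miles_needed(baseline_rate_per_mile: float, safety_factor: float,
                 confidence: float = 0.95) -> float:
    """Crash-free miles needed to show, at the given confidence, that the
    system's event rate is at least `safety_factor` times lower than baseline.

    With zero observed events in M miles, the one-sided upper bound on the
    Poisson rate is -ln(1 - confidence) / M, so we solve for the M at which
    that bound equals baseline_rate / safety_factor.
    """
    return -math.log(1 - confidence) * safety_factor / baseline_rate_per_mile

# Illustrative (assumed) baselines: ~1 police-reported crash per 500k miles,
# ~1 fatal crash per 100 million miles for an average US driver.
crash_rate = 1 / 500_000
fatal_rate = 1 / 100_000_000

print(f"2x safer, ordinary crashes:  {miles_needed(crash_rate, 2):,.0f} miles")
print(f"100x safer, ordinary crashes: {miles_needed(crash_rate, 100):,.0f} miles")
print(f"100x safer, fatalities:       {miles_needed(fatal_rate, 100):,.0f} miles")

Under those assumed numbers, "a few times safer" on ordinary crashes is provable with a few million crash-free miles, whereas a 100x claim on fatalities needs tens of billions of miles, which is exactly where a fleet the size of Tesla's matters.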
 
You mean like riding a bus, taxi, or with my wife on a daily basis :) I don't really find being a passenger that big a deal.

I've heard that answer before but it's the first time it triggered a thought. The difference when you have a human driver is that you can talk (or yell) when there is something they missed. Maybe we just need to make sure the autonomous car can respond to voice commands (stop!, pull over, slow down, move over to the right part of the lane, careful around the potholes, watch that red car -- the driver looks drunk, whatever...).

I'm not sure how helpful that would be for the car, but it might make the passengers more comfortable.
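To make the idea concrete, a minimal sketch of such a command layer might look like the Python below. Everything here is hypothetical: the phrase list, the Request names, and the notion that the planner treats these purely as cautious suggestions are my assumptions, not any real vehicle API.

from enum import Enum, auto
from typing import Optional

class Request(Enum):
    STOP = auto()        # come to a controlled stop when safe
    PULL_OVER = auto()   # find a safe place to pull over
    SLOW_DOWN = auto()   # reduce the target speed
    KEEP_RIGHT = auto()  # bias lane position toward the right

# Hypothetical phrase -> request table; a real system would need speech
# recognition plus intent classification, not exact substring matches.
PHRASES = {
    "stop": Request.STOP,
    "pull over": Request.PULL_OVER,
    "slow down": Request.SLOW_DOWN,
    "move right": Request.KEEP_RIGHT,
}

def interpret(utterance: str) -> Optional[Request]:
    """Map a passenger phrase to a conservative request, or None.

    Every request here only makes the car more cautious; the planner
    would still be free to ignore a request it judges unsafe.
    """
    text = utterance.lower().strip()
    for phrase, request in PHRASES.items():
        if phrase in text:
            return request
    return None

print(interpret("Careful -- slow down!"))  # Request.SLOW_DOWN

The key design choice in a sketch like this is that every command can only make the car more conservative, so a misheard phrase can't make it do anything risky.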
 
I've heard that answer before but it's the first time it triggered a thought. The difference when you have a human driver is that you can talk (or yell) when there is something they missed. Maybe we just need to make sure the autonomous car can respond to voice commands (stop!, pull over, slow down, move over to the right part of the lane, careful around the potholes, watch that red car -- the driver looks drunk, whatever...).

I'm not sure how helpful that would be for the car, but it might make the passengers more comfortable.

If you look at public transit, buses, and taxis, the vast majority of people don't pay attention at all. Even when travelling in countries where driving is super aggressive, most people just white-knuckle it in silence. I think Elon's timeline is laughable, but the day will come when people won't even look up from their phone.
 
Ok, better than a human driver is enough in theory, right? Now let's consider the psychological side of it.

Let's say you're riding along in your autonomous car (no steering wheel).
And you see the accident coming, but there is nothing you can do... I bet it will take you a while before you step into one of these cars...

Um, you mean one of the millions of human-caused accidents every year??? What scares me is driving along and seeing the kid in the next car texting/smoking/drinking and driving... Autopilot is already better than most humans!

In the near future, I hope only expert drivers (proven with a rigorous test process) are allowed to have steering wheels...
 
The fear of accidents is certainly understandable. But if you dissect that fear, it tends to come from fear of personal (or your family's) bodily harm, financial burden and headache and possibly liability, and probably lastly, causing harm to others.

If an accident is merely an inconvenience that you walk away from and hail another Robo, in which nobody is seriously harmed because of the system's ability to at least reduce collision severity, and from which you have no resulting headaches to deal with (insurance, repair, etc.)... it kind of becomes less frightening.

"Yeah my Robo hit another car on the way to work. Sorry I'm 10 minutes late..."

This is a really valuable and important thought. It helps me understand non-ownership too, which has been a really difficult historic concept for me.
 
My 90 y/o father-in-law just passed a driving test. I promise you the CURRENT autonomy is VASTLY superior to him in every way right now.

Having said that I am very skeptical we'll have FSD in 2020.

On a long interstate drive (let’s say 600+ miles per day for several consecutive days), I suspect Autopilot is already noticeably better than any human driver.
 
Not knowing is not an incentive. Tesla is obviously trying to explain something to investors and customers, on the assumption that we can understand it, and that includes being critical of what they are saying. Otherwise, the only option is to listen to Tesla and be excited all the time about whatever they say.
 
If it only has to be better than an average human driver then all the extremely rare corner cases are not so important.

The trouble is that we are good precisely at corner cases, because we have ingenuity, creativity, and an understanding of the physical world around us. Robots will be good at the boring and tedious stuff, which is where we need help (we tire and are easily distracted), but not the corner cases.

For example, imagine someone walks up alongside a fairly busy road wearing a stop sign t-shirt. Humans will obviously see that it's a troll and correctly ignore them, no training needed. A robot might inadvertently slam on the brakes and get rear-ended. This is obviously a contrived example, but it illustrates a point. Basically, accidents might be rarer overall, but they'll be extremely bizarre and, to a human observer, seem downright stupid. There are just too many crazy corner cases to train for; in fact many corner cases will be too unique to even get sufficient training data in the first place.

This property of autonomy - that it'll make really dumb mistakes compared to people - will make it difficult for people to trust the machine for a long time to come.

The one consolation is that the car just needs to be safe in those cases, which is much easier to achieve. If people aren't liable for the dumb mistakes (e.g. in a Tesla-operated fleet), that could certainly boost people's tolerance. But there will be a big backlash from individual owners when they get lulled by the 99.999% reliability only to suddenly get into an accident because the NN misinterpreted something (remember when the autowipers would go crazy in tunnels? Imagine if instead the whole car suddenly went crazy in some novel situation the NN authors hadn't anticipated).
 
Btw, Musk addressed this 'troll' scenario, saying that a NN can be trained to filter out crazy stuff (check his interview with the MIT guy). The problem, again, is that it's a 'feature' someone (a human) has to implement. And it goes on and on.
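For a sense of what that kind of filtering could even mean in practice, here is a deliberately toy illustration in Python. The detection fields, thresholds, and the whole rule-based framing are assumptions made up for this example; it is not how Tesla's NN works, and it rather proves the point above that a human still has to decide to add each such check.

from dataclasses import dataclass

@dataclass
class StopSignDetection:
    confidence: float                   # classifier confidence, 0..1
    height_above_road_m: float          # estimated mounting height
    speed_relative_to_road_mps: float   # ~0 for a fixed, mounted sign
    attached_to_pedestrian: bool        # from object association

def is_plausible_stop_sign(d: StopSignDetection) -> bool:
    """Reject detections that don't look like a real, mounted stop sign.

    A printed sign on a t-shirt moves with a person and sits at torso
    height, so simple context checks filter it out; the thresholds here
    are made up for illustration.
    """
    if d.confidence < 0.8:
        return False
    if d.attached_to_pedestrian:
        return False
    if abs(d.speed_relative_to_road_mps) > 0.5:   # real signs don't walk
        return False
    return 1.5 <= d.height_above_road_m <= 3.5    # typical mounting height

troll = StopSignDetection(0.93, 1.2, 1.4, attached_to_pedestrian=True)
print(is_plausible_stop_sign(troll))  # False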
 
In corner cases the main thing is to just stop as quickly as possible. No swerving or deciding if it's going to sacrifice the driver or the flock of nuns, just apply braking and come to a halt as fast as possible.
 
In corner cases the main thing is to just stop as quickly as possible. No swerving or deciding if it's going to sacrifice the driver or the flock of nuns, just apply braking and come to a halt as fast as possible.

Except for the times when stopping would put you in the middle of the problem and the solution is to accelerate away. e.g. The robotaxi starts crossing an intersection at a 4-way stop sign, and then notices a driver on the cross-road blowing their stop sign. If the robotaxi stops, it gets t-boned -- it needs to get out of the way instead.
 
I personally think FSD will be here quite soon and will be an awesome feature. I know a lot about AI and NNs and was extremely impressed with the approach Tesla is taking.

But I am against the idea of getting rid of the wheel and pedals. I would always want the option to take control for a couple of reasons:

1) Human instinct should not be underrated. Sometimes you can just sense that a situation is dangerous and you want to be able to make driving decisions that a general AI would never make.

2) I like to drive. Part of the appeal of a Model 3 is actually driving it. I get pleasure from driving a performance car and I don't want to lose that. I could see using FSD 90% of the time but I want the option to take it out to a winding road and have fun.
This is very much my attitude. I expect FSD (with some limitations - see below) to be with us within 2 years and am looking forward to that, so when I am driving on business I can keep up with work on the go and relax on arrival, rather than spending all evening on emails after a long drive. I don't want the steering wheel to disappear. There will be occasions I want to drive manually, but it will be wonderful to have the choice depending on the situation.

From a practical point of view, a couple of times this winter I have had to take over from Autopilot because the sensors had become covered by ice in driving snow. Therefore manual override should always be possible, even if rarely used. Also, in England, where I live, there are lots of narrow roads with passing spots. Human drivers gesture to each other to decide who goes first. I don't know how that could possibly work with FSD, and on those single-track roads I expect to continue driving manually indefinitely.
 
Except for the times when stopping would put you in the middle of the problem and the solution is to accelerate away. e.g. The robotaxi starts crossing an intersection at a 4-way stop sign, and then notices a driver on the cross-road blowing their stop sign. If the robotaxi stops, it gets t-boned -- it needs to get out of the way instead.

Why would it stop in that example? It can see the other car, and it knows that if it keeps going it won't get hit, because it can see the other vehicle's direction of travel.

It's only in situations where an accident is unavoidable, or for some reason it can't determine what to do (e.g. cameras covered with mud or obscured by smoke), that it will stop.
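To tie the last few posts together, the "stop vs. accelerate away" choice can be framed as comparing time margins. The sketch below is a toy Python illustration under my own simplifying assumptions (the three times are handed in from some upstream prediction step, and nothing else is considered); a real planner is far more involved than this.

def choose_maneuver(intruder_time_to_conflict_s: float,
                    time_to_stop_short_of_conflict_s: float,
                    time_to_clear_conflict_s: float) -> str:
    """Pick the option that resolves the conflict with the most margin.

    All three times are assumed to come from upstream prediction/planning;
    this function only compares them. A real system weighs far more factors.
    """
    margin_stop = intruder_time_to_conflict_s - time_to_stop_short_of_conflict_s
    margin_clear = intruder_time_to_conflict_s - time_to_clear_conflict_s

    if margin_stop <= 0 and margin_clear <= 0:
        return "brake hard"  # no clean option; minimize impact energy
    return "stop short" if margin_stop >= margin_clear else "accelerate and clear"

# Already halfway across when the other car blows the stop sign:
print(choose_maneuver(intruder_time_to_conflict_s=1.5,
                      time_to_stop_short_of_conflict_s=3.0,  # can't stop clear in time
                      time_to_clear_conflict_s=1.0))         # but can drive clear
# -> "accelerate and clear"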