The people working on this, who probably know a lot more about it than you do, don't see these limitations.
They have an economic incentive not to tell the complete truth. I'm not blaming them.
The law makes great allowances for the fallibilities of human beings, because...well, it has to; that's all there is for now. The law will not make such allowances for AI systems operated by corporations worth tens of billions. I saw Elon talk about the system needing to be two or three hundred times safer than a human; he thinks that will suffice for regulatory approval, I surmise. I tend to agree. My own view is that Tesla will need to demonstrate, very conclusively indeed, that their system is two orders of magnitude safer than a human. I take that to mean an average human. If so, that looks very achievable to me, but it will take time to gather the evidence from millions of miles driven by FSD operating legally as a "driver aid" (which is how it will initially operate). Assuming it really is that safe out of the box, that will take a minimum of two years before they can even enter the law-change process. That can take a very long time, even without political and partisan objections. Imagine the objections from unions representing professional drivers, supported by Big Oil and the legacy car companies. I reckon European countries without car industries (Switzerland?) might go first, not the US.
Current traffic codes prohibit human drivers from being uninvolved in the driving process, so to speak.
No, that's not how it works. Unless you are legitimately seeing something that the people working on it do not, they are pursuing what they see as the best way to get where they need to go. Sure, they may be downplaying difficulties, but if they honestly thought the same way that you do and didn't think they'd be able to make it work, they would have changed direction as soon as they realized this.
The law makes great allowances for the fallibilities of human beings, because...well it has to, that's all there is for now. The law will not make such allowances for AI systems operated by corporations worth tens of billions.
Things are different here in the US. Waymo could start operating their robotaxis in Arizona today without a safety driver if they felt they were safe enough. In California, it does not appear that the regulations specify how safe an autonomous vehicle must be. Personally, I think the public will accept safety only a few times greater than a human's. Tesla's fleet is so large it seems like it would take very little time to prove that.
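To put rough numbers on the "two orders of magnitude safer" idea, here is a back-of-envelope sketch. The human fatal-crash rate, the fleet mileage, and the rule-of-three method are my assumptions for illustration, not figures from this thread:

```python
# Back-of-envelope: how many crash-free miles would demonstrate an
# autonomous system is ~100x safer than an average human driver?
# Assumption: US human fatal-crash rate of roughly 1 per 100 million miles.
human_rate = 1 / 100_000_000          # fatal crashes per mile (assumed)

target_rate = human_rate / 100        # "two orders of magnitude safer"

# Rule of three: observing zero events over N miles bounds the true rate
# below ~3/N at 95% confidence, so we need N ≈ 3 / target_rate miles.
miles_needed = 3 / target_rate
print(f"Crash-free miles needed: {miles_needed:.0e}")

# At an assumed 1 billion fleet miles per year on FSD:
years = miles_needed / 1_000_000_000
print(f"Years at 1B miles/year: {years:.0f}")
```

The takeaway is that proving a 100x improvement on *fatal* crashes alone would take decades of data, which is why any real demonstration would lean on far less rare proxy events (any collision, airbag deployments) where Tesla's large fleet racks up statistically useful counts much faster.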
You mean like riding a bus, a taxi, or with my wife on a daily basis? I don't really find being a passenger that big a deal.
I've heard that answer before, but it's the first time it triggered a thought. The difference when you have a human driver is that you can talk (or yell) when there is something they missed. Maybe we just need to make sure the autonomous car can respond to voice commands ("Stop!", "Pull over", "Slow down", "Move over to the right part of the lane", "Careful around the potholes", "Watch that red car -- the driver looks drunk", whatever...).
I'm not sure how helpful that would be for the car, but it might make the passengers more comfortable.
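For what it's worth, the simplest version of that idea is a whitelist mapping recognized phrases to maneuvers the planner already supports. A toy sketch; every command name and intent string here is made up for illustration, not any real vehicle API:

```python
# Toy dispatcher mapping recognized passenger phrases to planner intents.
# Everything here (phrases, intent names) is hypothetical -- a real system
# would feed a constrained intent into the planning stack, never let
# speech directly actuate controls.

SAFE_COMMANDS = {
    "stop": "emergency_stop",
    "pull over": "pull_over_when_safe",
    "slow down": "reduce_speed",
    "move right": "bias_right_in_lane",
}

def handle_utterance(text: str) -> str:
    """Return the planner intent for an utterance, or ignore it."""
    phrase = text.lower().strip("!. ")
    # Only whitelisted phrases become intents; anything else is a no-op,
    # so a chatty (or panicking) passenger can't destabilize the car.
    return SAFE_COMMANDS.get(phrase, "no_op")

print(handle_utterance("Pull over!"))         # -> pull_over_when_safe
print(handle_utterance("Watch that red car")) # -> no_op
```

The design choice matters for the comfort point above: commands like "slow down" are requests the car may honor, not overrides, so the passenger gets a sense of agency without the system trusting speech recognition with safety-critical control.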
OK, so better than a human driver is enough in theory, right? Now let's consider the psychological side of it.
Let's say you're riding along in your autonomous car (no steering wheel).
You see the accident coming, but there is nothing you can do... I bet it will take you a while before you step into one of these cars...
The fear of accidents is certainly understandable. But if you dissect that fear, it tends to come from fear of bodily harm to you or your family, then financial burden, headache, and possibly liability, and probably lastly, causing harm to others.
If an accident is merely an inconvenience that you walk away from to hail another Robo, in which nobody is seriously harmed because of the system's ability to at least reduce collision severity, and from which you have no resulting headaches to deal with (insurance, repair, etc.), then it kind of becomes less frightening.
"Yeah my Robo hit another car on the way to work. Sorry I'm 10 minutes late..."
My 90-year-old father-in-law just passed a driving test. I promise you the CURRENT autonomy is VASTLY superior to him in every way right now.
Having said that I am very skeptical we'll have FSD in 2020.
If it only has to be better than an average human driver then all the extremely rare corner cases are not so important.
In corner cases the main thing is to just stop as quickly as possible. No swerving or deciding if it's going to sacrifice the driver or the flock of nuns, just apply braking and come to a halt as fast as possible.
This is very much my attitude. I expect FSD (with some limitations) to be with us within two years and am looking forward to that, so when I am driving on business I can keep up with work on the go and relax on arrival, rather than spending all evening on emails after a long drive. I don't want the steering wheel to disappear. There will be occasions when I want to drive manually, but it's wonderful to have the choice depending on the situation.

I personally think FSD will be here quite soon and will be an awesome feature. I know a lot about AI and NNs and was extremely impressed with the approach Tesla is taking.
But I am against the idea of getting rid of the wheel and pedals. I would always want the option to take control for a couple of reasons:
1) Human instinct should not be underrated. Sometimes you can just sense that a situation is dangerous and you want to be able to make driving decisions that a general AI would never make.
2) I like to drive. Part of the appeal of a Model 3 is actually driving it. I get pleasure from driving a performance car and I don't want to lose that. I could see using FSD 90% of the time but I want the option to take it out to a winding road and have fun.
Except for the times when stopping would put you in the middle of the problem and the solution is to accelerate away. E.g., the robotaxi starts crossing an intersection at a four-way stop, and then notices a driver on the cross street blowing their stop sign. If the robotaxi stops, it gets t-boned; it needs to get out of the way instead.
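These two posts actually describe one policy with a single branch: brake hard by default, but escape forward when stopping would leave the car inside the threat's path. A simplified sketch; the time-based threat model and all numbers are invented for illustration:

```python
# Simplified corner-case policy: maximum braking by default, accelerate
# clear only when stopping would leave us inside the threat's path.
# The threat model (times to impact / to clear) is an illustrative
# assumption, not how any real planner represents this.

def choose_action(in_conflict_zone: bool,
                  time_to_impact_s: float,
                  time_to_clear_by_accelerating_s: float) -> str:
    if not in_conflict_zone:
        # Default rule from the thread: no swerving, no trolley-problem
        # deliberation -- just brake and come to a halt.
        return "brake_max"
    if time_to_clear_by_accelerating_s < time_to_impact_s:
        # Stopping here means getting t-boned; accelerating clears the
        # conflict zone before the other car arrives.
        return "accelerate_clear"
    # Can't clear in time either way; braking minimizes impact energy.
    return "brake_max"

# Mid-intersection, cross traffic 1.5 s out, we can clear in 0.8 s:
print(choose_action(True, 1.5, 0.8))   # -> accelerate_clear
# Threat ahead on an open road: just stop.
print(choose_action(False, 2.0, 0.0))  # -> brake_max
```

Note the conservative tie-breaking: whenever accelerating cannot beat the incoming vehicle, the policy falls back to braking, which matches the "just stop" default rather than attempting anything clever.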