Tesla autopilot HW3

I hope and believe that Waymo, Tesla, and the others will be more responsible about having competent, intelligent, AWAKE safety drivers.
I think that having two safety drivers at all times is probably a good idea. I think those vigilance problems will only get worse as the systems get closer to human-level safety. It seems like it would be very difficult to maintain vigilance in a vehicle that can go 10k+ miles (meaning months of testing) without driver intervention being required.
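To put a rough number on that "months of testing" point, here's a quick back-of-envelope sketch; both mileage figures are loose assumptions, not data from the thread:

```python
# Rough back-of-envelope arithmetic for the "10k+ miles" point above.
# The miles-per-month figure is an assumed typical value, not data from the thread.

MILES_PER_MONTH = 1_100           # roughly the US average of ~13k miles/year
MILES_PER_INTERVENTION = 10_000   # the "10k+ miles" figure mentioned above

months_between_interventions = MILES_PER_INTERVENTION / MILES_PER_MONTH
print(f"~{months_between_interventions:.0f} months of driving between interventions")
# -> about 9 months with nothing for the supervising driver to do,
#    which is exactly why staying vigilant gets so hard.
```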
I agree. But we need to keep in mind that Tesla will undoubtedly release FSD to the fleet when they think it is ready for the public, requiring of course driver supervision and nags like with AP. I certainly hope that Tesla owners will be competent, intelligent and awake as safety drivers. I think the vast majority will be. But how do you guarantee that they all will be? Past experience might suggest otherwise. Maybe Tesla will continue to rely on AP nags to ensure that owners are responsible?
I'm not convinced that current nags will be enough. There are plenty of videos of people on AP not maintaining vigilance. The good thing about freeway driving is that you can basically treat it like a bumper car track (plenty of videos demonstrating that here!). Treating city streets like a bumper car track is much more dangerous.
 
I certainly hope that Tesla owners will be competent, intelligent and awake as safety drivers. I think the vast majority will be. But how do you guarantee that they all will be?
You don't. You are writing nonsense. Tens of thousands of people die on America's highways every year. An order of magnitude more are maimed and injured. There is no perfect safety. What there is is a terrible situation that we put up with because of the convenience of using cars. We put the carnage out of mind.

So, yeah, people will be killed by FSD and by the testing of FSD. So what? All you are objecting to is its visibility.

And frankly, it is small comfort for a family who loses a loved one because the FSD car was not ready yet: "Sorry your kid died when our FSD car accidentally ran him over while he was walking home from school but take comfort, many lives will be saved when we finish the FSD software in 10 years."
So today we say "Sorry your kid died. You knew the risks when you got behind the wheel. Lots of other kids died this year too, and it won't get much better in the future. Better luck with your other kids." Pretending otherwise doesn't change anything. Testing FSD is an attempt to change things.
 
You don't. You are writing nonsense. Tens of thousands of people die on America's highways every year. An order of magnitude more are maimed and injured. There is no perfect safety. What there is is a terrible situation that we put up with because of the convenience of using cars. We put the carnage out of mind.

So, yeah, people will be killed by FSD and by the testing of FSD. So what? All you are objecting to is its visibility.
I think you're misunderstanding the point. There are dangers in testing a system that by itself is much less safe than a human. It requires the human to monitor the system and intervene when necessary. There is plenty of evidence from Uber, Waymo, Cruise, et al. that it is difficult to get humans to maintain vigilance in highly capable autonomous vehicles.
All I'm saying is that testing of autonomous vehicles should be done in a way that does not increase the number of accidents. I believe that is possible.
 
You don't. You are writing nonsense. Tens of thousands of people die on America's highways every year. An order of magnitude more are maimed and injured. There is no perfect safety. What there is is a terrible situation that we put up with because of the convenience of using cars. We put the carnage out of mind.

So, yeah, people will be killed by FSD and by the testing of FSD. So what? All you are objecting to is its visibility.


So today we say "Sorry your kid died. You knew the risks when you got behind the wheel. Lots of other kids died this year too, and it won't get much better in the future. Better luck with your other kids." Pretending otherwise doesn't change anything. Testing FSD is an attempt to change things.

Of course, there is no perfect safety. I want FSD precisely to save lives and stop the huge number of deaths on the roads today. But I am advocating for common sense testing of FSD.

Look at the death related to the Uber car. That death could have been avoided if Uber had been more diligent. Why shouldn't we expect companies to exhibit better safety in their FSD testing? You say there is no perfect safety, but there is no reason to be less safe on purpose.

To say "people are dying in cars now, so let's test unreliable FSD cars and who cares if it kills people" is just cruel and dumb. We can do safe FSD testing so we should.
 
All I'm saying is that testing of autonomous vehicles should be done in a way that does not increase the number of accidents. I believe that is possible.
And all I'm saying is that this is, according to Elon, immoral. I agree with him. Testing should be done in such a way as to get to "safer than human" as fast as possible.

One way to look at this is that most driving fatalities and injuries occurring now are happening because we didn't start working on FSD a few years earlier.
 
And all I'm saying is that this is, according to Elon, immoral. I agree with him. Testing should be done in such a way as to get to "safer than human" as fast as possible.

But it is also immoral to deliberately kill people during FSD testing when you don't have to, which would happen if you do your FSD testing in an irresponsible manner.
 
And all I'm saying is that this is, according to Elon, immoral. I agree with him. Testing should be done in such a way as to get to "safer than human" as fast as possible.

One way to look at this is that most driving fatalities and injuries occurring now are happening because we didn't start working on FSD a few years earlier.
Yeah, that's not going to happen. Also, the big flaw with that argument is that no one can prove it's necessary or beneficial to sacrifice lives to develop autonomous vehicles. I've never actually heard Elon Musk make that argument anyway.
 
I agree. But we need to keep in mind that Tesla will undoubtedly release FSD to the fleet when they think it is ready for the public, requiring of course driver supervision and nags like with AP. I certainly hope that Tesla owners will be competent, intelligent and awake as safety drivers. I think the vast majority will be. But how do you guarantee that they all will be? Past experience might suggest otherwise. Maybe Tesla will continue to rely on AP nags to ensure that owners are responsible?

People, in general, are idiots. They eat and drink (both alcohol and non-alcoholic beverages) while driving, they text while driving, they apply makeup while driving, they turn to look at their passenger while driving. And a few go above and beyond idiocy, driving an EAP car as though it was a mature FSD car.

It is my opinion that EAP (and I presume AP as well) makes a car safer when used properly because the car will not make the mistake a driver would have made and the driver will prevent the car from making the mistake that EAP/AP makes.
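One hedged way to see why the combination can beat either alone: the supervised system only fails when AP misses a hazard and the driver also fails to catch it. The rates in this toy sketch are invented purely for illustration, not Tesla data:

```python
# Toy probability sketch of the "two layers of protection" argument above.
# Every rate here is an invented illustrative number, not a measurement.

p_driver_miss = 0.010   # assumed: an unassisted driver misses 1% of hazards
p_ap_miss = 0.050       # assumed: AP/EAP alone misses 5% of hazards
# Failures aren't independent (low sun can blind both camera and driver), so
# assume the driver also misses 10% of the hazards that AP missed:
p_driver_miss_given_ap_miss = 0.10

# The supervised combination fails only when AP misses AND the driver doesn't catch it.
p_combined_miss = p_ap_miss * p_driver_miss_given_ap_miss

print(f"driver alone : {p_driver_miss:.3f}")   # 0.010
print(f"AP alone     : {p_ap_miss:.3f}")       # 0.050
print(f"driver + AP  : {p_combined_miss:.3f}") # 0.005, lower than either alone
```

The whole benefit rides on the driver actually catching most of AP's misses; as vigilance decays, that last assumed rate climbs toward 1 and the advantage evaporates, which is the vigilance worry discussed earlier in the thread.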

When Tesla releases its "FSD" package at Level 2 (and damnitall!!! I wish they hadn't used that dishonest name, because the F in FSD stands for FULL, and to any rational thinking person FULL self driving means the car is FULLY self driving) most Tesla owners will understand that just as their car now keeps to its lane and adjusts its speed but still requires their vigilance, so too the "FSD" package car will make turns on city streets while still requiring their vigilance.

And yes, there will be mistakes because nothing is perfect, but I think there will be fewer deaths: for every idiot who runs down a pedestrian because he thinks that FSD means FSD, two lives will be saved by people operating properly, letting the car do what it can do while intervening when it shows signs of screwing up. Even that first Level 2 City NoA car will brake for two pedestrians the driver alone could not have reacted to in time for every one pedestrian run down because the driver was too stupid to intervene.

We should not be testing cars in a way that increases accidents, we don't need to, and I don't believe the responsible players will. Uber should probably not be allowed to test self-driving cars at all, since it has shown itself to be irresponsible.
 
As we all know - absolutely nobody dies on roads unless an AV is involved.

Ah yes, this age old argument. Uber disabled their safety systems and acted negligently. But you're right. Killing people in the name of autonomy is worthwhile, so who cares, right? I mean, after all, she was homeless.

This argument is intentionally obtuse, and I feel like you must know that.

Elon has explained before why Tesla is doing things this way, and states that allow him to go full speed are, in effect, agreeing. Elon believes that even if more people die this way, FSD will get here faster and start saving lots of lives sooner, and that it is therefore immoral to do things any other way. As it turns out, though, I suspect that even if beta FSD causes several deaths, it is even now actively preventing more than it causes. They are just less visible, as they are things that don't happen.

The question, to me at least, is the type of collision. If AP prevents fender benders and things of that nature, that's great. But if it causes collisions in situations where a human likely wouldn't have, that's a pretty big problem. We have some examples of this: two cars driving under semis crossing the road in front of them, for example. Without AP, it would be reasonable to believe that those two drivers might have slowed down because the sun was in their eyes. Obviously nothing is guaranteed, but AP has some pretty glaring failure modes that present very real risks where a human driver wouldn't fail.

In any event, since the public at large is exposed to these dangers, it is quite literally regulators' job to investigate and understand these technologies and their risks, and finally to propose legislation to protect us from negligent operators like Uber and their absolutely garbage approach to the safety of human life. What kind of company disables all auto braking and life safety systems because the car is too eager to panic stop? Oh, right, the kind of company that Uber has always been. And they are exactly the type of operation that makes a bad name for this tech and causes regulators to step in in the first place.
 
Ah yes, this age old argument. Uber disabled their safety systems and acted negligently. But you're right. Killing people in the name of autonomy is worthwhile, so who cares, right? I mean, after all, she was homeless.
It's just a knee-jerk reaction.

Have you ever said we should completely ban alcohol because a drunk driver killed someone? I'd be totally for it.

Or that we should ban sugar and simple carbs because 40% of Americans are diabetic or pre-diabetic?
 
Uber was irresponsible, which is not surprising. Uber has always shown a disregard for rules. During the testing phase, the safety drivers have to be trained, competent, and alert. The driver in the Uber car that killed a pedestrian was falling asleep, if we can judge by the video. Uber's attitude was "Our cars can do this. All we need is to have some schlub in the driver's seat to satisfy the regulators." They probably just offered a bunch of high-school drop-outs minimum wage to sit in the seat and told them not to worry about it, they just had to be there.

I hope and believe that Waymo, Tesla, and the others will be more responsible about having competent, intelligent, AWAKE safety drivers.

Based on the NTSB report, the operator was not asleep, not falling asleep, not using their phone, etc. They were annotating data and looked away from the road; the system detected a pedestrian but didn't alert the driver and didn't take corrective action, because those safety functions had been intentionally disabled. If Arizona wasn't negligent as well, they'd probably have alerted their citizens to the fact that these systems were being tested, and had some oversight like "ADAS must always be enabled". But since Arizona would prefer money over safety, we get Uber killing a person.

Not to worry, though. Arizona told Uber to get lost, so Pennsylvania let them test on their streets instead. o_O
 
Why did the fatal Uber crash happen?

Because the safety driver was aware the car was supposed to be autonomous while it was still making fatal mistakes, mistakes that were obviously few and far enough between to lull the safety driver into a false assumption of complete safety.
So wouldn't the best safety driver be one who permanently, actively monitors the car as though he had to be able to take control at any time, unaware of the autonomy level of the car?

The "shadow mode" approach as Tesla communicates it is the right way IMO regarding safety drivers. I'm aware it's not working as communicated yet (the car itself isnt learning and the data sent back is only for specific Tesla defined learning cases).
This doesnt invalidate the point that essentially making any level 2 driver a safety driver of a disguised autonomous system is a smart idea.
The fact that there will always be people who think the system is further along than what they're being told is just the fact there's always a percentage of extremes on any spectrum.
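For what it's worth, here is a minimal sketch of what a shadow-mode comparison loop could look like in principle. This is purely illustrative: the function names, data shapes, and thresholds are my assumptions, not Tesla's actual implementation (which, as noted above, only sends back specific Tesla-defined cases):

```python
# Purely illustrative sketch of a "shadow mode" loop: the autonomy stack plans
# in the background while the human drives, and only disagreements get logged.
# Names and thresholds are assumptions, not Tesla's real implementation.

from dataclasses import dataclass

@dataclass
class Plan:
    steering_deg: float        # commanded steering angle
    target_speed_mps: float    # commanded speed

STEER_DISAGREE_DEG = 5.0       # assumed threshold for an "interesting" disagreement
SPEED_DISAGREE_MPS = 2.0

def shadow_step(sensor_frame, human_action: Plan, planner, disagreement_log: list) -> Plan:
    """Run the planner passively and log frames where it disagrees with the human."""
    shadow_plan = planner(sensor_frame)  # observation only, never actuated
    if (abs(shadow_plan.steering_deg - human_action.steering_deg) > STEER_DISAGREE_DEG
            or abs(shadow_plan.target_speed_mps - human_action.target_speed_mps) > SPEED_DISAGREE_MPS):
        disagreement_log.append((sensor_frame, human_action, shadow_plan))
    return human_action  # the human's commands are what actually reach the actuators
```

The useful property is that the human is always in control; the system is only ever evaluated against what the human actually did, so the "safety driver" role costs the driver nothing extra.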
 
Here's an inevitable issue: Every fatality or serious injury by a car with any level of autonomy will be reported far and wide and analyzed by every armchair commentator on the internet. The accidents these cars avoid that human drivers would not have avoided will never be reported outside of the occasional "My car saved me!" post here on TMC or other similar sites and will never gain general attention.

It is my contention that AP/EAP has prevented more accidents than it has caused, and when NoA/city at Level 2 is rolled out to owners who paid for "FSD" the same will be the case. The vast majority of Tesla owners will use the system responsibly, and a few idiot jerkwads will ignore the road, and the first time one of these cars kills someone the headline will be "FULL SELF-DRIVING CAR KILLS PEDESTRIAN" (because Tesla chose the misleading name for the NoA/city options package) ignoring the fact that it was driver negligence; but there will be no reporting on any of the "My car prevented an accident" posts we will see here. And the latter will outnumber the former.
 
The accidents these cars avoid that human drivers would not have avoided will never be reported outside of the occasional "My car saved me!" post here on TMC or other similar sites and will never gain general attention.
That's where Musk's excellent reach on Twitter helps. Tesla can tweet these videos, and Musk can retweet them.

Avoid Crashes? Tesla Shows 4 Video Examples Of Autopilot Action - Tesla Motors Club

It's like all the EV fires that get reported disproportionately often. But eventually the FUD will lose its effectiveness.
 
Why did the fatal Uber crash happen?
It is also arguable whether a non-AV would have avoided that accident, especially if the driver was distracted (as happens a lot). Crossing such a road at night has always been dangerous.

OTOH, some years back, when a senior driver plowed through people on the sidewalk, there was a lot of talk about taking driving licenses away from senior drivers. Obviously that didn't happen.

100-Year-Old Driver Hits 11 People on L.A. Sidewalk
 
The "shadow mode" approach as Tesla communicates it is the right way IMO regarding safety drivers. I'm aware it's not working as communicated yet (the car itself isnt learning and the data sent back is only for specific Tesla defined learning cases).
This doesnt invalidate the point that essentially making any level 2 driver a safety driver of a disguised autonomous system is a smart idea.

I think the real problem with the first point is that the shadow mode observes, which is great, but there's also an active AP mode being used by the driver. The safest, but least convenient, option would be to have shadow mode running and AP not available. What Tesla is facing now is a major training issue, where people aren't equipped with the knowledge to expect where, when, or how AP will fail.
 
What Tesla is facing now is a major training issue, where people aren't equipped with the knowledge to expect where, when, or how AP will fail.

Drivers shouldn't be thinking about where AP will fail; they should be determining whether the current speed and heading are safe and intervening if they are not*.

* repetitive phantom braking locations and proper types of roads excepted.
 
What Tesla is facing now is a major training issue, where people aren't equipped with the knowledge to expect where, when, or how AP will fail.

I kind of disagree: I feel that after using EAP for a while, I have a pretty good sense of what it can and cannot handle, by experiencing what it does. E.g., it tries to hold the center of the lane even when it really should be hugging one side or the other, so in those places I take over; on curves, it cannot hold the center of the lane as well as I'd like, so on very curvy roads I either slow it down (just a flick of the thumb wheel) or I take over. It's a cooperative effort, and you learn it just as you first learned to drive. Some people will be too stubborn and stupid and will cause accidents, but more people will use it properly and avoid accidents.

And regulators in most jurisdictions will permit true FSD as soon as the insurance companies are convinced it's safer than human drivers and push for approval.

I just think that time is further away than Elon does.
 
AP isn't much of a test problem. We regularly have to disable it.

What if FSD is good enough to average about 15k miles between accidents? If it has enough close calls, you might still be attentive. But if it has operated successfully for a year or two (with luck), how much attention will you be paying while it's driving in year 3? And that's still about 5x worse than the human average accident rate, right? Will the driver be able to avoid that one time a year when FSD screws up?
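Plugging rough numbers into that (the human baseline here is just the round figure implied by "about 5x worse"; published estimates vary a lot depending on what counts as an accident):

```python
# Back-of-envelope comparison of the accident rates discussed above.
# The human baseline is an assumed round figure; real estimates vary widely
# depending on whether minor, unreported crashes are counted.

FSD_MILES_PER_ACCIDENT = 15_000     # hypothetical figure from the post above
HUMAN_MILES_PER_ACCIDENT = 75_000   # assumed baseline implied by "about 5x worse"
MILES_DRIVEN_PER_YEAR = 13_500      # rough US average annual mileage

ratio = HUMAN_MILES_PER_ACCIDENT / FSD_MILES_PER_ACCIDENT
screwups_per_year = MILES_DRIVEN_PER_YEAR / FSD_MILES_PER_ACCIDENT

print(f"~{ratio:.0f}x worse than the assumed human baseline")          # ~5x
print(f"~{screwups_per_year:.1f} accidents per year if unsupervised")  # ~0.9, i.e.
# roughly the "one time a year" event the supervising driver has to catch.
```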

For many of us with repetitive drives maybe it will perform those drives like clockwork, with only strange obstacles causing any new problems. Maybe we'll only have to pay attention on new-to-us routes or when a camel steps in front of the car.

When we first get FSD features I'm sure we'll be busy working with them. But if FSD becomes almost as good as a human, the human may have problems catching the errors and getting it to human accident rates or better. Nonetheless, I can't wait to try it!