Autonomous Car Progress

No, it's equivalent to having someone put an unexpected blindfold on you while going 90 mph.

It has absolutely no comparison to a car's mechanical failure. My goodness, can you people actually try to have a logical, evidence-driven discussion?

The mechanical failure directly mentioned was a motor seizing, which is a possible failure mode that would take a car from 90 (why is it going 90?) to 0 in a hurry, likely in a worse fashion due to wheel lock up.

Regarding loss of 'a' camera: rear-view and side cameras can be worked around long enough to move off the road, and the front cameras are currently redundant for Tesla. In general, if one were paying attention before being blindfolded, they could stay in lane while activating the hazards and braking. Same reaction as running into a snow squall/whiteout. Even easier if the vehicle still has high-res GPS, but that isn't required.

Loss of the control system entirely would definitely be an issue.
 
Tl;dr: I think exclusive vision can do it, but added sensors make it easier and expand the operational range. You can't avoid all bad drivers, nor can you reduce accidents below zero.

That's kind of a funny post, because you keep mentioning other sensors as asides.

So it's like you're trying to defend a vision-only system, but then you're kind of suggesting that, yeah, maybe having some other sensors would be a good thing. So feel free to mention other sensors, but please avoid mentioning ultrasonics.

My point was that Vision only wasn't enough.

Yah, hence my springboard disclaimer. I think vision can do it alone, but Tesla has more than vision, so I was tacking that on as an advantage over standard human driving. Ultrasonics were mentioned as a super-last-resort drive-by-feel system, not a recommendation.

There is an inescapable fact that the safety requirement for autonomous driving will be magnitudes of degrees greater than a human driver. You have to have redundancy not just in the number of sensors, but in the type of sensor. It's not just eliminating at-fault accidents, but also avoiding no-fault accidents. Even with lots of sensors, the state of the art in autonomous driving seems to be causing accidents because it's overly cautious. It's going to be made massively worse if the autonomous car is given less information and then forced to slow down excessively in adverse conditions.

Side bar: I'll admit to having trouble with the whole multiple-times-safer criterion. What is the baseline? There are people who have had zero at-fault accidents; you can't improve on that. Is the urban city accident rate applied to the North Dakota interstate accident rate? I would prefer a noise-added, standardized test suite of scenarios. I think this may be an area where Tesla has a big advantage. What regression testing is needed to validate an AP SW change? If x million miles is needed to validate, you need to either have that many miles on disk to simulate against or re-drive that much. Given that all recent Teslas can be used to collect real-world sensor data and send it back, they have an edge (not saying they are doing this fleet-wide currently).
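For a sense of scale on the "x million miles" question, here is a rough back-of-envelope sketch. It assumes a baseline of roughly one fatal crash per 100 million miles and uses the statistical "rule of three" for zero observed events; the numbers are illustrative, not anything Tesla has published:

```python
# Rough estimate of how many crash-free miles are needed to claim, with ~95%
# confidence, that a system's crash rate beats a human baseline. With zero
# observed events, the rate's upper bound is about 3/n (the "rule of three").

BASELINE_RATE = 1 / 100_000_000  # assumed: ~1 fatal crash per 100M miles (illustrative)

def miles_needed(improvement_factor: float) -> float:
    """Crash-free miles needed to bound the rate below
    baseline / improvement_factor at ~95% confidence."""
    target_rate = BASELINE_RATE / improvement_factor
    return 3.0 / target_rate

for factor in (1, 2, 10):
    print(f"{factor}x safer than baseline: ~{miles_needed(factor) / 1e6:,.0f} million miles")
# 1x -> ~300 million miles, 2x -> ~600 million, 10x -> ~3,000 million
```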

Back on topic: you can't eliminate not-at-fault accidents. No matter what speed a car goes or what physically possible spacing it keeps, it can still get hit.

I agree, rule-based / legally driving autonomous vehicles lack the natural human tendencies that drivers have and rely on others to have.

You can't escape the irrefutable fact that some sensing systems are better than other sensing systems for specific situations and weather conditions.

Sure, but is it necessary to have the best sensors that can work in the most possible conditions? Or is a sensor suite that is at least as good as current drivers sufficient?
2D tracking of all objects via GPS and active transmission would make situation handling as easy as it can be, but it is not practical.

It's also really tough to achieve vision that exceeds the human vision system if you include every part of the human vision system. Not just in the dynamic range, but in the ability to process the image. So we use other sensing systems to give the computer data to use to validate data from the vision system or the other way around.

Agreed, vision is hard and lidar gives physical data directly, but that does not mean lidar is required long term.

Like if you read up on it you'll see some articles talking about the need for bike-to-car communication, because vision-only systems seem to have issues detecting cyclists. The bike-to-car communication is a way to solve that problem for now, until the computer vision system is at a point that solves that problem.

Bike identification is important due to laws regarding passing/following clearance. But how accurate does identification need to be? If the system can discern that an object smaller than a car exists in the lane/shoulder area, is that good enough?
Slow motorcycle vs. moped vs. electric bike vs. bike vs. jogger with baby stroller: what granularity is useful?

As for bike-to-car, I would argue against that because I see it as a crutch that makes the system fall over if it is removed. If a bike does not have the system, or its battery dies / HW fails, is the bicyclist now at risk?

Keep in mind we're only a couple of years from when Subaru (a vision-only system) was slowing down for shadows in the road. We Tesla owners are lucky in that we have radar.

Radar helps, that is for sure. But it is not fair to point to early system problems as an indicator of future performance. Shadows don't change the visible road surface, so systems that use temporal data can weed them out.
The problem I'm currently chewing on is pothole detection. It's very like a shadow or discoloration, other than the change in appearance at close range.
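Purely as an illustrative sketch of that temporal idea (not anything Tesla or Subaru actually does; the names and thresholds are made up), one way to separate a pothole from a shadow is to watch how a tracked dark patch's appearance changes as the car closes in:

```python
import numpy as np

def classify_dark_patch(patch_frames: list, distances_m: list) -> str:
    """Rough heuristic: a shadow or stain keeps roughly the same texture as the
    car approaches, while a pothole's edges and internal shading become more
    pronounced at close range.

    patch_frames: grayscale crops (numpy arrays) of the same tracked road patch over time
    distances_m:  estimated distance to the patch for each frame
    """
    far_texture = np.std(patch_frames[0].astype(float))    # texture when first seen
    near_texture = np.std(patch_frames[-1].astype(float))  # texture up close

    closed_in = distances_m[0] > 2 * distances_m[-1]        # we actually got closer
    texture_jump = near_texture > 1.5 * far_texture         # made-up threshold

    if closed_in and texture_jump:
        return "possible pothole"   # appearance changed at close range
    return "likely shadow/discoloration"
```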

There is also the need for autonomous driving to improve the efficiency of our roads. Now, you argued that the proper way to deal with every situation was to simply slow down or to stop altogether. That's fine for a human driver who is limited to vision only, but it seems like a massive artificial limit for autonomous cars, when one of their primary benefits over a human is the use of additional sensors.

It's a high-level decision about what range of conditions should be drivable and at what rate. Until all cars in a road section are autonomous, that section will be limited on the high end by human driver performance.

Speed should be proportional to sensor range and control rate. Radar is a big help with speed during adverse weather, as long as it can detect stopped vehicles on its own.
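To put rough numbers on the range/speed relationship, here is a minimal sketch using the standard stopping-distance formula; the latency and deceleration values are assumptions for illustration, not any manufacturer's spec:

```python
import math

def max_safe_speed_mps(sensor_range_m: float,
                       reaction_time_s: float = 0.5,   # assumed system latency
                       decel_mps2: float = 6.0) -> float:
    """Highest speed at which the car can stop within its usable sensor range:
    range >= v * t_react + v^2 / (2 * a), solved for v."""
    a, t = decel_mps2, reaction_time_s
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * sensor_range_m)

for rng in (50, 100, 200):   # e.g. degraded weather vs. clear-weather effective range
    print(f"{rng:>3} m of usable range -> ~{max_safe_speed_mps(rng) * 2.237:.0f} mph max")
# 50 m -> ~48 mph, 100 m -> ~71 mph, 200 m -> ~103 mph
```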
 
The mechanical failure directly mentioned was a motor seizing, which is a possible failure mode that would take a car from 90 (why is it going 90?) to 0 in a hurry, likely in a worse fashion due to wheel lock up.

Regarding loss of 'a' camera: rear-view and side cameras can be worked around long enough to move off the road, and the front cameras are currently redundant for Tesla. In general, if one were paying attention before being blindfolded, they could stay in lane while activating the hazards and braking. Same reaction as running into a snow squall/whiteout. Even easier if the vehicle still has high-res GPS, but that isn't required.

Loss of the control system entirely would definitely be an issue.
My impression is that most of the legal redundancy requirements are in control redundancy (steering and braking), not sensor redundancy. There are requirements for that in regular cars. Examples:
1) If power steering fails, manual steering is still available; the Nissan/Infiniti steer-by-wire system has a manual fallback.
2) Braking systems use a dual-circuit design (you still have brakes on at least two wheels even if one circuit fails), and there is an additional handbrake for emergencies.

The autonomous cars would have to provide the electronic equivalents to meet the requirements, given that the human can't be relied on to actuate the manual backups in regular cars (the manual pedals and steering wheel might not even be there in some cars).
 
My impression is that most of the legal redundancy requirements are in control redundancy (steering and braking), not sensor redundancy. There are requirements for that in regular cars. Examples:
1) If power steering fails, manual steering is still available; the Nissan/Infiniti steer-by-wire system has a manual fallback.
2) Braking systems use a dual-circuit design, and there is an additional handbrake for emergencies.

The autonomous cars would have to provide the electronic equivalents to meet the requirements, given that the human can't be relied on to actuate the manual backups in regular cars (the manual pedals and steering wheel might not even be there in some cars).

The regulatory requirements for self-driving consumer cars aren't even written down yet. So we don't know what the sensor redundancy requirement would be for things like cameras, radars, etc., whether it's redundancy in the sensor itself or in the wiring.

There is also a question of redundancy in the self-driving computer itself, so there is some fail-safe if the car is driving 80 mph on the freeway and the computer suddenly fails. It can't just brake and leave itself in the middle of the freeway.

What I'm concerned about is that Waymo and Cruise Automation will get to autonomous driving first, and they'll get to set the rules. Both of those companies do the sensor fusion thing. So all the discussion regarding what is actually necessary will be moot, as it won't matter.
 
Bike identification is important due to laws regarding passing/following clearance. But how accurate does identification need to be? If the system can discern that an object smaller than a car exists in the lane/shoulder area, is that good enough?
Slow motorcycle vs. moped vs. electric bike vs. bike vs. jogger with baby stroller: what granularity is useful?

As for bike-to-car, I would argue against that because I see it as a crutch that makes the system fall over if it is removed. If a bike does not have the system, or its battery dies / HW fails, is the bicyclist now at risk?

What I like about car-car communication, bike-car communication, and road-car communication is that the car can be informed of what's ahead with absolute positive identification.

I would think of it as an automatic version of Waze.

As a person who enjoys biking, I would put a transponder on my bike that didn't require battery power. It would allow a car with a mesh automotive networking system to pick up my bike and relay the information to all the other cars in the area with a compatible system, whether human-driven or autonomous.

Now this wouldn't be a primary means of sensing me. I would expect a human driver to see me, and an autonomous car to detect me. Instead it would be an early warning system for a variety of road situations.

It's very similar to Waze, where I'll see what the traffic is like a couple of miles ahead so I'll be prepared to deal with it before it happens.

It would be tremendously useful for things like (a rough message sketch follows this list):

Reporting potholes
Reporting road debris
Reporting a crazy driver behind me driving over 100 mph
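Purely as an illustration of what one of those relayed reports could carry (the field names are invented, not any existing V2V standard):

```python
from dataclasses import dataclass
from enum import Enum
import time

class HazardType(Enum):
    POTHOLE = "pothole"
    DEBRIS = "debris"
    ERRATIC_DRIVER = "erratic_driver"
    CYCLIST = "cyclist"

@dataclass
class HazardReport:
    """One relayed observation; receiving cars treat it as advisory only."""
    hazard: HazardType
    lat: float
    lon: float
    heading_deg: float   # direction of travel of the hazard, if it is moving
    speed_mps: float     # 0 for stationary hazards like potholes
    reported_at: float   # unix timestamp, so stale reports can be aged out
    hops: int = 0        # how many cars have relayed it so far

report = HazardReport(HazardType.POTHOLE, 42.3601, -71.0589,
                      heading_deg=0.0, speed_mps=0.0, reported_at=time.time())
```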

Now, some of this will likely be accomplished through always-connected internet connectivity, where the system is constantly downloading new data.

But, I do believe some kind of car-car communication is necessary.

The frustration with car-car communication isn't that it doesn't exist, because it does, but that there isn't a single standard.

I should add that the biggest benefit of car-car communication is to ask the car in front of you to get out of your way. That's how one car company advertised such a system would work.
 
Brad Templeton has some worthwhile thoughts on V2V and related technologies: v2v | Brad Ideas

I tend to think that any V2V communication has to be optional. If it's deemed necessary for autonomy, then what happens when it fails? Say a pirate radio station pops up and blankets the spectrum used by V2V. Sure, the FCC will get right on that — meanwhile how do I get to work?

Suppose someone finds a security flaw in the universal V2V protocol, or in a widespread implementation of it. Can mobility providers and private owners disable V2V until there's an update to close the security hole? Or are we all stranded?
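One way to keep it optional in practice, as a hypothetical sketch (nothing here is from an actual V2V stack): treat V2V strictly as an advisory input that can only make the car more cautious, so jamming it or pulling it for a security patch just reverts the car to onboard sensing:

```python
from typing import Optional

def effective_clear_range_m(onboard_range_m: float,
                            v2v_hazard_range_m: Optional[float],
                            v2v_enabled: bool) -> float:
    """Advisory-only fusion: V2V can shorten the assumed clear range, never
    extend it, and disabling it simply falls back to onboard sensing."""
    if v2v_enabled and v2v_hazard_range_m is not None:
        return min(onboard_range_m, v2v_hazard_range_m)
    return onboard_range_m

# Jammed spectrum or V2V pulled for a security fix: same as having no V2V.
print(effective_clear_range_m(120.0, None, v2v_enabled=True))    # 120.0
print(effective_clear_range_m(120.0, 60.0, v2v_enabled=False))   # 120.0
print(effective_clear_range_m(120.0, 60.0, v2v_enabled=True))    # 60.0
```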
 
The regulatory requirements for self-driving consumer cars aren't even written down yet. So we don't know what the sensor redundancy requirement would be for things like cameras, radars, etc., whether it's redundancy in the sensor itself or in the wiring.

There is also a question of redundancy in the self-driving computer itself, so there is some fail-safe if the car is driving 80 mph on the freeway and the computer suddenly fails. It can't just brake and leave itself in the middle of the freeway.
The current requirement is to meet the FMVSS (for high-speed cars; low-speed neighborhood vehicles like the Waymo Firefly don't have to meet such requirements). So it needs to meet the requirements of any regular car today (which means no cars without steering wheels or pedals). Steering/braking actuation redundancy seems to be what is certain to be required (Tesla added this with AP2.5), because it is necessary to match the existing functions that the human performs.

What I'm concerned about is that Waymo and Cruise Automation will get to autonomous driving first, and they'll get to set the rules. Both of those companies do the sensor fusion thing. So all the discussion regarding what is actually necessary will be moot, as it won't matter.
From what I have seen, neither company has lobbied for more regulation, but rather is pushing for exemptions and laxer regulation to allow them to do more testing under the current frameworks. I would worry more about vendors selling systems or sensors that lobby to make what they sell required.
 
Brad Templeton has some worthwhile thoughts on V2V and related technologies: v2v | Brad Ideas

I tend to think that any V2V communication has to be optional. If it's deemed necessary for autonomy, then what happens when it fails? Say a pirate radio station pops up and blankets the spectrum used by V2V. Sure, the FCC will get right on that — meanwhile how do I get to work?

Suppose someone finds a security flaw in the universal V2V protocol, or in a widespread implementation of it. Can mobility providers and private owners disable V2V until there's an update to close the security hole? Or are we all stranded?
Even if it is deemed legally mandatory and a common standard is developed, V2V should not be relied on for safe function of an autonomous system. Even the most optimistic predictions will have the fleet operating with plenty of non-autonomous vehicles for decades.
 
My impression is that most of the legal redundancy requirements are in control redundancy (steering and braking), not sensor redundancy. There are requirements for that in regular cars. Examples:
1) If power steering fails, manual steering is still available; the Nissan/Infiniti steer-by-wire system has a manual fallback.
2) Braking systems use a dual-circuit design (you still have brakes on at least two wheels even if one circuit fails), and there is an additional handbrake for emergencies.

The autonomous cars would have to provide the electronic equivalents to meet the requirements, given that the human can't be relied on to actuate the manual backups in regular cars (the manual pedals and steering wheel might not even be there in some cars).

Related to that, the Model 3 has a dual electric motor steering rack. I wonder if regen counts toward the braking requirement (the main and parking brakes being the two other methods).
 
Even if it is deemed legally mandatory and a common standard is developed, V2V should not be relied on for safe function of an autonomous system. Even the most optimistic predictions will have the fleet operating with plenty of non-autonomous vehicles for decades.
It would be nice to add telemetry to everything, but that's never going to happen. The car needs to deal with kids running into the road, etc., and that requires real-time visual data and processing. V2V is a distraction from the main problem, just another semi-blind crutch like lidar.
 
I don't see why autonomous driving has to be "magnitudes of degrees" (whatever that is) greater than a human driver.
It just has to be as good or better.

You didn't say "average driver" but:

I don't think the average driver is very good ... I think I'm better than the average driver ... and I expect that the average driver thinks they are better than me ...

On that basis I would settle for "better than any & every human driver". That should not be hard to achieve; it might not even be one order of magnitude better than The Average Driver.
 
You didn't say "average driver" but:

I don't think the average driver is very good ... I think I'm better than the average driver ... and I expect that the average driver thinks they are better than me ...

On that basis I would settle for "better than any & every human driver". That should not be hard to achieve; it might not even be one order of magnitude better than The Average Driver.
Most people are better than the average driver when they are paying attention. The advantage of autonomous systems is that they don't get distracted, so they are much better all the time.
 
Why do we human drivers not need redundant systems, but an autonomous car does? I would bet that instances of full steering/braking failure would result in the "normal driver" having an accident as well. I guess in the case of a human driver they would be considered "at fault." Hmmmm.
 
To take a slightly different take: Autonomous cars may not need to be significantly better than a human driver for the driver's sake, but they may have to for the manufacturer's sake. A car manufacturer would likely be completely sunk if they were suddenly on the hook for every accident that one of their vehicles causes if they aren't significantly better than a human driver.
 
To take a slightly different take: Autonomous cars may not need to be significantly better than a human driver for the driver's sake, but they may have to for the manufacturer's sake. A car manufacturer would likely be completely sunk if they were suddenly on the hook for every accident that one of their vehicles causes if they aren't significantly better than a human driver.

That is what insurance is for. As long as it is as good as a human driver but never gets distracted, it will be cheaper to insure for both the owner and the manufacturer.
 
No, it's equivalent to having someone put an unexpected blindfold on you while going 90 mph.

It has absolutely no comparison to a car's mechanical failure. My goodness, can you people actually try to have a logical, evidence-driven discussion?

Absolutely not true. You can even try it on your own: have someone blindfold you while you are driving and see if you can't stop the car safely, without running into anyone, just based on your memory of what the road looked like the instant before you lost vision. A computer would do this a million times better. It knows what the road looked like and can anticipate enough of the road and conditions to stop safely. There would need to be some redundancy, but not in every single system and every single wire.
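As a toy sketch of what stopping from memory could look like (the values and names here are assumptions for illustration, not Tesla's actual fallback logic): the car keeps the last perceived lane geometry and dead-reckons along it from odometry while braking to zero:

```python
def blind_stop_plan(speed_mps: float,
                    last_lane_curvature: float,   # 1/m, from the last good camera frame
                    decel_mps2: float = 4.0,      # assumed firm but stable braking
                    dt: float = 0.1):
    """Yield (steering_curvature, brake_decel) commands that hold the last
    remembered lane while braking to a stop, using only odometry after the
    cameras go dark."""
    v = speed_mps
    while v > 0:
        yield last_lane_curvature, decel_mps2   # keep following the remembered lane
        v = max(0.0, v - decel_mps2 * dt)

# From 40 m/s (~90 mph) this plan covers v^2 / (2a) = 200 m of remembered road.
steps = list(blind_stop_plan(40.0, last_lane_curvature=0.0))
print(len(steps), "control steps,", 40.0 ** 2 / (2 * 4.0), "m to stop")
```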

The problem is people THINK certain things are required but have not thought about why they wouldn't be. It might still be mandated by law, which is a different story than being necessary for a system to work.
 
If for some reason a camera fails, the car just stops and calls for help. Just like you would if your engine seized.
So, just stopping is a good idea in the middle of a busy freeway at freeway speeds? What about a busy city street or expressway?

Please do the former with your car, even if it's just a gradual coast to a stop, and stay inside your car with it stopped in the middle of traffic. Let's see what happens to your car and the cars around you.

I've had a driver's license for over 25 years. I've NEVER had an engine seize nor have I ridden in a 4+ wheel vehicle in my life where that happened. I also can't think of a time where I've had a sudden and/or unexpected loss of propulsion when driving on a public road.

Unrelated: Uber self-driving trucks are now moving cargo for Uber Freight customers
 