
some examples of where level 5 autonomy might have difficulty

Saghost wrote:

That may be; I certainly don't know. What I can share with you is that experienced North Country drivers do a lot of winter driving, in effect, by the seat of our pants. We "feel" the road - where the crown is, what the slope of our lane...and the opposing one...seems to be, subtle differences in the wear patterns of the pavement. And, obviously, the difference between what is road surface and what is the edge, and the no-brainer "ga-dump ga-dump" of centerline Botts' dots, of which there are approximately zero on the roads I'm discussing.
All that is made the more subtle by snow/ice coverings. I'll be the first one to congratulate the programmers who build those kinds of experiences into their algorithms.

Meanwhile, back in the blizzard -

So where were we? Hearts in our mouths, soft stuff in our pants, foot on the brake pedal and wondering what could have caused the truck's lights to disappear. And then - looming out of the blinding snow just a few feet from our pickup's hood -

a bison. Crossing the road, directly behind the truck. Others of its herd, we later learned when both the truck and we stopped to talk, had begun to cross not in front of but effectively "alongside" the leading truck; the driver saw those apparitions next to him and braked; as the bison entered the road, its massive bulk blotted the truck's lights from our vision. And as those lights had been the only distinguishable object in our diminished world, when they went out, so did everything.

Now, a bison is the very largest creature anyone, anywhere outside Africa, can encounter on the world's highways. And few hippos or elephants wander about in blizzards. At twice the mass of a moose, a bison is more than a match for any vehicle short of the largest Class 8s, and even then only under just the right circumstances. We were very, very lucky.

Back to autonomous driving - can today's non-visual hardware properly anticipate such a situation? Can it parse the back of a semi with a monster passing beside it? And react appropriately? Remember: the truck braking was not the truck stopping. It was the driver gut-reacting, ex post, to an event that was, for him, already finished - his foot was on the brake pedal only momentarily. Fortunately for us, that was enough for us to react as well and slow down for some as-yet-unknown event.

The first word out of my mouth was holy and the second one wasn't.

I've done a little driving like that myself, in Colorado winters away from cities. Not my favorite thing.

The radar should be able to see both the bison and the truck - some automotive radar promises to see pedestrians at 100m, a far harder task given a pedestrian's much smaller radar cross-section.

You'd need programming to handle the situation, but again the car's sensors can perceive far more about what is going on in your scenario than we humans could, so it should be able to handle it.
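Rough physics backs this up: by the radar equation, detection range scales with the fourth root of the target's radar cross-section, so a bison should show up well beyond the 100 m quoted for pedestrians. A back-of-the-envelope sketch (the RCS figures are my own rough assumptions, not measured values):

```python
# Radar-equation range scaling: max detection range goes as the fourth
# root of target RCS. The RCS values below are rough assumptions.

def scaled_range(ref_range_m: float, ref_rcs_m2: float, target_rcs_m2: float) -> float:
    """Estimate detection range for a new target from a reference target."""
    return ref_range_m * (target_rcs_m2 / ref_rcs_m2) ** 0.25

# Assume: pedestrian RCS ~0.1 m^2, detectable at 100 m (the quoted spec);
# a broadside bison guessed at ~5 m^2 (between a pedestrian and a car).
print(f"Bison: ~{scaled_range(100, 0.1, 5.0):.0f} m")  # -> ~266 m
```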
 
This makes sense from a system-wide point of view. But as a system user, I only care about whether an autonomous vehicle is safer than driving myself.

I suppose the ideal situation is that I get to drive for myself, while everyone else has to ride around in their autonomous vehicle. :)

But you have to be 100% confident before you let your $40,000-$140,000 car drive off by itself: confident it won't cause a crash, won't be a general nuisance by blocking roads or ambulances, and won't get itself stolen (thrown in a steel box where it can't report itself as stolen). Getting to the point where your car will summon itself to you, or even start touting for business as a taxi, is so far in the future as to be a joke from a logistics point of view.
I'm sure Tesla - through a rehearsed corridor - will show it can drive across the US on empty highways... and they'll edit out the takes where things go wrong!!!
 
LOL. Well, this might be safer for you if you are a better driver than the autonomous system in all cases, which means that you are never distracted, never go over the speed limit, never miss another vehicle in a blind spot, never drift out of your lane, etc., which would make you one of the world's safest drivers. That's great! ... are you a driving instructor or commercial driver? (I'm not meaning to malign your skills, which may indeed be excellent... but, just by way of additional info, I have read that something like 90% of all drivers believe that they are better-than-average drivers, which can't possibly all be true.)

I don't think anyone has the illusion that (at least with technology of the next 5 years) we're going to have autonomous vehicles that can handle every conceivable incident; the idea is that they should be significantly better than the average driver, and thus reduce the total number of crashes and especially fatalities. Even US DOT recognizes this, so they are not standing in the way of development of these systems. There are certainly unusual circumstances that arise on the road from time to time - but even human drivers can take the wrong action in certain cases.

If you look at the leading causes of crashes, DUI is at or near the top of the list. An autonomous system won't do that. Another leading cause of crashes is "aggressive driving" (speeding, cutting off other drivers, tailgating, etc.) - Autonomous systems won't do that either.

So I appreciate your concern, I have some of the same concerns, but this is classic change management... we have to look towards the advantages of the new situation, while also managing risk as best we can. ;)

I agree it's probably going to be provably safe soon.
But there's A: safely getting back to Autopilot version 1 functionality (level 2?). There's B: your country allowing level 5. There's C: having enough confidence in your asset to send it out around the country as a taxi, or summon it across the country (not on private land). It's mad to be confident that B or C will come soon.
I'd add that it shows poor statistical knowledge for Elon Musk to state (paraphrasing) that "based on 100 million miles and 1 fatality, the car is already better than average" when the average is approx. 1 death every 90 million miles. You can't conclude anything accurate from 1 sample. And actually it's probably 2 or 3 deaths, the 2nd being the one I think happened 1st (but was reported 2nd), where a driver in China drove into the back of a street sweeper. Both deaths seem attributable to a lack of attention caused by extreme over-confidence. I work in programming, supporting statisticians in the medical-trials industry evaluating drug safety/efficacy.
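To put a number on the one-sample problem: an exact Poisson confidence interval around a single fatality is enormous. A quick sketch using the figures quoted in this thread (scipy's chi-square quantiles give the exact interval):

```python
# Exact (Garwood) 95% Poisson CI for a fatality rate estimated from a
# single event, using the mileage figure quoted in this thread.
from scipy.stats import chi2

deaths = 1
miles = 130e6  # Autopilot miles cited elsewhere in this thread

lower = 0.5 * chi2.ppf(0.025, 2 * deaths)      # ~0.025 expected events
upper = 0.5 * chi2.ppf(0.975, 2 * deaths + 2)  # ~5.57 expected events

print(f"95% CI: one death per {miles / upper:,.0f} to {miles / lower:,.0f} miles")
# -> roughly one per 23 million miles to one per 5 billion miles,
#    a range that easily straddles the ~90 million mile US average
```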
 
Yeah, there are going to be several lifetimes spent writing the test suites needed for these systems to be "certified" by the DOT and other national standards bodies. There are lots of edge cases whose behavior will need to be verified before regulators sign off.

And that's without considering regional driving styles. As an example, if I drove around home how I was recently driving in Rome (and I was a "timid" driver there), I'd be arrested...

I know what you mean! In Greece, taxis on the islands will tailgate about 1m behind the car in front, driving in the middle of the road while waiting for an overtaking opportunity. I'm not sure how a Tesla would react if a car blocked its rear-facing camera!
And it's hard enough that a large proportion of the world drives on the other side of the road!
 
The regulation should require any manufacturer who claims level 5 full autonomy to design their cars without human driving interfaces at all, i.e. no windows, no wheel, no joystick, no screen, only voice assistance like Google Assistant. This is because level 5 is supposed to be based on mature AI technology. I really worry that Tesla and Nvidia are betting too much on their new hypes, AI and deep learning. Who has proven that 40x the computing power is enough? Who has proven that driving has the same kind of complexity as playing chess? Who has proven that within 2 years an AI will be able to socialize with, or behave in a friendly way toward, all the drivers and the animals/pedestrians of various moving speeds at a stop sign? If a so-called level 5 AI cannot deal with all of this, maybe we need to consider upgrading infrastructure with new markings/signals, fences, traffic rules, and/or even rails, but that cannot be done in a few years - and what about other countries? Technology alone may solve 99% of the problems, but the remaining 1% will render it a failure where human life is at stake.
 
I agree it's probably going to be provably safe soon.
But there's A: safely getting back to Autopilot version 1 functionality (level 2?). There's B: your country allowing level 5. There's C: having enough confidence in your asset to send it out around the country as a taxi, or summon it across the country (not on private land). It's mad to be confident that B or C will come soon.
I'd add that it shows poor statistical knowledge for Elon Musk to state (paraphrasing) that "based on 100 million miles and 1 fatality, the car is already better than average" when the average is approx. 1 death every 90 million miles. You can't conclude anything accurate from 1 sample. And actually it's probably 2 or 3 deaths, the 2nd being the one I think happened 1st (but was reported 2nd), where a driver in China drove into the back of a street sweeper. Both deaths seem attributable to a lack of attention caused by extreme over-confidence. I work in programming, supporting statisticians in the medical-trials industry evaluating drug safety/efficacy.

Elon's comparison - 130 million Autopilot miles against a national average of 1 death every 90 million miles - is flawed in a more basic way than mere sample size. That 90-million-mile average is composed of all kinds of driving conditions. The Tesla Autopilot sample covers only the conditions under which Autopilot will function: basically controlled-access highways in good visibility.
 
Elon's comparison - 130 million Autopilot miles against a national average of 1 death every 90 million miles - is flawed in a more basic way than mere sample size. That 90-million-mile average is composed of all kinds of driving conditions. The Tesla Autopilot sample covers only the conditions under which Autopilot will function: basically controlled-access highways in good visibility.

Bingo!
 
I don't think the new hardware can handle some sections of the German autobahn either, as 250m of vision might not be sufficient.

250m is plenty sufficient on the autobahn.

First, I will argue that if you think you can judge the speed of a car 250 meters back using just your side mirror - while driving and paying attention to the car in front of you that you're trying to pass - better than a computer can, then I'm going to say you're crazy.

Second, 200 mph = 89.408 m/s, so 250 m / 89.408 m/s ≈ 2.8 seconds if you were stopped. If you are traveling at 100 mph, the closing speed is halved, so you have roughly double that time: about 5.6 seconds. That's plenty of time for the 200-mph driver to slow down, and/or for the car to take action, or for you to take over. German drivers are supposedly great drivers, so it's more than likely the fast car will slow down. Granted, that assumes the car recognizes the threat for the entire 5.6 seconds - and the approaching car can see you for well beyond that.
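For anyone who wants to check the arithmetic, here's the closing-time calculation as a tiny script:

```python
# Sanity-checking the closing-time numbers above.
MPH_TO_MS = 0.44704  # metres per second per mph

def time_to_close(gap_m: float, fast_mph: float, slow_mph: float) -> float:
    """Seconds for the faster car to close the gap on the slower one."""
    return gap_m / ((fast_mph - slow_mph) * MPH_TO_MS)

print(f"{time_to_close(250, 200, 0):.1f} s")    # ~2.8 s if you're stopped
print(f"{time_to_close(250, 200, 100):.1f} s")  # ~5.6 s if you're doing 100 mph
```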

Third, why would an autonomous car be driving in the fast lane on its own unless there was some wild obstruction on the right??? In which case, passing in under 5.6 seconds is easily possible. If there's a traffic jam, then the fast car will have to take action anyway.

I doubt the self-driving car would find itself in the leftmost lane.

Even a normal car would get rear-ended if the approaching driver is dumb or not paying attention.

 
One quarter of the brain is believed to be devoted to vision processing. The vast majority of that analysis is automatic/subconscious. Processing the external environment is vastly complicated.

I agree on the brain. I studied neuroscience for "fun" at a top university for a bit. But all those studies measure activity in different parts of the brain, and they really have no clue about a lot of it. Sure, that part of my brain is firing, but I am not consciously thinking about driving for huge swaths of time.

Even when I learned to drive, I always found it amazing how easily one can drive without truly consciously "thinking". But I do have one friend who is a car nut who "drives mentally" -- very different!
 
Here are some scenarios where I think automation may face big challenges.

1. In California, motorcyclists can ride between two lanes of cars (lane splitting), essentially making their own lane. We are supposed to watch for them, which is near impossible for a driver.
2. Here in California, we have a ton of cyclists who ride for leisure. They ride illegally side by side when there is no bike lane, chatting with their friends. In some places it is quite safe if the shoulder is wide; in other places it is downright dangerous. Sometimes the pelotons come through, and they are extremely aggressive, even crossing over into the opposing lane.
3. Try driving in India, where there really are no lanes.
4. In different parts of the world there are large animals on the roads: cows in India, buffalo in Yellowstone, sheep in Australia. In California we have deer that at times get frightened and jump in front of you.
5. The Sierras, when they storm, really, really storm. Once, when I was an avid skier, I avoided I-80 and took a side road up. The road was open and I had 4WD and it was a lonely road, but it was near impossible to see.

Google has been working on this for years, driving around mostly Mountain View, trying to crack this last 1%.

Autonomous driving doesn't interest me all that much unless I can drive asleep, or at least while surfing the web. For me it's hard to pay attention if I am not at least looking at the road. I drive on mental autopilot most of the time.

AP 2 has to learn to handle all these situations at least as well as humans do, and that will take billions of miles... but Tesla will log billions of miles even in the first year, and more every year after.

To be really safe would require no human drivers and every vehicle on the road being networked. Motorcycles and bicycles would simply have to be banned, since they can't realistically fit the standard.

Driving asleep or web surfing (or vehicles with no humans aboard) is exactly what it's about. Anything short of that is just driver assist.
 
Saghost wrote:

That may be; I certainly don't know. What I can share with you is that experienced North Country drivers do a lot of winter driving, in effect, by the seat of our pants. We "feel" the road - where the crown is, what the slope of our lane...and the opposing one...seems to be, subtle differences in the wear patterns of the pavement. And, obviously, the difference between what is road surface and what is the edge, and the no-brainer "ga-dump ga-dump" of centerline Botts' dots, of which there are approximately zero on the roads I'm discussing.
All that is made the more subtle by snow/ice coverings. I'll be the first one to congratulate the programmers who build those kinds of experiences into their algorithms.

Meanwhile, back in the blizzard -

So where were we? Hearts in our mouths, soft stuff in our pants, foot on the brake pedal and wondering what could have caused the truck's lights to disappear. And then - looming out of the blinding snow just a few feet from our pickup's hood -

a bison. Crossing the road, directly behind the truck. Others of its herd, we later learned when both the truck and we stopped to talk, had begun to cross not in front of but effectively "alongside" the leading truck; the driver saw those apparitions next to him and braked; as the bison entered the road, its massive bulk blotted the truck's lights from our vision. And as those lights had been the only distinguishable object in our diminished world, when they went out, so did everything.

Now, a bison is the very largest creature anyone, anywhere outside Africa, can encounter on the world's highways. And few hippos or elephants wander about in blizzards. At twice the mass of a moose, a bison is more than a match for any vehicle short of the largest Class 8s, and even then only under just the right circumstances. We were very, very lucky.

Back to autonomous driving - can today's non-visual hardware properly anticipate such a situation? Can it parse the back of a semi with a monster passing beside it? And react appropriately? Remember: the truck braking was not the truck stopping. It was the driver gut-reacting, ex post, to an event that was, for him, already finished - his foot was on the brake pedal only momentarily. Fortunately for us, that was enough for us to react as well and slow down for some as-yet-unknown event.

The first word out of my mouth was holy and the second one wasn't.

I think Tesla's radar would have seen the bison and known its distance from the truck through the snowstorm. That's a good example of the kind of situation Elon would cite where the system would outperform a good human driver (and where lidar wouldn't help much).
 
How does the instructor tell the car: I want you to turn left, turn right, pull over safely and reverse parallel park, and when I tap the dash, do an emergency stop. :)

Pretty straightforward? Tests are already planned in advance so...

Plot a test route in the sat nav.

The car will self-park at the end of the route anyway, just make sure the only space is parallel.

Arrange for an emergency braking situation along the route.
 
Not sure if this is relevant, but I feel self-driving cars should be programmed with defensive driving techniques. In some situations where a collision is imminent, using defensive techniques (accelerating away from danger, swerving into a neighbouring lane, etc.) could mean the difference between escaping a potential accident unscathed and not escaping it at all.
 
I agree it's probably going to be provably safe soon.
But there's A: safely getting back to Autopilot version 1 functionality (level 2?). There's B: your country allowing level 5. There's C: having enough confidence in your asset to send it out around the country as a taxi, or summon it across the country (not on private land). It's mad to be confident that B or C will come soon.
I'd add that it shows poor statistical knowledge for Elon Musk to state (paraphrasing) that "based on 100 million miles and 1 fatality, the car is already better than average" when the average is approx. 1 death every 90 million miles. You can't conclude anything accurate from 1 sample. And actually it's probably 2 or 3 deaths, the 2nd being the one I think happened 1st (but was reported 2nd), where a driver in China drove into the back of a street sweeper. Both deaths seem attributable to a lack of attention caused by extreme over-confidence. I work in programming, supporting statisticians in the medical-trials industry evaluating drug safety/efficacy.

No argument there. It will indeed take a lot more data (and, I'd think, from multiple carmakers) before we can have enough confidence in these systems to let them loose on arbitrary streets. (There have been some proposals to allocate certain areas, like sections of inner-city business zones, for the exclusive use of autonomous vehicles.)

Here are some factors we have to consider:

  • Elon does often indicate more confidence in his solutions than the facts would seem to imply. He has a "reality distortion field" like Steve Jobs did - maybe this is part of being a visionary. Also, I think he sees things through "California-tinted glasses," where he assumes the whole world thinks like folks in California - this is certainly not the case!
  • There will be regulatory differences between states (and countries too of course). In the US, state-by-state regulations were harmonized for commercial vehicles by requiring states to adopt the federal codes; for autonomous vehicles, I think the same will be needed.
  • The whole field is a moving target. Example: OK, Tesla collected 150 million miles (or whatever) of autopilot driving data using its current suite of sensors. Now, they are building their cars with a new set of sensors. Will the data be comparable, or apples-to-oranges?
  • There is an additional set of technologies available which we often forget to consider on autonomous vehicles: V2V and V2I - communications between vehicles and to infrastructure - these can make huge improvements to autonomous vehicle safety performance, because everything and everyone has total situational awareness. (Yes it will take years to get the bulk of the fleet using this, but I expect it will happen).

I work in the field of public safety software R&D, so this whole field excites me with the idea of dramatically reduced crashes and fatalities.
 
No argument there. It will indeed take a lot more data (and, I'd think, from multiple carmakers) before we can have enough confidence in these systems to let them loose on arbitrary streets.
  • The whole field is a moving target. Example: OK, Tesla collected 150 million miles (or whatever) of autopilot driving data using its current suite of sensors. Now, they are building their cars with a new set of sensors. Will the data be comparable, or apples-to-oranges?

Two things. Note that ALL future Tesla cars will have the hardware and will be operating in shadow mode (constantly collecting data). When the Model 3 comes out, you will see such a dramatic increase in the miles of data available that iterations of enhanced DNN models will come fairly quickly. There is no proposal to even turn on level 5 autonomy until late next year. By the end of 2017 that might equate to up to 200,000 cars with the new hardware (Model S/X and 3) averaging 30 miles per day (about 10,000 miles per year each), or totalling 6 million miles per day worth of training data. By late 2018 this may be as high as 21 million miles per day, if Tesla hits production targets. It'd be possible to hit 1 billion miles in about two months at that point.
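The arithmetic is easy to check (the fleet sizes and per-car mileage are the rough estimates above, not official figures; the ~700,000-car fleet for late 2018 is just back-calculated from the 21 million miles/day claim):

```python
# Checking the fleet-data arithmetic above. Fleet sizes and per-car
# mileage are the post's rough estimates, not official figures; the
# 700,000-car fleet is back-calculated from the 21M miles/day claim.
def fleet_miles_per_day(cars: int, miles_per_car: float) -> float:
    return cars * miles_per_car

end_2017 = fleet_miles_per_day(200_000, 30)   # 6.0 million miles/day
late_2018 = fleet_miles_per_day(700_000, 30)  # 21 million miles/day

print(f"End 2017:  {end_2017 / 1e6:.0f}M miles/day")
print(f"Late 2018: {late_2018 / 1e6:.0f}M miles/day")
print(f"Days to 1 billion miles: {1e9 / late_2018:.0f}")  # ~48 days, about two months
```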

To put this into perspective, Google's cars just hit 2 million miles after years of work.

It really doesn't matter whether they can use all, or even a portion, of the currently collected AP 1.0 data. Once the Model 3 is released, the data is going to come pouring in. They could probably use the AP 1.0 data as a base model for Enhanced Autopilot, but it'll need a lot more training for full autonomy.

For now, Tesla can remain safely behind the guise of waiting for regulatory approval while the autopilot DNN models get better and better over the next year.
 
What will be interesting is how they will clean up the fleet-learning data. Will every rule be country- and state-specific? If so, you may find the car performs better in US/CA than it does in (e.g.) CA/BC, due to the size of the fleet in California.

If not, how would they determine between country rules and state rules?

For example, turning right on a red light: when the fleet learns this behaviour, is it identified by the DNN optimiser (the DGX-1 back at the mothership that generates the DNN updates before they get pushed out to the fleet) to be reviewed, classified, and released by a human? I can imagine the vast amount of work needed to do that manually for every change in behaviour... but if it is done automatically, how do you prevent it from learning bad habits?
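One way to picture it: jurisdiction-dependent behaviours probably can't be left to fleet learning at all; they'd need an explicit, human-reviewed rule table that gates what the DNN is allowed to do, something like this (entirely hypothetical structure and names):

```python
# Hypothetical sketch: an explicit, human-reviewed table gating learned
# behaviours by jurisdiction, so the fleet can't "learn" right-on-red
# somewhere it's illegal. All names and entries are illustrative only.
JURISDICTION_RULES = {
    ("US", "CA"): {"right_on_red": True,  "drive_on": "right"},
    ("CA", "BC"): {"right_on_red": True,  "drive_on": "right"},
    ("GB", None): {"right_on_red": False, "drive_on": "left"},
}

def maneuver_allowed(country: str, region: str | None, maneuver: str) -> bool:
    rules = (JURISDICTION_RULES.get((country, region))
             or JURISDICTION_RULES.get((country, None), {}))
    return rules.get(maneuver, False)  # unknown jurisdiction -> be conservative

print(maneuver_allowed("US", "CA", "right_on_red"))  # True
print(maneuver_allowed("GB", None, "right_on_red"))  # False
```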
 
What will be interesting is how they will clean up the fleet-learning data. Will every rule be country- and state-specific? If so, you may find the car performs better in US/CA than it does in (e.g.) CA/BC, due to the size of the fleet in California.

If not, how would they determine between country rules and state rules?

For example, turning right on a red light: when the fleet learns this behaviour, is it identified by the DNN optimiser (the DGX-1 back at the mothership that generates the DNN updates before they get pushed out to the fleet) to be reviewed, classified, and released by a human? I can imagine the vast amount of work needed to do that manually for every change in behaviour... but if it is done automatically, how do you prevent it from learning bad habits?

They are pushing for national standards and regulations and Elon was saying that he expects Europe to have its own single standard as well.

I imagine changes to the DNN are validated against real-world data, and the results can be measured as negative or positive outcomes. It won't necessarily learn bad habits if those are scored as negative or undesirable outcomes. It has to be fairly automated...

All changes are then tested by a QA engineering team on a track, then by Elon himself and a small group of alpha users. Then it goes to the early access program, with about 1,000 owners worldwide. If that goes well, it goes to the fleet in shadow mode. It records possible positive and negative outcomes until there's a big enough sample set; once it's proven to be safe, it's enabled.
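Shadow mode, as described, could be as simple as running the candidate model's planner alongside the human and logging disagreements for scoring back at the mothership. A minimal sketch (all names hypothetical):

```python
# Minimal sketch of shadow-mode evaluation: the candidate model plans in
# parallel with the human driver, its output is never executed, and only
# agreement statistics / disagreements are kept. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Frame:
    sensor_data: dict
    human_action: str  # what the driver actually did

def shadow_agreement(model, drive_log: list[Frame]) -> float:
    """Fraction of frames where the model's plan matched the human's action."""
    agreed = 0
    for frame in drive_log:
        planned = model.plan(frame.sensor_data)  # computed, never executed
        if planned == frame.human_action:
            agreed += 1
        # else: queue the disagreement for upload and human review
    return agreed / len(drive_log)
```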
 