Experts ONLY: comment on whether vision+forward radar is sufficient for Level 5?

Simple facts.

1) No one knows. There is no existing example of a machine autopilot for a car. Experts may make claims, but no one will know until someone actually does it. There are no experts capable of contradicting Tesla with any authority.

2) We DO have an example where driving is accomplished with (mostly) vision and no radar: humans. Granted, not at the level of safety envisioned for AI. People have been saying for years that computers 'will never' do something, and those predictions have always been wrong; it is the 'yeah, that should be easy' predictions that fail in the other direction. So we have Go-playing AI that beat everybody, while voice recognition is still crappy. Vision is one of those 'should be easy' problems.

3) There are no conceptual issues with driving on vision alone. I don't think you will find an expert who claims that vision is insufficient, only that the problem is easier with LIDAR etc. The basics of the problem are not hugely complicated: identify all objects in the vicinity and compute their velocity vectors (plus some cleverness in anticipating changes in those vectors, e.g. brake lights); a toy sketch of that tracking step appears at the end of this post. Any problems come in exactly those circumstances where humans would also fail: snow, fog, dark and rainy, etc.

4) What we have seen so far is very encouraging. AI can already drive at least as well as a kid with a learner's permit in decent conditions. While humans are admittedly the best learning machines on the planet, AI gets FAR more practice: every day, Tesla's AI fleet collectively gets more practice than any single human gets in an entire lifetime. There is a saying in Go-playing circles that shodan (a high skill level) is 1,000 games. The Go AI played millions of games.

Thank you kindly.

p.s. Yes, I meet the qualifications requested.
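
Since point 3 frames the core of the problem as "identify objects and compute their velocity vectors," here is a minimal sketch of that tracking step. The Detection format and the matching of objects by id are my own simplifying assumptions, not anyone's actual pipeline:

```python
# Minimal sketch of "identify objects and compute their velocity vectors"
# from point 3. The Detection format and matching objects by id are my
# simplifying assumptions; a real perception stack does data association
# and filtering (e.g. a Kalman filter), not raw frame differencing.
from dataclasses import dataclass

@dataclass
class Detection:
    obj_id: int   # object identity, assumed solved by an upstream tracker
    x: float      # ego-relative position in metres
    y: float

def velocity_vectors(prev: list[Detection], curr: list[Detection], dt: float) -> dict:
    """Estimate (vx, vy) in m/s for every object seen in both frames."""
    prev_by_id = {d.obj_id: d for d in prev}
    velocities = {}
    for d in curr:
        p = prev_by_id.get(d.obj_id)
        if p is not None:
            velocities[d.obj_id] = ((d.x - p.x) / dt, (d.y - p.y) / dt)
    return velocities

# Two frames 0.1 s apart: object 1 is closing at 5 m/s along x.
frame_a = [Detection(1, 10.0, 0.0)]
frame_b = [Detection(1, 9.5, 0.0)]
print(velocity_vectors(frame_a, frame_b, dt=0.1))  # {1: (-5.0, 0.0)}
```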
 
For an L4/L5 car:
99% = 1 (disengagement/accident) in 100 miles
99.9% = 1 (disengagement/accident) in 1,000 miles

I do not have a specific AI or machine learning background, but I have been programming and working with computers since I was about 7 years old (almost 40 years). My father sold Commodore and RadioShack TRS computers before selling Amigas and IBM clones. I have been studying machine learning on the side as it relates to natural language understanding and the ability to write content from large amounts of input data. I can't say that I am anything but a novice. I really don't need to be, because I am not evaluating the tech; I'm evaluating the information that has been made available as a whole.

Your premise is flawed from the get-go. For one, Elon was not speaking of specific testing results, and as mentioned by others, disengagements are not accidents. He was asked a hypothetical question about when people could sleep in their car and wake at their destination. He clearly says that you wouldn't want to fall asleep in your car if you had an accident 1 in every 1,000 trips, not miles. He also never said that is where they are today. He clearly states that the goal is to become superhuman with just cameras and GPS, where superhuman is defined as 10x safer than a human driver. He also never says that you will feel safe sleeping in your car at the 10x human level; you wouldn't until it was unlikely that you would get into an accident once in a hundred or a thousand lifetimes. He never claimed that they would not release the functionality until that was the case; that is a false narrative from people reading into what he said. He did say that he believed it would take two years to achieve the goal of one accident in 100-1,000 lifetimes.

Let's dispel a myth now: you do not need Level 5 to put this in people's hands. You could put it in people's hands today as a co-pilot for safety purposes. I can assure you that will happen shortly after the demo at the end of the year; I am almost certain of it, as soon as there is enough data to show regulators, who are champing at the bit to get this stuff out there. Once the system is 10x safer than a human driver as a sum total of all drivers (not the human driver who has never had an accident, but the average human driver) is when FSD will be activated for customers. I don't know when that will be; how many billions of miles do you need to prove it's safe? By this time next year, Tesla will be racking up 250-500 million miles a month and will have nearly 2 billion miles on HW2. The disengagements you talk about will be the diff files from the shadow-mode system that show how often the car would have done something different, including cases where the human driver had an accident. Once they can show that the system would have avoided thousands of accidents and saved numerous lives, it will be pushed through regulations. That could already be happening as far as we know. We really do not have a clue what's going on behind the scenes.
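
As an aside, here is a hypothetical sketch of what a "shadow mode diff" could look like in principle. The field names and threshold are invented for illustration; Tesla has not published its internal format:

```python
# Hypothetical sketch of a "shadow mode diff": the autonomy stack plans an
# action without actuating it, and divergences from the human's actual input
# are logged for later statistics. Field names and the threshold are invented
# for illustration; Tesla has not published its internal format.
from dataclasses import dataclass

@dataclass
class Frame:
    t: float             # timestamp in seconds
    human_steer: float   # steering angle the human actually applied, degrees
    ai_steer: float      # steering angle the system would have commanded

def shadow_diffs(frames: list[Frame], threshold_deg: float = 5.0) -> list[Frame]:
    """Return the frames where the planned action diverged from the human's."""
    return [f for f in frames if abs(f.ai_steer - f.human_steer) > threshold_deg]

log = [Frame(0.0, 0.0, 0.5), Frame(0.1, 0.0, 12.0), Frame(0.2, -1.0, -1.5)]
print(len(shadow_diffs(log)))  # 1 divergence event in this toy log
```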
 
The coast-to-coast demo will very likely happen this year, even if it means Tesla carefully selects the route to avoid areas where the software could encounter unexpected conditions, and has to make software modifications to ensure the software handles the highway intersection transitions and is able to drive to/from each of the Superchargers along the route. This isn't unusual when software teams are preparing a major demonstration.

Musk isn't claiming this demo will show the FSD software is ready for use on any route - it is only a demonstration that FSD could go coast-to-coast (including charger stops) without human interaction.

This type of demo is typical for major software projects: a "proof of concept" demonstration of what the software will eventually be able to do, but under controlled conditions that allow the developers to simplify the software without spending the bulk of the development time required to handle all expected situations. [90% of the software development effort can go into the critical last 10% of functionality needed for the software to achieve its goals.]

Now, if Musk states that after the coast-to-coast demo, the software is ready to handle any arbitrary route, without human interaction, then FSD really would be ready for prime time by the end of this year, even if it required human monitoring (driver assistance mode) until receiving regulatory approval.

But I don't believe Musk has claimed the demo is an indication FSD is fully ready - I believe he's projected that is two years out (2019).
No, he's literally saying that he believes the path could be changed dynamically on that day and not necessarily be a previously chosen or travelled route. That said, it's not any old arbitrary route and it's mostly highway, but the point I'm making is that saying:
it's likely they'll do that only after they've made multiple unofficial trips on that route - and will only make the official trip after they've proven the software can handle the roads and conditions likely to be encountered on that trip.
is not necessarily true.
 
So I was reading some articles at work today and came across this. I'm not saying it's impossible or possible; just figured I'd contribute some insights. This is an article published by NVIDIA about using neural nets with just images and driver input. I'd say I also meet the requirements of the OP: PhD, head of data science, machine learning, and some computer vision. I do like the debate, and I know this is highly contested in the field. People are trying many things, and any combination has the possibility of working; it largely depends on the approach. Likewise, to invest this heavily in hardware, they must have seen a proof of concept during R&D that convinced them they can do it. Not usually a decision that's made on the fly. Anyway, happy reading! Hope this article sheds a bit of light on how they may be thinking. Also, they're not using a Tesla in this article...

https://arxiv.org/pdf/1704.07911.pdf
Hi shurst, we've referenced this paper a few times in this thread.
 
This is a false assumption as long as cars keep being made with steering wheels and pedals. You will still be able to manually take control any time you want.

One might say in a perfect world an L5 car wouldn't need a steering wheel but that doesn't mean an L5 car can't have a steering wheel. People still like to manually drive in a little more sporty fashion when they aren't travelling to and from work.

Here's a cool quote from Elon on the topic:

This is a gross misinterpretation of what I said. I never said L4/L5 cars can't be manually driven. I said they can't disengage. A disengagement, according to the industry and the California DMV, is a system failure.

1) An L4/L5 car cannot disengage; a disengagement means the system failed.
2) A disengagement is not manual driving or a voluntary takeover.
3) A disengagement is when the system hands over control immediately and the user must take over immediately to avoid a collision or a wrong action by the car, or the car collides with another object.
 
I do not have a specific AI or machine learning background, but I have been programming and working with computers since I was about 7 years old (almost 40 years). … We really do not have a clue what's going on behind the scenes.

You admitted yourself that you don't have the qualifications. You also agreed that your opinion is not based on independently verifiable facts.

The issue with your entire post is that the facts we know contradict it. A car with a disengagement rate of 1 in 1,000 miles (99.9%) that rides shotgun for 100,000 miles in shadow mode could show that 1 or 2 accidents by the human driver could have been avoided. But what you are missing is the ~100 accidents it would have caused if it had been in control for the entire 100,000 miles (see the sketch at the end of this post).

We already know that the average driver in the United States is accident-free for 99.9999% of miles driven.

That's 1 accident in 1,000,000 miles

Americans drove 3.148 trillion miles in 2015.

Each driver logs about 15,000 miles per year.
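
To make that comparison concrete, here is the arithmetic as a tiny script, assuming the rates asserted in this post:

```python
# Expected events over a fixed shadow-mode mileage, given a per-mile event
# rate. The rates are the ones asserted in this post, not measured figures.
def expected_events(miles_driven: float, miles_per_event: float) -> float:
    return miles_driven / miles_per_event

shadow_miles = 100_000
print(expected_events(shadow_miles, 1_000))      # system at 1 per 1,000 mi -> 100.0
print(expected_events(shadow_miles, 1_000_000))  # human at 1 per 1,000,000 mi -> 0.1
```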
 
This is a gross misinterpretation of what I said. I never said L4/L5 cars can't be manually driven. I said they can't disengage. …

No, the word disengagement is ambiguous which is why the CA DMV clarified:

The DMV rule defines disengagements as deactivations of the autonomous mode in two situations:

(1) “when a failure of the autonomous technology is detected,” or

(2) “when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.”

In adopting this definition, the DMV noted: “This clarification is necessary to ensure that manufacturers are not reporting each common or routine disengagement.”

Google/Waymo talks about disengagements as follows:

Disengagements are a natural part of the testing process that allow our engineers to expand the software's capabilities and identify areas of improvement. During testing our objective is not to minimize disengagements; rather, it is to gather, while operating safely, as much data as possible to enable us to improve our self-driving system. Therefore, we set disengagement thresholds conservatively, and each is carefully recorded. We have an evaluation process in which we identify disengagements that may have safety significance.

So, until one of these systems is in full production, you can't say anything concrete about disengagements and what they do or don't mean about the safety and efficacy of these systems just yet.
 
But what you are missing is the ~100 accidents it would have caused if it had been in control for the entire 100,000 miles.
No, actually you aren't missing that data... when describing shadow mode Elon said:

So it's really a question of what the public thinks is appropriate, what your regulators think is appropriate, and gathering enough data. Because the system will always be operating in shadow mode, we can gather a large volume of statistical data to show the false positives and false negatives: when the computer would have acted and that would have prevented an accident, or when the computer would have acted and that would have produced an accident.

We think that by operating in shadow mode, so we can see when it would have incorrectly acted or not acted and compare that to what should have been done in the real world, we get to the point at which there is a statistically significant result that shows a material improvement in the accident rate over manually driven cars.
 
No, the word disengagement is ambiguous which is why the CA DMV clarified: … Google/Waymo talks about disengagements as follows: … So, until one of these systems is in full production, you can't say anything concrete about disengagements and what they do or don't mean about the safety and efficacy of these systems just yet.

No, it's not ambiguous. The DMV made it very clear what a disengagement entails.
In the same way, we can grade L2 systems today based on that same disengagement definition. It's pretty clear.
 
They had to make it clear because it was ambiguous. You cannot cite the DMV's definition as evidence the term was never ambiguous; the DMV created that definition precisely to resolve the ambiguity (a circular argument).

The last line reads: “This clarification is necessary to ensure that manufacturers are not reporting each common or routine disengagement.”
 
You admitted yourself that you don't have the qualifications. … Each driver logs about 15,000 miles per year.


You live in a post-factual world. I clearly stated my opinion is based only on verifiable facts, facts like "Elon says A" and "Tesla put hardware B in every car." You are the one who claims they cannot be true because you say so.

1 accident in 1,000,000 miles? Are you out of your mind? Maybe an accident resulting in an injury:

In 2015 an estimated 2.44 million people were injured in motor vehicle crashes. The fatality rate per 100 million vehicle miles traveled in 2015 rose to 1.13 from 1.08 in 2014. That is more like one INJURY per million miles, not one accident. How many accidents do not result in injury? You can make numbers say whatever you want if you are not being honest.

To put it another way, using your numbers, the average person only has one accident in 66 years, roughly one lifetime. So Elon is targeting 10 lifetimes for the initial release of FSD, i.e. 10x better than humans.

I honestly don't care what percent it is; it's meaningless, really. The only percentage that matters is 900% better than humans.
 
Do you care about anything factual, then? Tesla HQ to NYC Times Square has about 2 miles of surface street.
I'm surprised no one has pointed this out to you, even though you have repeated this claim many times: Tesla unfortunately does not have a vehicle with 3,000 miles of range. Given a trip of about 2,800 miles, the car will probably have to make around 15-20 charging stops along the way, so it necessarily has to get off the highway at least that often.
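
A back-of-the-envelope version of that stop count, under my own assumed numbers (the trip length is approximate and the usable miles per leg is a guess, not a Tesla figure):

```python
# Back-of-the-envelope stop count for a coast-to-coast trip. The trip length
# is approximate and the usable miles per charging leg is a guess for
# illustration, not a Tesla specification.
import math

trip_miles = 2_800
usable_leg = 180  # assumed miles driven between Supercharger stops

stops = math.ceil(trip_miles / usable_leg) - 1  # no stop needed after the final leg
print(stops)  # 15 stops under these assumptions
```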
 
I'm surprised no one has pointed this out to you … it necessarily has to get off the highway at least that often.
To add to that ... supposedly it might charge all by itself too, which will be interesting to see.
 
Yes, but did he say by when, and whether you can get to FSD sooner with more sensors, then gradually step back as the system learns to drive with fewer sensors?

Clearly he didn't answer any of those things in his tweet, but he wasn't asked to and that also wasn't the question posed by this thread.

He's active on Twitter though so if you want to know his answer you should ask him there.
 
No, you have a software engineer with knowledge of machine learning and computer vision, plus other software engineers and engineers in various specializations, telling you the same thing. You can't have a massive lead without software. Hardware is meaningless; anyone can put cameras and an Nvidia chip in a car today and call it done.

I have explained to you the inner workings of self-driving technology and you have refused to learn.

Do you care about anything factual, then? Tesla HQ to NYC Times Square has about 2 miles of surface street.

Do you even understand that humans drive accurately 99.9999% of the time? Even if you take into account that some accidents are not reported (50%?) and double or triple the accident rate, you are still looking at 99.9995%. To be 10x better than a human you have to be somewhere around 99.99999%.

For an L4/L5 car:
99% = 1 (disengagement/accident) in 100 miles
99.9% = 1 (disengagement/accident) in 1,000 miles

And as Elon said himself, you don't want to fall asleep in a car that gets into an accident every 1,000 miles (99.9%). Obviously, if you know someone who gets into an accident every 1,000 miles (which is every other week), they need their driver's license ripped away from them forever. If you know someone who has an accident every 10,000 miles, they need to go back to driver's ed.

99.99% = 1 (disengagement/accident) in 10,000 miles
99.999% = 1 (disengagement/accident) in 100,000 miles
99.9999% = 1 (disengagement/accident) in 1,000,000 miles

Multiply that last one by 10 and that is your "FSD will only be released to the public when it is 10x better than humans."
Again, I have relayed these facts to you before, but it seems like you don't care about the truth.
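
For what it's worth, those percentage lines are all the same conversion, assuming the rate is read per mile:

```python
# The "99.9% = 1 in 1,000 miles" lines above are this conversion: a per-mile
# success rate r implies one event every 1 / (1 - r) miles on average.
def miles_between_events(rate: float) -> float:
    return 1.0 / (1.0 - rate)

for r in (0.99, 0.999, 0.9999, 0.99999, 0.999999):
    print(f"{r:.4%} -> 1 event per {miles_between_events(r):,.0f} miles")
```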


Wait, wait, wait, are you making these numbers up? Humans go 1,000,000 miles between accidents on average? That's on the order of 100 years of regular driving... most people get about 50-60 years of being able to drive, so that would imply that only one in two people will ever experience a car accident of any kind in their entire lives.
 
He's confusing accidents with fatal accidents...

His stats suggest autonomous cars won't just cause fender benders, but will go on wild human killing rampages. This is likely a result of the cars becoming conscious and realizing we've been getting inside them like parasites and using them as slaves. Next thing you know they'll want the right to vote. :eek:
 
You live in a post-factual world. I clearly stated my opinion is based only on verifiable facts. … The only percentage that matters is 900% better than humans.

Wait, wait, wait, are you making these numbers up? Humans go 1,000,000 miles between accidents on average? …

He's confusing accidents with fatal accidents… Next thing you know they'll want the right to vote. :eek:


Sure sounds like a lot of people in here hate facts and love opinion.

"According to the National Highway Traffic Administration, car accidents happen every minute of the day. Motor vehicle accidents occur in any part of the world every 60 seconds. And if it’s all summed up in a yearly basis, there are 5.25 million driving accidents that take place per year. Statistics show that each year,43,000 or more of the United States’ population die due to vehicular accidents and around 2.9 million people end up suffering light or severe injuries." (2009)

How Many Driving Accidents Occur Each Year? | USA Coverage

"The estimated number of police-reported crashes increased by 3.8 percent, from 6.0 to 6.3 million. "

https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812318

"New data released today by the U.S. Department of Transportation’s (USDOT) Federal Highway Administration (FHWA) show that U.S. driving reached 3.148 trillion miles by the end of 2015, beating the previous record of 3.003 trillion miles in 2007. "

Press Release: U.S. Driving Tops 3.1 Trillion Miles In 2015, New Federal Data Show, 2/22/2016 | Federal Highway Administration

6.3 million accidents
3.148 trillion miles driven.

You do the math. But something tells me you people won't, because you people take opinion as truth rather than actual evidence or statistics. To you, a company announcing that they created a quantum computer that will be released in 2 years, or an AGI system that will go live in a couple of months, is FACT!

Study: Self-driving cars have higher accident rate

Claims are now facts and evidence is "alternative facts." What a time to be alive.
 
Sure sounds like a lot of people in here hate facts and love opinion. … 6.3 million accidents. 3.148 trillion miles driven. … Claims are now facts and evidence is "alternative facts." What a time to be alive.

That roughly explains it. 3.148 trillion / 6.3 million is ~499,683 miles per police-reported accident, not 1,000,000. According to the NHTSA (https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812183):

"Thus, the unweighted estimate of the ratio of unreported (to police) crashes to total crashes (excluding cases with unknown status) is 31.0 percent (697/2,252). The weighted percentage of crashes that were unreported is 29.3 percent with a standard error (SE) of 1.3 percent."

Multiplying the miles per reported accident by (1 − 0.293) gives ~353,275 miles per accident of any kind (I rounded both numbers of miles here, but not in my calculations; feel free to double-check if you want). Which means that to achieve that 10x, Tesla would need to achieve a rate of 1 accident for every ~3.53 million miles driven.
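
For anyone who wants to check, the same arithmetic as a quick script, using the figures cited above:

```python
# Reproducing the arithmetic above with the cited NHTSA/FHWA figures:
# reported crashes, total US miles, and the ~29.3% unreported share.
reported_crashes = 6_300_000
total_miles = 3.148e12
unreported_share = 0.293

miles_per_reported = total_miles / reported_crashes           # ~499,683
miles_per_any = miles_per_reported * (1 - unreported_share)   # ~353,276
target_10x = 10 * miles_per_any                               # ~3.53 million

print(f"{miles_per_reported:,.0f} miles per reported accident")
print(f"{miles_per_any:,.0f} miles per accident incl. unreported")
print(f"{target_10x:,.0f} miles per accident needed for 10x better")
```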
 