Autonomous Car Progress

This is a misconception. A deep net has a capacity that is a function of the number of weights.

The number of weights is constrained by the amount of compute available on the car. Today’s deep nets have ~100M weights. Larger nets take too long to run, and cars need quick reaction times.

A net with 100M weights cannot learn everything there is to learn from a dataset with billions of examples. It will necessarily forget most of what you teach it.
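
If you want a feel for where a number like that comes from, here's the parameter arithmetic for VGG-16, a classic vision network that lands around 138M weights. Purely illustrative counting; this is VGG-16, not Tesla's network:

```python
# Parameter-count arithmetic for VGG-16, a classic vision network whose
# size (~138M weights) is on the same order as the ~100M figure above.

def conv_params(in_ch, out_ch, k=3):
    # one k x k kernel per (input channel, output channel) pair, plus biases
    return in_ch * out_ch * k * k + out_ch

def fc_params(n_in, n_out):
    # dense weight matrix plus biases
    return n_in * n_out + n_out

convs = [(3, 64), (64, 64),                   # block 1
         (64, 128), (128, 128),               # block 2
         (128, 256), (256, 256), (256, 256),  # block 3
         (256, 512), (512, 512), (512, 512),  # block 4
         (512, 512), (512, 512), (512, 512)]  # block 5

total = sum(conv_params(i, o) for i, o in convs)
total += fc_params(512 * 7 * 7, 4096)  # flattened 7x7 feature map
total += fc_params(4096, 4096)
total += fc_params(4096, 1000)

print(f"{total / 1e6:.0f}M parameters")  # -> 138M
```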

I agree that a deep net has a capacity that is a function of the number of weights. Plus, you could never realistically create a labeled training dataset with billions of examples. Unless you worked for the Chinese government, of course.

But that doesn't take away from the need to have a lot of examples. The number of classifications is predetermined (it's defined by your training set), so all the examples are doing is tuning the weights. For a large dataset, the most important thing, I think, is diversity, not the same thing over and over. This is where Tesla really can make a difference, but I haven't seen any evidence that they're using the data they've collected.
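
To illustrate the point that the classes are fixed while examples only tune weights, here's a toy softmax classifier; the class list and sizes are made up, and nothing here resembles a production net:

```python
# Toy illustration: the output classes are fixed by the label set before
# training starts; more examples only ever adjust the weights.
import numpy as np

classes = ["car", "pedestrian", "cyclist", "traffic_cone"]  # fixed by the training set
n_features, n_classes = 256, len(classes)

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.01, (n_features, n_classes))  # the only thing that learns

def train_step(W, x, label_idx, lr=0.1):
    """One SGD step on a softmax classifier; adjusts W, never its shape."""
    logits = x @ W
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[label_idx] -= 1.0        # d(cross-entropy)/d(logits) = p - onehot
    W -= lr * np.outer(x, p)   # in-place update of the caller's array

# A billion more examples means a billion more calls to train_step,
# but the set of things the net can output never grows past n_classes.
x = rng.normal(size=n_features)
train_step(W, x, classes.index("pedestrian"))
print(W.shape)  # (256, 4) -- unchanged by training
```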

Tesla is fully reliant on vision-based detection, yet they fail some basic pedestrian detection tasks. For example, they didn't ace the IIHS test, and it seems likely to me that it's because their training set wasn't large enough. It wasn't a remarkably difficult test in any way. They did okay at detecting an adult, but they failed at detecting+stopping for a child.

Tesla isn't trying to classify such a massive number of object types that I'd expect the weight limit to play a large role.

I want to make sure it's clear that I'm not an expert in deep neural networks. I have some experience training object-segmentation networks using other people's network designs, but I've done very little work customizing networks.
 
This is a misconception. A deep net has a capacity that is a function of the number of weights.

The number of weights is constrained by the amount of compute available on the car. Today’s deep nets have ~100M weights. Larger nets take too long to run, and cars need quick reaction times.

A net with 100M weights cannot learn everything there is to learn from a dataset with billions of examples. It will necessarily forget most of what you teach it.

That is very informative. Thanks!
 
Tesla is fully reliant on vision-based detection, yet they fail some basic pedestrian detection tasks. For example, they didn't ace the IIHS test, and it seems likely to me that it's because their training set wasn't large enough. It wasn't a remarkably difficult test in any way. They did okay at detecting an adult, but they failed at detecting+stopping for a child.

Slight clarification: The Model 3 missed Superior (acing the test) and received an Advanced rating, because it did detect the child but only achieved a 5 mph speed reduction. 2020 Tesla Model 3 4-door sedan
 
Slight clarification: The Model 3 missed Superior (acing the test) and received an Advanced rating, because it did detect the child but only achieved a 5 mph speed reduction. 2020 Tesla Model 3 4-door sedan

The fact that it only had a 5 mph speed reduction leads me to believe it was a late detection.

Which seems reasonable, considering a child is physically smaller and more difficult to detect with certainty from farther away.

In any case, I considered it a failure of "detecting+stopping", as I didn't know exactly where the failure was.
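
Here's a quick kinematics sanity check on the late-detection read. The 25 mph approach speed and the ~0.8 g braking figure are my assumptions for illustration, not numbers from the IIHS report:

```python
# Sanity check on "5 mph reduction = late detection".
# ASSUMPTIONS (mine, not IIHS's): 25 mph approach speed, ~0.8 g braking.

MPH = 0.44704            # metres per second per mph
G = 9.81                 # m/s^2
decel = 0.8 * G          # assumed hard-braking deceleration

v0 = 25 * MPH            # approach speed
v_impact = 20 * MPH      # after shedding only 5 mph

# Distance needed to shed speed: d = (v0^2 - v^2) / (2a)
d_actual = (v0**2 - v_impact**2) / (2 * decel)
d_full_stop = v0**2 / (2 * decel)

print(f"braking began only ~{d_actual:.1f} m before impact")  # ~2.9 m
print(f"a full stop needed ~{d_full_stop:.1f} m of braking")  # ~8.0 m
```

Under those assumptions, braking that sheds only 5 mph starts roughly 3 m before impact, versus the ~8 m a full stop would have needed, which is exactly what you'd expect from a late detection.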
 
The fact that it only had a 5 mph speed reduction leads me to believe it was a late detection.

Which seems reasonable, considering a child is physically smaller and more difficult to detect with certainty from farther away.

In any case, I considered it a failure of "detecting+stopping", as I didn't know exactly where the failure was.

Yar, I was only clarifying that there was some level of detection and linking the test results for reference. Would of course be better to stop fully.
 
Yar, I was only clarifying that there was some level of detection and linking the test results for reference. Would of course be better to stop fully.

Thanks for linking to the source. I probably should have included it, as it's very relevant to the discussion.

Tesla does have a history of taking IIHS testing very seriously, so I expect them to resolve this fairly quickly.
 
I agree that a deep net has a capacity that is a function of the number of weights. Plus, you could never realistically create a labeled training dataset with billions of examples. Unless you worked for the Chinese government, of course.

But that doesn't take away from the need to have a lot of examples. The number of classifications is predetermined (it's defined by your training set), so all the examples are doing is tuning the weights. For a large dataset, the most important thing, I think, is diversity, not the same thing over and over. This is where Tesla really can make a difference, but I haven't seen any evidence that they're using the data they've collected.

Tesla is fully reliant on vision-based detection, yet they fail some basic pedestrian detection tasks. For example, they didn't ace the IIHS test, and it seems likely to me that it's because their training set wasn't large enough. It wasn't a remarkably difficult test in any way. They did okay at detecting an adult, but they failed at detecting+stopping for a child.

Tesla isn't trying to classify such a massive number of object types that I'd expect the weight limit to play a large role.

I want to make sure it's clear that I'm not an expert in deep neural networks. I have some experience training object-segmentation networks using other people's network designs, but I've done very little work customizing networks.

Slight clarification: The Model 3 missed Superior (acing the test) and received an Advanced rating, because it did detect the child but only achieved a 5 mph speed reduction. 2020 Tesla Model 3 4-door sedan

Pedestrian detection systems don’t work very well, AAA finds

How did the cars do on the easy test?

Unfortunately, the results of the tests were very much a mixed bag. For the Chevy Malibu, while it detected the adult pedestrian at 20mph (32km/h) an average of 2.1 seconds and 63 feet (19.2m) before impact, in five tests it failed to actually apply the brakes enough to reduce the speed significantly before each collision took place. The Tesla Model 3 managed little better; it also hit the pedestrian dummy in each of five runs.

On average, the Chevy slowed by 2.8mph (4.5km/h) and alerted the driver on average 1.4 seconds and 41.7 feet (12.7m) before impact. In two runs, there was no braking at all, even though the system detected the pedestrian dummy.

The Honda Accord performed better. Although it notified the driver much closer to the pedestrian (time-to-collision at 0.7 seconds, distance 32 feet/9.7m), it also prevented the impact from occurring in three of five runs and slowed the car to 0.6mph (1km/h) in a fourth.

Best of all was the Toyota Camry. It gave a visual notification at 1.2 seconds and 35.5 feet (10.8m) before impact. But the Camry also stopped completely before reaching the dummy in each of five runs.

 
Pedestrian detection systems don’t work very well, AAA finds

How did the cars do on the easy test?

Unfortunately, the results of the tests were very much a mixed bag. For the Chevy Malibu, while it detected the adult pedestrian at 20mph (32km/h) an average of 2.1 seconds and 63 feet (19.2m) before impact, in five tests it failed to actually apply the brakes enough to reduce the speed significantly before each collision took place. The Tesla Model 3 managed little better; it also hit the pedestrian dummy in each of five runs.

On average, the Chevy slowed by 2.8mph (4.5km/h) and alerted the driver on average 1.4 seconds and 41.7 feet (12.7m) before impact. In two runs, there was no braking at all, even though the system detected the pedestrian dummy.

The Honda Accord performed better. Although it notified the driver much closer to the pedestrian (time-to-collision at 0.7 seconds, distance 32 feet/9.7m), it also prevented the impact from occurring in three of five runs and slowed the car to 0.6mph (1km/h) in a fourth.

Best of all was the Toyota Camry. It gave a visual notification at 1.2 seconds and 35.5 feet (10.8m) before impact. But the Camry also stopped completely before reaching the dummy in each of five runs.


The fact that Tesla couldn't even ace the IIHS test made it pointless to bring up the even more disastrous showing on the AAA test.

The IIHS test is a very easy straight-line test during the day with no visibility issues.

The AAA test seemed to be done by people who actually expect pedestrian detection to be useful, so they tested things like operation in the dark or while turning: cases where it might come in handy.

I really wish they had tested a Subaru with EyeSight, because I'm really curious how that does with more challenging testing.
 
This is a misconception. A deep net has a capacity that is a function of the number of weights.

The number of weights is constrained by the amount of compute available on the car. Today’s deep nets have ~100M weights. Larger nets take too long to run, and cars need quick reaction times.

A net with 100M weights cannot learn everything there is to learn from a dataset with billions of examples. It will necessarily forget most of what you teach it.

Not sure what you're talking about, but please watch some of Lex Fridman's podcast episodes to learn.
 
I agree that a deep net has a capacity that is a function of the number of weights. Plus, you could never realistically create a labeled training dataset with billions of examples. Unless you worked for the Chinese government, of course.

But that doesn't take away from the need to have a lot of examples. The number of classifications is predetermined (it's defined by your training set), so all the examples are doing is tuning the weights. For a large dataset, the most important thing, I think, is diversity, not the same thing over and over. This is where Tesla really can make a difference, but I haven't seen any evidence that they're using the data they've collected.

Tesla is fully reliant on vision-based detection, yet they fail some basic pedestrian detection tasks. For example, they didn't ace the IIHS test, and it seems likely to me that it's because their training set wasn't large enough. It wasn't a remarkably difficult test in any way. They did okay at detecting an adult, but they failed at detecting+stopping for a child.

Tesla isn't trying to classify such a massive number of object types that I'd expect the weight limit to play a large role.

I want to make sure it's clear that I'm not an expert in deep neural networks. I have some experience training object-segmentation networks using other people's network designs, but I've done very little work customizing networks.

Start by learning the basics and stop making up your own inventions. Deep learning does not learn the same thing over and over again. As soon as a case is solved, it's done, unless the situation changes. A large dataset is only needed to find more edge cases to solve. At least watch Karpathy's presentation to learn how it's done. Tesla cars now send only required/requested data, not redundant data, to the mothership. Sad that there are still people like you two trying to reinvent established science in your dreams. No wonder we have all those flat earthers and climate change deniers.
 
Start by learning the basics and stop making up your own inventions. Deep learning does not learn the same thing over and over again. As soon as a case is solved, it's done, unless the situation changes. A large dataset is only needed to find more edge cases to solve. At least watch Karpathy's presentation to learn how it's done. Tesla cars now send only required/requested data, not redundant data, to the mothership. Sad that there are still people like you two trying to reinvent established science in your dreams. No wonder we have all those flat earthers and climate change deniers.

When I was talking about the same data over and over, I was talking about not having a diverse enough dataset.
https://fortune.com/2019/04/23/artificial-intelligence-diversity-crisis/

I think you'd be surprised by how much training data is reused. For example, if I take a picture of an apple and segment around that apple, I can create 10+ images/labels from that one photo. I do that through things like rotating the image along with its labeling data, or cropping it. But at the end of the day, it's all based on that one photo.
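
As a minimal sketch of that fan-out (assuming Pillow is installed; the file names are hypothetical):

```python
# Minimal augmentation sketch: one labeled photo -> many training examples.
# File names are hypothetical; requires Pillow (pip install Pillow).
from PIL import Image

image = Image.open("apple.jpg")      # the single source photo
mask = Image.open("apple_mask.png")  # its segmentation label

augmented = []
for angle in (0, 90, 180, 270):
    # Rotate the photo and its label together so they stay aligned.
    img_r = image.rotate(angle)
    mask_r = mask.rotate(angle, resample=Image.NEAREST)  # keep label edges crisp
    augmented.append((img_r, mask_r))
    # Mirror each rotation too: 8 geometric variants so far.
    augmented.append((img_r.transpose(Image.FLIP_LEFT_RIGHT),
                      mask_r.transpose(Image.FLIP_LEFT_RIGHT)))

# Crops multiply the count further; crop image and mask identically.
w, h = image.size
box = (w // 8, h // 8, w - w // 8, h - h // 8)
augmented.append((image.crop(box), mask.crop(box)))

print(f"{len(augmented)} training examples from one photo")  # 9, from one capture
```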

I've watched Karpathy's presentation, and it all sounded very exciting at the time. But there is a difference between what someone says during an autonomy day for investors and what's on the car that can be tested.

Pedestrian detection is purely deep-neural-network based. It's the easiest thing we have for grading Karpathy's work on HW2/HW2.5.

I'm not aware of any presentation he's given that explains why it doesn't work yet.
 
When I was talking about the same data over and over, I was talking about not having a diverse enough dataset.
Eye on A.I.— How To Fix A.I.'s Diversity Crisis

I think you'd be surprised by how much training data is reused. For example, if I take a picture of an apple and segment around that apple, I can create 10+ images/labels from that one photo. I do that through things like rotating the image along with its labeling data, or cropping it. But at the end of the day, it's all based on that one photo.

I've watched Karpathy's presentation, and it all sounded very exciting at the time. But there is a difference between what someone says during an autonomy day for investors and what's on the car that can be tested.

Pedestrian detection is purely deep-neural-network based. It's the easiest thing we have for grading Karpathy's work on HW2/HW2.5.

I'm not aware of any presentation he's given that explains why it doesn't work yet.

(general silliness, not directed at you)

Huh, Tesla needs more pedestrian data?? I wonder where they could collect that? Where on earth could they find a lot of instances of cars and people in close quarters, and how could they get people to identify them while accepting low-speed travel (just in case)? :D
 
Unless your position is that NoA/AP is not autonomous, in which case what are you referring to by the zero in 2018? (And I'm sure you are aware other states/cities have looser or no reporting requirements.)

As I already said in the very post you quoted, AP is level 2. Note how Tesla only lists the number of accidents, not the number of disengagements.

Every time AP disengages or the user has to take over, it's a failure. With any more than zero failures they can't match Waymo's level 4, because they need a driver.

They are just trying to mislead you by not stating the number of disengagements, so that you think it's better than it is. But in reality the gulf between "didn't crash, with loud warning alarms and an attentive human driver ready to take over" and "drives itself without crashing" is vast.

Even the new Summon manages to crash at extremely low speeds. I wonder if that will push their accident rate up this year.
 
As I already said in the very post you quoted, AP is level 2. Note how Tesla only lists the number of accidents, not the number of disengagements.

Every time AP disengages or the user has to take over, it's a failure. With any more than zero failures they can't match Waymo's level 4, because they need a driver.

They are just trying to mislead you by not stating the number of disengagements, so that you think it's better than it is. But in reality the gulf between "didn't crash, with loud warning alarms and an attentive human driver ready to take over" and "drives itself without crashing" is vast.

Even the new Summon manages to crash at extremely low speeds. I wonder if that will push their accident rate up this year.
First off, I quoted your entire post:

Waymo has a lot more autonomous miles on the clock than Tesla. According to Tesla they did zero in 2018.
So your claim
As I already said in the very post you quoted, AP is level 2.
Is false (you are thinking of a different post)

To the matter at hand, is your position that:

  1. Waymo is ahead because Tesla has no autonomous driving (originally quoted post), based on CA reporting.
  2. Waymo is ahead because Tesla potentially has more events Waymo would classify as disengagements, including in areas where Waymo does not operate (parking lots, multiple states) (recent post).
  3. Other?
 
First off, I quoted your entire post:


So your claim

Is false (you are thinking of a different post)

To the matter at hand, is your position that:

  1. Waymo is ahead because Tesla has no autonomous driving (originally quoted post), based on CA reporting.
  2. Waymo is ahead because Tesla potentially has more events Waymo would classify as disengagements, including in areas where Waymo does not operate (parking lots, multiple states) (recent post).
  3. Other?

Tesla does not need to report to the CA DMV, because it can run cars in "shadow mode" while the car is still driven by a real driver. That is even more powerful than the test cars Waymo and others are running: you can do everything the other test cars do, and even more, without sending the final output to the steering wheel and pedals. You can still judge whether each decision is a success or failure by comparing it to the actual driver's actions. That's far more sophisticated and informative than just measuring "disengagements", a primitive idea thought up by government bureaucracy.
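
A toy sketch of that comparison loop; every name and threshold here is invented for illustration, since Tesla's actual shadow-mode pipeline isn't public:

```python
# Toy "shadow mode" comparator: the net plans every frame but never
# actuates; frames where it diverges from the human get flagged.
# All names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    steer: float   # steering angle, radians
    accel: float   # m/s^2; negative = braking

STEER_TOL = 0.05   # radians of disagreement tolerated
ACCEL_TOL = 1.0    # m/s^2 of disagreement tolerated

def diverges(net: Action, driver: Action) -> bool:
    """True when the net's plan differs meaningfully from what the driver did."""
    return (abs(net.steer - driver.steer) > STEER_TOL
            or abs(net.accel - driver.accel) > ACCEL_TOL)

upload_queue = []
frames = [
    (1, Action(0.00, -0.2), Action(0.01, -0.3)),  # net and driver agree
    (2, Action(0.30, -3.0), Action(0.02, -0.1)),  # net would have braked hard
]
for frame_id, net_plan, driver_did in frames:
    if diverges(net_plan, driver_did):
        upload_queue.append(frame_id)  # clip worth sending to the mothership

print(upload_queue)  # [2] -- only the disagreement gets flagged
```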

Good that you mentioned parking lots. Many people may not recognize it, but a parking lot is one of the most challenging areas for self-driving cars. There are only loose traffic rules in parking lots, and many cars don't even follow them. A lot of "body language" observation is needed to negotiate in there. That's the perfect scenario for using deep learning; no way lidar + mapping alone could work in there. Again, Karpathy's presentation briefly touched on that. There are always people with zero technical understanding who just like to argue over superficial stuff. My advice is not to waste time on him. I'm sure you know the saying about wrestling with a pig.