Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Seems like FSD is a complete crock

The thing is that no one knows whether this will actually work with high enough sensitivity and specificity. As I said, I believe it can work for animals and such, most of the time. However, for road debris I'm not convinced, given the massive array of shapes and sizes of debris that match pre-existing harmless marks, that exist all over the road today.

To be clear, I'm not convinced there is any set of sensors out there that will really be good enough in the near term. Obviously in an abstract sense, it is possible.

Karpathy mentioned during Autonomy Investor Day that Tesla is working on this. They are collecting images of road debris from the fleet and annotating them by hand and teaching the machine to recognize them. I am confident that it will work, yes. You just need a big enough sample that adequately covers all the variation of different sizes, colors etc of road debris.
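To make that workflow concrete, here is a minimal sketch of the annotate-and-train loop: labelled images in, a classifier out. Everything here is illustrative; the "images" are synthetic 8x8 patches standing in for hand-labelled fleet crops, and plain logistic regression stands in for whatever convolutional network Tesla actually uses.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for hand-annotated fleet images: "debris" patches get a bright
# blob in the middle, "clean road" patches are pure noise. Real training
# data would be camera crops labelled by human annotators.
def fake_patch(has_debris):
    patch = rng.normal(0.0, 1.0, size=(8, 8))
    if has_debris:
        patch[2:6, 2:6] += 3.0  # synthetic "object" in the patch
    return patch.ravel()

X = np.stack([fake_patch(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

# Logistic regression trained by gradient descent: the simplest possible
# stand-in for a real debris-detection network.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(300):
    z = np.clip(X @ w + b, -30, 30)
    p = 1 / (1 + np.exp(-z))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

preds = (X @ w + b > 0).astype(int)
accuracy = float(np.mean(preds == y))
```

The point is the workflow, not the model: a bigger, more varied pile of labelled examples (sizes, colors, lighting) is what pushes accuracy up.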

 
I am confident that it will work, yes. You just need a big enough sample that adequately covers all the variation of different sizes, colors etc of road debris.

Easy to say, much harder to do. I believe it wouldn't be incredibly difficult to implement a system that identifies every piece of road debris that exists using the NN framework; it's amazing how well they work!

The problem with that system would be all the identifications of things that were not road debris.

EDIT: would be interesting to see all their fleet data of identified road debris. I'd like to see ALL the pictures of road debris that the system identified as such!
 
Easy to say, much harder to do. I believe it wouldn't be incredibly difficult to implement a system that identifies every piece of road debris that exists using the NN framework; it's amazing how well they work!

The problem with that system would be all the identifications of things that were not road debris.

I am not saying it would be easy. It is indeed an incredibly difficult task, and it will take a long time. That's why Tesla has not finished yet, and why we get phantom braking: the camera vision gets confused. In fact, it is probably why other companies use lidar: lidar is highly accurate and well suited to detecting small objects like road debris. With lidar, you have an additional sensor to help detect things like road debris so the camera vision does not have to do it alone. But I don't think it is impossible.
 
The thing is that no one knows whether this will actually work with high enough sensitivity and specificity. As I said, I believe it can work for animals and such, most of the time. However, for road debris I'm not convinced, given the massive array of shapes and sizes of debris that match pre-existing harmless marks, that exist all over the road today.

To be clear, I'm not convinced there is any set of sensors out there that will really be good enough in the near term. Obviously in an abstract sense, it is possible.

How does the brute force pattern matching (“deep machine learning”) by throwing millions of pictures actually distill this into limited memory/number of neurons/connections? Won’t this run into physical memory problems fairly quickly?

Also, pattern matching is just a small part of object recognition and classification. There's a world of difference between a painting of an object and the actual object. It's about identifying the "danger" of running over an object. Personally, I doubt that a NN vision system without a vast expert-system database that evaluates what the NN "sees" will be good enough to reach a balance between phantom braking and object avoidance that exceeds a sober, alert human.
 
How does the brute force pattern matching (“deep machine learning”) by throwing millions of pictures actually distill this into limited memory/number of neurons/connections? Won’t this run into physical memory problems fairly quickly?

With the same neural network, the memory required to run images through the network after training 1 image is the same as after training on 1 million images, since all that changes in that example is the values of the weights of the network itself.

The question of whether using whatever network they are currently developing is enough or if Tesla will need a much larger network that will have different physical memory constraints is a separate question.
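That first point is easy to demonstrate: a network's memory footprint is fixed by its architecture, and training only rewrites the values stored in that fixed set of weights. A toy two-layer net (the layer sizes are arbitrary, nothing to do with Tesla's actual models):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_in, n_hidden, n_out):
    """A tiny two-layer net; parameter count is fixed by the architecture."""
    return {
        "W1": rng.normal(size=(n_in, n_hidden)),
        "W2": rng.normal(size=(n_hidden, n_out)),
    }

def param_count(model):
    return sum(w.size for w in model.values())

def sgd_step(model, x, y, lr=1e-3):
    """One simplified gradient step on squared error. Training changes the
    *values* in W1 and W2, never their shapes."""
    h = np.tanh(x @ model["W1"])
    err = h @ model["W2"] - y
    dh = (err @ model["W2"].T) * (1 - h**2)
    model["W2"] -= lr * np.outer(h, err)
    model["W1"] -= lr * np.outer(x, dh)

model = make_mlp(n_in=64, n_hidden=32, n_out=2)
before = param_count(model)  # 64*32 + 32*2 = 2112 weights

# "Train" on a thousand samples; the footprint never grows.
for _ in range(1000):
    sgd_step(model, rng.normal(size=64), rng.normal(size=2))

after = param_count(model)
```

Whether you feed this net one sample or a million, `before == after`: only the second question, picking a big enough architecture in the first place, touches physical memory.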
 
Karpathy mentioned during Autonomy Investor Day that Tesla is working on this. They are collecting images of road debris from the fleet and annotating them by hand and teaching the machine to recognize them. I am confident that it will work, yes. You just need a big enough sample that adequately covers all the variation of different sizes, colors etc of road debris.

They don't need to actually identify all the debris - they just need to classify debris as "safe to drive over" or "not safe to drive over". Maybe a couple more categories, so the car doesn't swerve into another car to avoid something it would normally dodge; better to drive over debris than to hit a car.

These are really the edge cases they have to deal with after they get to true FC (not the MVP they are working towards).
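A sketch of what that coarse classification could feed into. The category names and the action mapping below are purely hypothetical, not Tesla's actual labels; the point is that a handful of categories plus one tie-break rule already captures "better to drive over than hit a car":

```python
# Hypothetical mapping from a coarse debris category to a driving decision.
ACTIONS = {
    "safe_to_drive_over": "maintain",   # e.g. plastic bag, flat cardboard
    "avoid_if_clear": "lane_change",    # e.g. tire tread, small box
    "never_hit": "brake",               # e.g. another vehicle, pedestrian
}

def plan(category, adjacent_lane_clear):
    action = ACTIONS[category]
    if action == "lane_change" and not adjacent_lane_clear:
        # Better to drive over the debris than to hit the car beside you.
        return "maintain"
    return action

# Avoidable debris, but a car is in the next lane: drive over it.
assert plan("avoid_if_clear", adjacent_lane_clear=False) == "maintain"
# Same debris with a clear lane: go around it.
assert plan("avoid_if_clear", adjacent_lane_clear=True) == "lane_change"
```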
 
Does the car even autopark?

Does adaptive cruise control work well or phantom brake a lot?

I don’t care much about the car. I have cars. I want the tech Elon is selling. I think he’s duped me into wanting this thing.
You may be a troll, but I will bite. Buy a Model 3 with FSD. It is awesome already. It will change your perspective. It drives me 90% of the time. Keeps lane and speed way better than I do. Never phantom brakes for me. (I agree that the tech is more important than the car, and that is why I bothered responding. He has not duped you. The tech is great.)
 
You may be a troll, but I will bite. Buy a Model 3 with FSD. It is awesome already. It will change your perspective. It drives me 90% of the time. Keeps lane and speed way better than I do. Never phantom brakes for me. (I agree that the tech is more important than the car, and that is why I bothered responding. He has not duped you. The tech is great.)
Same here. I'm on AP1, by the way.
 
They don't need to actually identify all the debris - they just need to classify debris as "safe to drive over" or "not safe to drive over". Maybe a couple more categories, so the car doesn't swerve into another car to avoid something it would normally dodge; better to drive over debris than to hit a car.

These are really the edge cases they have to deal with after they get to true FC (not the MVP they are working towards).
One would expect those labeling type functions to get even better with faster hardware.
 
Arguably, it cut off a semi around 7:45. It needs to understand closing speeds and give some extra go pedal in such scenarios.

I had not noticed. It was kinda hard to see in the rear view camera. But certainly, this would absolutely be something that Tesla needs to tweak to make NOA better. Hopefully, Tesla will be adding more driving policy going forward to make NOA react more intelligently in these situations.
 
Besides 3 unique but repeatable errors NOA presents on every single commute, my biggest concern right now is NOA's incessant desire to drive in other people's blind spots when there is plenty of room to just speed up a bit to pass and resume speed, or slow down a bit and resume speed, to NOT be in other drivers' blind spots. This one behavior drives my passengers crazy enough that I have to override, getting in front or backing off a bit, to fix it.
 
Besides 3 unique but repeatable errors NOA presents on every single commute, my biggest concern right now is NOA's incessant desire to drive in other people's blind spots when there is plenty of room to just speed up a bit to pass and resume speed, or slow down a bit and resume speed, to NOT be in other drivers' blind spots. This one behavior drives my passengers crazy enough that I have to override, getting in front or backing off a bit, to fix it.

Yes! This issue is one that Tesla could relatively easily fix, too. Would improve their safety stats most likely. So probably eventually they'll get to it.
 
Perception is hard but it is doable. It just requires "grinding" through machine learning: you collect a ton of images, then annotate the images by hand and feed them into your computer until it "learns" what the images are. This works with everything. You can collect a lot of images of potholes and feed them into the machine and it will learn to recognize potholes. You can collect a lot of images of various road debris and teach the machine to recognize debris. You can use this method for literally everything, debris, potholes, deer, road markings etc... It's what Tesla is doing but it takes time. In fact, Tesla has already used this method successfully with lane lines, cars, trucks, pedestrians, cyclists, traffic lights and probably more. It's just not finished yet.

Sure, you can teach the computer to identify objects, but that doesn't make it a good driver. When you see a deer, for instance, you don't really want to know whether it is a deer or some other animal. You want to know what it is going to do. Whether it is just standing there eating and doing its thing, or running towards the road, makes a big difference. You can upload all the images you want, but you won't teach the computer to anticipate situations that way.
 
Sure, you can teach the computer to identify objects, but that doesn't make it a good driver. When you see a deer, for instance, you don't really want to know whether it is a deer or some other animal. You want to know what it is going to do. Whether it is just standing there eating and doing its thing, or running towards the road, makes a big difference. You can upload all the images you want, but you won't teach the computer to anticipate situations that way.

Not true. You can teach computers to predict movement. Tesla already did it for predicting when cars will cut in front of you. The same technique could be applied to deer. Basically, you feed video clips into the computer and teach it to recognize the signs that the deer is about to run in front of you.
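As a sketch of what such a predictor consumes and produces: recent lateral positions of a neighbouring car go in, a cut-in flag comes out. The hand-written drift threshold below is a stand-in for whatever learned model Tesla uses, and all the numbers are made up for illustration.

```python
# Toy cut-in predictor: given recent lateral offsets of a neighbouring car
# (metres from our lane centre, one sample per frame), flag a likely cut-in
# when the car is drifting toward our lane fast enough.
def predict_cut_in(lateral_offsets, drift_threshold=0.15):
    # Per-frame drift toward our lane: positive when the offset shrinks.
    drifts = [a - b for a, b in zip(lateral_offsets, lateral_offsets[1:])]
    mean_drift = sum(drifts) / len(drifts)
    return mean_drift > drift_threshold

# Car steadily drifting from 3.5 m away to 2.3 m away over six frames:
assert predict_cut_in([3.5, 3.3, 3.0, 2.8, 2.5, 2.3])
# Car holding its lane:
assert not predict_cut_in([3.5, 3.5, 3.4, 3.5, 3.5, 3.5])
```

A trained model would replace the threshold with patterns learned from labelled video clips (blinker state, wheel angle, closing speed), but the input/output shape is the same.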
 
Tesla already did it for predicting when cars will cut in front of you.

Is there any evidence they are doing this yet? I don't have enough opportunities to use NoA to know whether it ever does this. Certainly last time I used it over the weekend it seemed to be exhibiting its standard "shocked" behavior when someone finally completely moved into the lane of travel in front of me. As far as I can tell there is still zero anticipation. I would have eased off 2 seconds or so before the car did.
 
Is there any evidence they are doing this yet? I don't have enough opportunities to use NoA to know whether it ever does this. Certainly last time I used it over the weekend it seemed to be exhibiting its standard "shocked" behavior when someone finally completely moved into the lane of travel in front of me. As far as I can tell there is still zero anticipation. I would have eased off 2 seconds or so before the car did.

I had a situation just today where a pickup truck passed me and veered a little over the middle line, and my car on AP braked automatically. So yes, I have seen the cut-in prediction work.