Tesla FSD/Autopilot under attack

You definitely need cameras. I'm not sure how much labeling is done on LIDAR data. Obviously you can feed both LIDAR data and camera data into the same perception neural net. I'm not an expert, but certain types of decision-making are very simple (e.g. don't run into the side of the semi truck trailer that the vision neural net didn't see!).
I agree. The problem is that you need the semi truck properly labeled and located by both the LIDAR and the camera streams before giving instructions to avoid it.
I'll also point out that Tesla has a neural net that creates a Vidar data stream (the same type of depth data that LIDAR produces, just with lower accuracy), so they'll have to figure out how to merge those two streams for decision-making too.
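To be clear, merging two depth streams isn't conceptually exotic; the textbook move is to weight each source by how much you trust it. Here's a minimal sketch of inverse-variance fusion (the grid size, the variance numbers, and the assumption that both streams are already aligned to the same camera grid are all invented for illustration, not how Tesla actually does it):

```python
# Minimal sketch of inverse-variance fusion of two depth streams.
# Assumes both are already resampled onto the same camera-aligned grid;
# the shapes and variance numbers are invented for illustration.
import numpy as np

H, W = 4, 6                                   # tiny grid for the example
vision_depth = np.full((H, W), 25.0)          # dense but noisier estimate (m)
lidar_depth  = np.full((H, W), np.nan)        # sparse: most cells have no return
lidar_depth[1, 2] = 24.2                      # one LIDAR return on the object

vision_var = np.full((H, W), 4.0)             # assumed vision depth variance (m^2)
lidar_var  = np.full((H, W), 0.01)            # assumed LIDAR depth variance (m^2)

# Weight each source by 1/variance; cells with no LIDAR return get zero weight.
w_vision = 1.0 / vision_var
w_lidar  = np.where(np.isnan(lidar_depth), 0.0, 1.0 / lidar_var)
lidar_filled = np.nan_to_num(lidar_depth, nan=0.0)

fused = (w_vision * vision_depth + w_lidar * lidar_filled) / (w_vision + w_lidar)
print(fused[1, 2])   # pulled almost entirely to the LIDAR value (~24.2)
print(fused[0, 0])   # vision-only cell stays at 25.0
```

The hard part in practice is the calibration, alignment, and timing between the sensors, not the arithmetic.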
People often say LIDAR causes all sorts of problems, but if you look at the actual collisions that AVs with LIDAR have been involved in, they always seem to have nothing to do with LIDAR.
I don't disagree that LIDARs can work just fine with cameras, but that will require significantly more resources. If the system can be simplified, we should always go for it.
 
I agree. The problem is that you need the semi truck properly labeled and located by both the LIDAR and the camera streams before giving instructions to avoid it.
Not sure why it has to be labeled; just don't run into large objects. There's no need to know what the object is. Someone is probably going to bring up the plastic bag example, but I'm pretty sure you don't need to train a neural net to tell the difference between a plastic bag and a giant object.
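To illustrate, a dumb "is there something big in my path?" check over a LIDAR point cloud doesn't need to know what the object is at all. A toy sketch with made-up points and thresholds (a real system would also have to handle ground slope, noise, and overhanging structures):

```python
# Toy "is something big in my path?" check over a LIDAR point cloud.
# No classification at all; the points and thresholds are made up.
import numpy as np

# Columns: x forward, y left, z up (metres), ego vehicle at the origin.
points = np.array([
    [12.0,  0.3, 1.1],    # large return directly ahead, ~1.1 m above the road
    [12.2, -0.4, 1.4],
    [30.0,  5.0, 0.2],    # off to the side, not in the corridor
    [ 8.0,  0.1, 0.02],   # essentially on the road surface, ignore
])

CORRIDOR_HALF_WIDTH = 1.5   # roughly half a lane
MIN_HEIGHT = 0.3            # skip returns near the road surface
MAX_RANGE = 60.0

in_path = (
    (points[:, 0] > 0.0) & (points[:, 0] < MAX_RANGE)
    & (np.abs(points[:, 1]) < CORRIDOR_HALF_WIDTH)
    & (points[:, 2] > MIN_HEIGHT)
)

if in_path.sum() >= 2:      # require a couple of returns to reject noise
    nearest = points[in_path, 0].min()
    print(f"Obstacle in path at ~{nearest:.1f} m -- brake or plan around it")
```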
I don't disagree that LIDARs can work just fine with cameras, but that will require significantly more resources. If the system can be simplified, we should always go for it.
Of course, that's why a lot of companies are getting rid of RADAR for their driver assist systems. Computer vision has gotten good enough that cheap RADAR doesn't add enough to justify the cost. Unfortunately, depth information from computer vision systems has not reached parity with humans (note how often FSD Beta users hit curbs). Right now the only technology that can match human depth performance is LIDAR, though that certainly may change in the future.
 
I guess I'll take #3. LIDAR data is very simple and far easier to process than camera data (Samsung makes a robot vacuum that uses LIDAR!). It does not require neural networks at all.
That depends entirely on the use case. Camera data is easy to process if all you need to do is adjust the contrast. Far harder if you are trying to figure out if the object "over there" is a car or not.

In fact, LIDAR data is far less voluminous, in that the number of data points is a couple of orders of magnitude less than camera data. Also, since LIDAR cannot identify an object (it's far too low resolution), it is essentially auxiliary data. It tells you "something in this direction is this far away and is moving with this velocity," but it can't tell you much more without the assistance of an NN to integrate and recognize boundaries (which can only really be done in conjunction with camera data).
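Rough numbers, using assumed specs (something like a 64-beam spinning LIDAR versus a Tesla-like 8-camera setup; the exact figures are illustrative, only the scale matters):

```python
# Back-of-envelope data volumes with assumed (not measured) sensor specs:
# a 64-beam LIDAR spinning at 10 Hz with 1024 azimuth steps per revolution,
# vs. eight 1280x960 cameras at 36 fps.
lidar_points_per_sec  = 64 * 1024 * 10            # ~0.7 M points/s
camera_pixels_per_sec = 8 * 1280 * 960 * 36       # ~354 M pixels/s

print(f"LIDAR:  {lidar_points_per_sec / 1e6:6.1f} M points/s")
print(f"Camera: {camera_pixels_per_sec / 1e6:6.1f} M pixels/s")
print(f"Ratio:  ~{camera_pixels_per_sec / lidar_points_per_sec:.0f}x")
```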

I suggest you bolt your robot vacuum to the front of your car and see how well it navigates city streets.
 
That depends entirely on the use case. Camera data is easy to process if all you need to do is adjust the contrast. Far harder if you are trying to figure out if the object "over there" is a car or not.
Obviously we're talking about computer vision in this forum, not image processing.
In fact, LIDAR data is far less voluminous, in that the number of data points is a couple of orders of magnitude less than camera data. Also, since LIDAR cannot identify an object (it's far too low resolution), it is essentially auxiliary data. It tells you "something in this direction is this far away and is moving with this velocity," but it can't tell you much more without the assistance of an NN to integrate and recognize boundaries (which can only really be done in conjunction with camera data).
Yes, it produces less data (that was part of the misinformation you were asking about). However, the depth data it produces cannot yet be reliably reproduced by computer vision.
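One way to see why: any triangulation-style depth estimate goes as Z = f·B/d, so a fixed pixel-level matching error grows quadratically with range. The baseline, focal length, and pixel error below are invented illustration values (and monocular depth nets have their own error behavior), but the trend is the point:

```python
# Triangulated depth: Z = f * B / d, so a fixed disparity error dd gives a
# depth error of roughly Z^2 * dd / (f * B), i.e. quadratic in range.
# Focal length, baseline, and pixel error are assumed illustration values.
f_px = 1000.0   # focal length in pixels
B_m  = 0.3      # baseline between the two views, metres
dd   = 0.5      # disparity matching error, pixels

for Z in (10, 30, 60, 100):                       # range in metres
    err = Z ** 2 * dd / (f_px * B_m)
    print(f"range {Z:3d} m -> depth error ~ {err:5.2f} m")
```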
I suggest you bolt your robot vacuum to the front of your car and see how well it navigates city streets.
Probably about as well as Smart Summon. :p
 
Certainly, you have no clue. Ford has had Co-Pilot360 for a long time; it allowed drivers to go hands-free for short periods even on country roads, and it had no camera monitoring of the driver.
This video may help you learn more about the real functionality of Ford's autopilot.

I think the confusion is with Co-Pilot360.

I assumed you meant the system that ships with BlueCruise, as that's the top-tier system we should be comparing with Autopilot. But I think that's actually called Co-Pilot360 Active 2.0.

Technically, Co-Pilot360 is only a lane-keep assist package, and that's not lane steering, so I doubt that's the one you meant.

I think what you really meant is Co-Pilot360 Assist+, as that offers lane steering.

In any case, the top systems these days (BlueCruise and Super Cruise) use proper driver monitoring.

There are lane-steering systems that rely on torque sensors, as that used to be the way to do it, but those are antiquated systems.
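For anyone unfamiliar with why torque sensing is considered antiquated: the car only knows your hands are there when it feels a small counter-torque, so a light grip looks the same as no hands at all. A toy sketch of that kind of check (the threshold and timeout are invented numbers, not any manufacturer's calibration):

```python
# Toy version of a torque-based "hands on wheel" check: the car only registers
# a hand when it feels a small steering torque within some window.
# The threshold and timeout are invented numbers, not any real calibration.
HANDS_ON_TORQUE_NM = 0.3    # assumed minimum detectable driver torque
NAG_TIMEOUT_S = 30.0        # assumed time without torque before a warning

def needs_nag(torque_samples_nm, sample_dt_s=0.1):
    """Return True if no hands-on torque was felt for a full timeout window."""
    silent = 0.0
    for torque in torque_samples_nm:
        if abs(torque) >= HANDS_ON_TORQUE_NM:
            silent = 0.0               # torque felt: assume hands are on
        else:
            silent += sample_dt_s
        if silent >= NAG_TIMEOUT_S:
            return True
    return False

# A light, torque-free grip looks exactly like no hands at all:
print(needs_nag([0.0] * 400))          # 40 s of zero torque -> True
```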

As to how well BlueCruise performs, it's tough to tell. The video you posted never shows the "hands free" icon that other videos show, but I don't know why that is. Maybe the tester wasn't using it right, or maybe it got an update since the video was taken. Reading some posts on the Mach-E forum, it seems like BlueCruise still hasn't shipped, and most reviews suggest it purposely hands control back to the driver during curves. That's notable because those are the same kinds of curves the non-hands-free systems handle just fine. So it seems like Ford is being overly cautious with their hands-free version.
 
That depends entirely on the use case. Camera data is easy to process if all you need to do is adjust the contrast. Far harder if you are trying to figure out if the object "over there" is a car or not.

In fact, LIDAR data is far less voluminous, in that the number of data points is a couple of orders of magnitude less than camera data. Also, since LIDAR cannot identify an object (it's far too low resolution), it is essentially auxiliary data. It tells you "something in this direction is this far away and is moving with this velocity," but it can't tell you much more without the assistance of an NN to integrate and recognize boundaries (which can only really be done in conjunction with camera data).

I suggest you bolt your robot vacuum to the front of your car and see how well it navigates city streets.

No one in the autonomous driving world goes with Lidar without having cameras.

The simple fact is the two sensing methods are complementary.

The Lidar does a great job at telling you that something is there, and how fast it's traveling. It also works great at night.

The camera can tell you what it is, as long as the neural network has been trained on it. Single-sensor camera systems don't do so great at distance estimation, which is why you'll notice cars bouncing around on the UI. This is especially true of semi-trucks.
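A back-of-envelope way to see why: if range is inferred from apparent size, roughly Z ≈ f·H/h_pixels, then a one-pixel jitter in the detected box height, or a wrong guess at the object's true size, shifts the estimate by metres. Illustration only, with assumed focal length and trailer height; this isn't how Tesla's network actually regresses depth:

```python
# Range from apparent size: Z ~ f * H_real / h_pixels.
# Focal length and trailer height are assumed illustration values.
f_px   = 1000.0   # focal length in pixels
H_real = 4.0      # guessed trailer height in metres (real trailers vary)

for h_px in (41, 40, 39):                         # one-pixel jitter in the box
    Z = f_px * H_real / h_px
    print(f"box height {h_px} px -> estimated range {Z:5.1f} m")
# A one-pixel change moves the estimate by ~2.5 m at this range, and a 0.5 m
# error in the guessed trailer height shifts it by more than 10 m.
```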

If you simply threw a bunch of junk down on the freeway, what are the odds of a Tesla avoiding it or braking for it? A couple of months ago I was driving on a mountain road at night with Autopilot, and I felt like it was doing just what I wanted it to do. I wanted it to supplement my own vision, as it was one of those dark nights with poor lane markings. But then I noticed a huge rock appearing on the right side of the lane. AP made no attempt to maneuver around the rock, so I took over and made sure to avoid it.

Now, I had no expectation that it would detect the rock. The current version isn't trained to recognize a rock, and doesn't seem to have any generic object detection. But if it had frontal LIDAR, not only would it have detected the rough shape/outline of the rock, it's also likely that other Teslas would have detected rocks to the point where the vision system was trained on thousands of rocks on the road. So it's likely both systems would have detected the rock and avoided it.
 
Could you please name a couple "basic safety issues"? Thank you.

I just got the most modern, most updated FSD beta 10.2:

The system plans to go straight, but it erroneously uses the left-turn-only lane to do that:

Lett_Turn_Only.jpg


Here's another intersection:

Z6SOIHk.jpg



The steering wheel sometimes swerves really badly for what I thought was "no reason." In this case, it just wanted to catch up with its planned route, but it was too late, and it would have hit the pole on the left if I had not manually taken over:



Left_Toward_Pole.jpg


I expect these imperfections because I signed up as a beta tester. However, many consumers don't know that they are beta testers and might not be ready to react in time.
 