Autonomous Car Progress

Just my intuition, but it seems a lot of these "small" scale FSD approaches are vulnerable to overfitting their models and driving policies.

Maybe. Hard to say.

But it's not like Cruise has fitted their models and driving policies to only 1,000 autonomous miles; that would be overfitting. Cruise has millions of miles of real-world data, and probably even more simulation data, so the risk of overfitting should be lower. In fact, that is one reason they continue to test in SF: if they encounter situations the car does not handle correctly, they adjust their models accordingly to prevent overfitting. So I would say they are probably aware of this problem and are trying to address it by collecting more data.
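To make the overfitting concern concrete, here is a minimal sketch of scoring a driving stack on scenarios held out by city, so gains fitted to San Francisco data don't hide regressions elsewhere. Everything here (the `policy.run()` interface, the scenario logs) is hypothetical, not Cruise's actual tooling:

```python
# Hypothetical sketch: score a driving policy per city so that tuning on
# San Francisco data doesn't mask regressions in held-out locations.
from collections import defaultdict

def pass_rate_by_city(policy, scenarios):
    """scenarios: iterable of (city, scenario) pairs.
    policy.run(scenario) is assumed to return True when the scenario is
    handled without a safety intervention (placeholder interface)."""
    passed, total = defaultdict(int), defaultdict(int)
    for city, scenario in scenarios:
        total[city] += 1
        passed[city] += policy.run(scenario)
    return {city: passed[city] / total[city] for city in total}

# A large gap between the home city and held-out cities would be a sign the
# models and driving policies are overfitting to one location.
```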
 
What do people here think about how Waymo is doing? Some people are suggesting they have hit a wall on self-driving.

They seem to have scaled back their ambitions massively; you don't hear any talk of timelines for bringing anything to market anymore. They seem to be planning to use backup drivers for years to come, and they don't even seem to have plans to move outside of Phoenix. I also don't think there are any recent demo videos showing off recent progress.

Have they hit a wall? Have they found parts of the driving process that are just not amenable to current ML techniques?

Opinions?
 
What do people here think about how Waymo is doing? Some people are suggesting they have hit a wall on self-driving.

They seem to have scaled back their ambitions massively; you don't hear any talk of timelines for bringing anything to market anymore. They seem to be planning to use backup drivers for years to come, and they don't even seem to have plans to move outside of Phoenix. I also don't think there are any recent demo videos showing off recent progress.

Have they hit a wall? Have they found parts of the driving process that are just not amenable to current ML techniques?

Waymo is still considered the leader in FSD. They have over 20M autonomous miles. They have deployed a robotaxi service in Phoenix, and they have removed the safety drivers for some public rides. And prior to the COVID-19 pandemic, they were testing in over 25 cities. So I think they are doing pretty well.

However, the COVID-19 pandemic and the resulting shelter-in-place order did temporarily pause Waymo's operations, so that can make it appear as if they are not doing anything.

It should also be noted that achieving the last 10% to get to L5 is 10,000x harder than achieving L4 because of all the seemingly never-ending edge cases to solve.

Basically, achieving FSD that works in one location (L4) is a lot easier than achieving FSD that works everywhere, all the time and with no restrictions (L5).

So in that sense, yes, there is a "wall" between L4 and L5 that is hard to climb.
 
I wonder, outside of the robotaxi model, or perhaps a time when most cars on the road are robotaxis / autonomous, will it be feasible to have all these autonomous cars using Lidar? You could easily have two types of Lidar interference: 1) your car's Lidar reading a different car's Lidar pulse reflecting off the same object, giving an incorrect time of flight / distance, and 2) another car's Lidar pulse directly hitting and possibly blinding your Lidar's sensor.

I suppose at that point, Vehicle-to-Vehicle communication could coordinate which vehicles are actively sensing, to cut down on interference.
 
I wonder, outside of the robotaxi model, or perhaps a time when most cars on the road are robotaxis / autonomous, will it be feasible to have all these autonomous cars using Lidar? You could easily have two types of Lidar interference: 1) your car's Lidar reading a different car's Lidar pulse reflecting off the same object, giving an incorrect time of flight / distance, and 2) another car's Lidar pulse directly hitting and possibly blinding your Lidar's sensor.

If there were a large number of cars with spinning lidar on the roof, it could potentially be a problem. But I think it is less of a problem with solid-state lidar in the bumpers, since the field of view is smaller and the range is shorter. Obviously, we are not there yet, since the number of lidar-equipped cars on the road is very small.
 
Cruise just posted this 17-minute video from one of their presentations, discussing how they use machine learning to make path predictions. It's very informative:


I think Cruise is further along in self driving than Tesla, but their approach is wrong, so they will never get there.

1. Test data is gathered primarily in San Francisco. This leaves out an incredible number of corner cases that they would have to deal with elsewhere on the continent or around the world.
2. Use of simulation. This should not be done, IMHO. That's like saying a dude is qualified to fly a 747 after spending months on Microsoft Flight Simulator.
3. Not using vision data. Their way of handling path prediction of other cars at an intersection does not appear to take into account turn signals or the direction of their front wheels.
4. They don't seem to be running a classifier on their front-facing video stream.
 
2. Use of simulation. This should not be done, IMHO. That's like saying a dude is qualified to fly a 747 after spending months on Microsoft Flight Simulator.

I think you might be misunderstanding the use of simulation. Cruise, like other companies, uses simulation to test the software prior to testing in the real world. They still do a lot of real-world testing. They are not just running simulations and saying "it worked in the sim, so our FSD is done."
 
I wonder, outside of the robotaxi model, or perhaps a time when most cars on the road are robotaxis / autonomous, will it be feasible to have all these autonomous cars using Lidar? You could easily have two types of Lidar interference: 1) your car's Lidar reading a different car's Lidar pulse reflecting off the same object, giving an incorrect time of flight / distance, and 2) another car's Lidar pulse directly hitting and possibly blinding your Lidar's sensor.

I suppose at that point, Vehicle-to-Vehicle communication could coordinate which vehicles are actively sensing, to cut down on interference.

This is something I looked into a while ago, and the consensus is that yes, a LIDAR array can accidentally intercept pulses from a different LIDAR array, but the rate of pulses is so high that interference can almost always be filtered out.

Some arrays operate upwards of 200,000 pulses per second, so if you get 100 pulses saying an object is 10 feet away and 10,000 pulses saying an object is 100 feet away, you can filter out the 10-foot readings as interference.
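As a rough illustration of that filtering idea (the counts and distances come from the post above; the consensus check and tolerance are just assumptions for the sketch):

```python
# Rough sketch of consensus filtering: stray returns that disagree with the
# bulk of the pulses get thrown away. Units and counts are from the post;
# the 2-foot tolerance is an arbitrary assumption.
from statistics import median

def drop_interference(returns_ft, tolerance_ft=2.0):
    consensus = median(returns_ft)
    return [r for r in returns_ft if abs(r - consensus) <= tolerance_ft]

returns = [100.0] * 10_000 + [10.0] * 100   # 10,000 real returns, 100 interfering ones
clean = drop_interference(returns)
print(len(clean))   # 10000 -- the 10-foot interference is filtered out
```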
 
I wonder, outside of the robotaxi model, or perhaps a time when most cars on the road are robotaxis / autonomous, will it be feasible to have all these autonomous cars using Lidar? You could easily have two types of Lidar interference: 1) your car's Lidar reading a different car's Lidar pulse reflecting off the same object, giving an incorrect time of flight / distance, and 2) another car's Lidar pulse directly hitting and possibly blinding your Lidar's sensor.

I suppose at that point, Vehicle-to-Vehicle communication could coordinate which vehicles are actively sensing, to cut down on interference.

I think you make an excellent point. If lidar solutions are used at scale, and a bunch of them are on a highway, they would be effectively driving blind.

Vehicle to Vehicle communication should never be used, IMHO, for there is always the possibility of bad actors, or bad software written by other companies.
 
But I think it is less of a problem with solid-state lidar in the bumpers, since the field of view is smaller and the range is shorter.

Long-range Lidar will still be needed for a Lidar-dependent driving system.

This is something I looked into a while ago, and the consensus is that yes, a LIDAR array can accidentally intercept pulses from a different LIDAR array, but the rate of pulses is so high that interference can almost always be filtered out.

Some arrays operate upwards of 200,000 pulses per second, so if you get 100 pulses saying an object is 10 feet away and 10,000 pulses saying an object is 100 feet away, you can filter out the 10-foot readings as interference.

That's good to hear.

Doing some math here: at a 20 Hz rotation rate (1,200 rpm), that's 10,000 pulses per rotation (360°). Velodyne Lidars have a 300-meter range, which gives a circumference of 1,885 m, or 5.3 pulses per meter (a pulse every 19 cm). That's obviously at the extreme range. At 50 meters, the circumference is 314 m, or 32 pulses per meter (a pulse every 3.14 cm).

Obviously, multiple samples over the same object during the course of that second should average things out fairly well. Just an interesting mental exercise.
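For anyone who wants to check the arithmetic, here is the same back-of-the-envelope calculation in a few lines (the 200,000 pulses/s and 20 Hz figures come from the posts above):

```python
# Reproducing the pulse-spacing estimate above.
import math

pulses_per_second = 200_000                 # upper-end figure quoted earlier
rotation_hz = 20                            # 20 Hz = 1,200 rpm
pulses_per_rotation = pulses_per_second / rotation_hz   # 10,000

for radius_m in (300, 50):                  # max range vs. a nearer target
    circumference_m = 2 * math.pi * radius_m
    pulses_per_meter = pulses_per_rotation / circumference_m
    print(f"{radius_m} m: {pulses_per_meter:.1f} pulses/m, "
          f"one every {100 / pulses_per_meter:.1f} cm")
# 300 m: 5.3 pulses/m, one every 18.8 cm
# 50 m: 31.8 pulses/m, one every 3.1 cm
```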
 
... Vehicle to Vehicle communication should never be used, IMHO, for there is always the possibility of bad actors, or bad software written by other companies.

There's always the possibility of failure in any system; if you demand perfection, you'll never get there. Saying that V2V communication should never be used because there's a possibility of failure misses the whole point: the objective is not perfection, but merely to be safer than human drivers.

"The perfect is the enemy of the good."

V2V communication will provide an added layer of safety. Of course, that's far in the future when most or all cars are autonomous.
 
Long-range Lidar will still be needed for a Lidar-dependent driving system.



That's good to hear.

Doing some math here: at a 20 Hz rotation rate (1,200 rpm), that's 10,000 pulses per rotation (360°). Velodyne Lidars have a 300-meter range, which gives a circumference of 1,885 m, or 5.3 pulses per meter (a pulse every 19 cm). That's obviously at the extreme range. At 50 meters, the circumference is 314 m, or 32 pulses per meter (a pulse every 3.14 cm).

Obviously, multiple samples over the same object during the course of that second should average things out fairly well. Just an interesting mental exercise.
You're missing the most important number. The pulse length looks to be only about 10 ns (10 billionths of a second), and with a 100-meter range the receiver is only listening for the return within a roughly 670 ns round-trip window. I'd imagine this makes significant interference from other LIDAR devices all but impossible.
Just about every new car has RADAR, and that's not a problem either.
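A quick sanity check of those timing figures, using only the speed of light plus the pulse length and range from the post:

```python
# Sanity-checking the timing argument: the receiver only listens for the
# round-trip time out to its maximum range, and each pulse is tiny within it.
C = 299_792_458                           # speed of light, m/s

pulse_length_s = 10e-9                    # 10 ns pulse, per the post
max_range_m = 100
listen_window_s = 2 * max_range_m / C     # out to 100 m and back: ~667 ns

print(f"listening window ~{listen_window_s * 1e9:.0f} ns")
print(f"a single pulse spans ~{pulse_length_s / listen_window_s:.1%} of that window")
```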
 
2. Use of simulation. This should not be done, IMHO. That's like saying a dude is qualified to fly a 747 after spending months on Microsoft Flight Simulator.

I personally think the opposite. If I were running a self-driving company, I would concentrate everything on making sure the sim was good enough.

Training in a simulation is so much more efficient. You can train the equivalent of 100 years of driving in days.
All the real-world miles my company drove would be about creating a better simulation, not training directly.
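Back-of-the-envelope, the "100 years in days" claim only needs modest assumptions about parallelism; the speed-up and worker count below are illustrative guesses, not anyone's published figures:

```python
# Illustrative arithmetic for simulated-driving throughput (assumed numbers).
sim_speedup = 10        # assume each sim worker runs 10x faster than real time
workers = 1_000         # assume 1,000 workers simulating in parallel

target_years = 100
wall_clock_hours = target_years * 365 * 24 / (sim_speedup * workers)
print(f"{wall_clock_hours:.0f} hours (~{wall_clock_hours / 24:.1f} days)")  # ~3.7 days
```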
 
Training in a simulation is so much more efficient. You can train the equivalent of 100 years of driving in days.
All the real-world miles my company drove would be about creating a better simulation, not training directly.

There are varying degrees of simulation. The one we're talking about is computer-generated graphics, but Karpathy has also talked about validating changes to the neural net using a test suite of simulations generated by stitching together real inputs from the cameras. Simulation is an important part of validating self-driving code.
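As a sketch of what that kind of test suite might look like, with made-up model and clip interfaces (this is not Tesla's actual pipeline):

```python
# Hypothetical regression suite: replay logged camera clips through an old and
# a new model version and flag any clip where the new model scores worse.
def find_regressions(old_model, new_model, clips, metric):
    """clips: list of (name, frames, ground_truth) built from logged camera data.
    metric(prediction, ground_truth) returns a higher-is-better score."""
    regressions = []
    for name, frames, truth in clips:
        old_score = metric(old_model(frames), truth)
        new_score = metric(new_model(frames), truth)
        if new_score < old_score:
            regressions.append((name, old_score, new_score))
    return regressions
```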
 
"The perfect is the enemy of the good."
Ironically, this statement applies to adding extra sensors (like V2V) to achieve FSD.

No, it does not, because the statement is about the standard a system is held to, not about which methods are used. Someone said that V2V should never be used because it's not perfect. I replied that nothing is perfect, and that if we only use perfect systems we'll never reach our goal. It doesn't apply the other way around, because the object of V2V is not to achieve perfection but merely to provide additional useful data.
 
You're missing the most important number. The pulse length looks to be only about 10 ns (10 billionths of a second), and with a 100-meter range the receiver is only listening for the return within a roughly 670 ns round-trip window. I'd imagine this makes significant interference from other LIDAR devices all but impossible.
Just about every new car has RADAR, and that's not a problem either.

Good point.

There are varying degrees of simulation. The one we're talking about is computer-generated graphics, but Karpathy has also talked about validating changes to the neural net using a test suite of simulations generated by stitching together real inputs from the cameras. Simulation is an important part of validating self-driving code.

Something else to consider for simulation data: accurately annotated data for training a vision system. For example, you can have an object in a simulation tagged as a van and train the vision NN to recognize it as a van. Similarly, you can take the exact distance to the object from the simulation data and train the NN to estimate that distance visually. You can't get exact data like that from the real world without a Lidar layer (there are datasets like this). Though I'd guess it's less of an issue for Cruise / Waymo at this point, especially with Lidar.
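A minimal sketch of that idea, with a purely hypothetical simulator interface (none of this is Cruise's or Waymo's actual API):

```python
# Hypothetical: a simulator can emit a camera frame together with exact labels
# (class, distance, bounding box) for free, which real-world footage can't
# provide without a lidar layer or manual annotation.
def make_training_sample(sim):
    frame = sim.render_camera()                   # rendered RGB image
    labels = [
        {
            "class": obj.category,                # e.g. "van", known exactly
            "distance_m": obj.distance_to_ego(),  # exact range from sim state
            "bbox": obj.projected_bbox(),         # pixel-space bounding box
        }
        for obj in sim.visible_objects()
    ]
    return frame, labels
```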