
Waymo tests imitation learning for path planning

Yes, this is a good point. With supervised learning, hand labelling is a bottleneck to building a usable training dataset, much more so than simply collecting the data.

That is not my point at all. Using simulation to build the data set means no need for hand labelling. The real effort is to make the simulation accurate.

For example, none of the cars in this video seem to have a fixed or predictable trajectory. It is probably easier to define exact paths in the simulation for this case rather than use dynamically computed behaviours (e.g. GTA5).
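
To make that concrete, here is a minimal sketch of what “defining exact paths” might look like: each simulated vehicle replays hand-authored, timestamped waypoints instead of running a behaviour model. Everything here is illustrative; none of it is Waymo’s or GTA5’s actual API.

```python
# Minimal sketch (illustrative only): a scripted agent replays
# hand-authored timestamped waypoints instead of computing its
# behaviour dynamically.
import bisect

class ScriptedAgent:
    def __init__(self, waypoints):
        # waypoints: list of (t_seconds, x_metres, y_metres), sorted by t
        self.times = [w[0] for w in waypoints]
        self.points = [(w[1], w[2]) for w in waypoints]

    def pose_at(self, t):
        """Linearly interpolate the authored path at time t."""
        i = bisect.bisect_right(self.times, t)
        if i == 0:
            return self.points[0]
        if i == len(self.times):
            return self.points[-1]
        t0, t1 = self.times[i - 1], self.times[i]
        (x0, y0), (x1, y1) = self.points[i - 1], self.points[i]
        a = (t - t0) / (t1 - t0)
        return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

# A car that merges left over four seconds, exactly as authored:
cut_in = ScriptedAgent([(0.0, 0.0, 3.5), (2.0, 30.0, 3.5), (4.0, 55.0, 0.0)])
print(cut_in.pose_at(3.0))
```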


But so far nobody has been able to show a case where Tesla could use their volume of consumer data to actually train the neural networks in a way that generates a significant advantage, or that doing so would even make sense.

Verygreen mentioned that Tesla were able to define triggers which would send snapshots back to the mothership. I guess this would be useful for spotting edge cases and scenarios where they felt the system needed improvement.

It is also clear that the fleet provided HD data for the map tiles at some stage, which might have been useful for building accurate simulation models.
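
Something like the trigger mechanism verygreen describes could be as simple as a predicate evaluated over the car’s telemetry. A hypothetical sketch; the signal names and thresholds are invented, not Tesla’s:

```python
# Hypothetical sketch of a fleet-side trigger: a predicate over telemetry
# that decides whether to queue a camera snapshot for upload. The signal
# names and thresholds are invented, not Tesla's actual system.
from dataclasses import dataclass

@dataclass
class Telemetry:
    autopilot_engaged: bool
    driver_override: bool            # driver grabbed the wheel or braked
    min_detection_confidence: float  # lowest object-detector score, this frame

def should_upload_snapshot(t: Telemetry) -> bool:
    # The driver disagreed with the system: likely an edge case worth seeing.
    if t.autopilot_engaged and t.driver_override:
        return True
    # Perception was unsure what it was looking at.
    if t.min_detection_confidence < 0.4:
        return True
    return False

print(should_upload_snapshot(Telemetry(True, True, 0.9)))    # True
print(should_upload_snapshot(Telemetry(True, False, 0.95)))  # False
```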
 
Verygreen mentioned that Tesla were able to define triggers which would send snapshots back to the mothership. I guess this would be useful for spotting edge cases and scenarios where they felt the system needed improvement.

It is also clear that the fleet provided HD data for the map tiles at some stage, which might have been useful for building accurate simulation models.

Mapping, of course. Validation and testing, of course. There is a lot Tesla can do with a software updateable platform, especially if their sensor suite ends up being sufficient for the eventual task at hand.

Using 10 billion miles of consumer driving data to actually train neural networks as @strangecosmos is suggesting as a unique Tesla advantage, though? That seems questionable to me.

What @strangecosmos suggests is a unique volume advantage for Tesla in training their neural networks that nobody else would have — hence his excitement for Tesla again in post #1 of this thread. It sounds good too, Tesla gathering 10 billion miles of driving data nobody else has access to and feeding that to a machine learning process.

The problem is, the only things we’ve seen so far that make any sense are limited-volume labelling at best, which many others are likely doing at similar or even larger volumes. It is questionable whether those 10 billion miles (the advantage, that is) can actually be used to train networks in a useful way. Everyone does simulation. MobilEye is likely ahead already in mapping and in gathering driving policy testing data from EyeQ4 etc.
 
That is not my point at all. Using simulation to build the data set means no need for hand labelling. The real effort is to make the simulation accurate.

For example, none of the cars in this video seem to have a fixed or predictable trajectory. It is probably easier to define exact paths in the simulation for this case rather than use dynamically computed behaviours (e.g. GTA5).

In the blog post and paper, Waymo says that one potential application of supervised imitation learning is to train a neural network to drive like a human, so that a simulation can be populated with realistically behaving drivers that accurately reflect real world driving.

The training dataset for supervised imitation learning is compiled from real world driving by humans. But it still needs no hand labelling.
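
This is worth spelling out, because it is the whole trick: in imitation learning the “label” for a given moment is just what the human driver did next, which is already in the log. A rough sketch, with field names of my own invention rather than the paper’s:

```python
# Rough sketch of why imitation learning needs no hand labelling: the
# "label" for each timestep is the human driver's future pose, which is
# already in the log. Field names here are illustrative, not the paper's.
def build_imitation_dataset(log, horizon=10):
    """log: per-timestep dicts with 'features' (the mid-level scene
    representation) and 'pose' (the ego car's x, y, heading)."""
    examples = []
    for t in range(len(log) - horizon):
        state = log[t]["features"]         # input: rendered scene at t
        target = log[t + horizon]["pose"]  # label: where the human went
        examples.append((state, target))
    return examples

log = [{"features": f"scene_{t}", "pose": (float(t), 0.0, 0.0)} for t in range(30)]
print(len(build_imitation_dataset(log)), "examples, zero hand labels")
```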

I wonder if it would work to not bother simulating the behaviour of other drivers, but, as you suggested, simply “re-play” the mid-level representations collected from a real life driving situation. Hm. The worry is that it wouldn’t generalize, but maybe it would if you had billions or trillions of situations.
 
What @strangecosmos suggests is a unique volume advantage for Tesla in training their neural networks that nobody else would have — hence his excitement for Tesla again in post #1 of this thread. It sounds good too, Tesla gathering 10 billion miles of driving data nobody else has access to and feeding that to a machine learning process.

Yeah, a little misguided. It likely IS an advantage, but IMO not for any of the reasons stated in that post. For me, the advantage is in validation, which of course feeds back into future training sets.

In the blog post and paper, Waymo says that one potential application of supervised imitation learning is to train a neural network to drive like a human, so that a simulation can be populated with realistically behaving drivers that accurately reflect real world driving.

The training dataset for supervised imitation learning is compiled from real world driving by humans. But it still needs no hand labelling.

I wonder if it would work to not bother simulating the behaviour of other drivers, but, as you suggested, simply “re-play” the mid-level representations collected from a real life driving situation. Hm. The worry is that it wouldn’t generalize, but maybe it would if you had billions or trillions of situations.

What does "drive like a human" mean? This is unquantifiable.

In the video I posted, every single driver has a different driving style, different motivations, different concepts of how much they can bully other drivers into giving way, different levels of attentiveness, etc.

You don't need "billions or trillions" to generalise. What exactly are you trying to generalise anyway? Driving like a human? LOL

Finally, I don't buy your comment regarding "no hand labelling". Where is the context going to come from? What if the real-world driving doesn't include a roundabout? Or the only roundabout in the set was taken by a driver who missed the exit the first time round? Do you want the car to imitate this mistake?
 
I haven’t posted this key info in the thread yet. Here are the results of Waymo’s simulated tests. It sounds like ChauffeurNet drove correctly 100% of the time on a few types of situation, since Waymo didn’t give numbers and just said:

“Here, we present results from experiments using the various models in the closed-loop simulation setup. We first evaluated all the models on simple situations such as stopping for stop-signs and red traffic lights, and lane following along straight and curved roads by creating 20 scenarios for each situation, and found that all the models worked well in these simple cases. Therefore, we will focus below on specific complex situations that highlight the differences between these models.”
On the three more complex types of scenario, ChauffeurNet still drove correctly 85%+ of the time in its final iteration:
[Image: table of ChauffeurNet’s closed-loop test results on the three complex scenario types]

The 10% crash rate for nudging around a parked car is better than it first appears:

“Note that in this scenario, we generate several variations by changing the starting speed of the agent relative to the parked car. This creates situations of increasing difficulty, where the agent approaches the parked car at very high relative speed and thus does not have enough time to nudge around the car given the dynamic constraints. A 10% collision rate for M4 is thus not a measure of the absolute performance of the model since we do not have a perfect driver which could have performed well at all the scenarios here.”
This is also true for slowing down for a slow car:

“For the variation with the largest relative speed, there isn’t enough time for most models to stop the agent in time, thus leading to a collision.”
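
The setup both excerpts describe, sweeping the starting speed until avoiding contact becomes physically impossible, is easy to picture in code. A sketch with my own illustrative numbers, not the paper’s:

```python
# Sketch of the variation sweep Waymo describes: increase the agent's
# starting speed relative to the obstacle until stopping in time becomes
# physically impossible. All numbers are illustrative, not the paper's.
def min_stopping_distance(v, decel=8.0):
    """Metres needed to brake from v (m/s) at a hard but feasible decel."""
    return v * v / (2.0 * decel)

gap = 40.0  # metres of free road before the parked (or slow) car
for v in range(5, 31, 5):  # starting speeds, m/s
    print(f"start speed {v:2d} m/s -> avoidable: {min_stopping_distance(v) < gap}")
# Past some speed no controller can avoid contact, so a 10% collision
# rate over the whole sweep is not an absolute measure of the model.
```
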
It would be interesting to put humans in driving simulators (similar to the flight simulators that pilots train on) and see how humans perform on the same situations. How close is ChauffeurNet to human performance in these situations? I wish every paper on autonomous vehicles included a human benchmark section.

Waymo ends the paper by saying this:

“...the model is not yet fully competitive with motion planning approaches but we feel that this is a good step forward for machine learned driving models. There is room for improvement: comparing to end-to-end approaches, and investigating alternatives to imitation dropout are among them. But most importantly, we believe that augmenting the expert demonstrations with a thorough exploration of rare and difficult scenarios in simulation, perhaps within a reinforcement learning framework, will be the key to improving the performance of these models especially for highly interactive scenarios.”​
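
On “imitation dropout”: as I read the paper, it means randomly zeroing the imitation losses on a subset of training examples so that the environment losses (collision, off-road, etc.) have to carry them. A toy sketch of that idea; the loss terms and the 0.5 rate are stand-ins, not the paper’s exact setup:

```python
# Toy sketch of imitation dropout as I read the paper: on a random subset
# of examples, zero the imitation loss so the environment losses have to
# carry them. The loss terms and 0.5 rate are stand-ins, not the paper's.
import random

def training_loss(predicted, batch, drop_prob=0.5):
    # Stand-in imitation loss: squared error against the human trajectory.
    imitation = sum((p - h) ** 2 for p, h in zip(predicted, batch["human"]))
    # Stand-in environment losses: penalties computed by the simulator.
    environment = batch["collision_penalty"] + batch["offroad_penalty"]
    # Imitation dropout: sometimes let only the environment losses speak.
    w_imit = 0.0 if random.random() < drop_prob else 1.0
    return w_imit * imitation + environment

batch = {"human": [0.0, 1.0], "collision_penalty": 0.2, "offroad_penalty": 0.0}
print(training_loss([0.1, 0.9], batch))
```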
 
Related thread: Levandowski flips the table on L4 narrative

Anthony Levandowski’s new company Pronto AI seems to be pursuing a machine learning approach to path planning.

For a Level 2 driver’s aid, where the driver is the crutch and responsible for any mistakes the car makes, this can certainly be done, even the end-to-end part, as Nvidia has shown earlier. How long until that can mature into car-responsible driving, though?

Of course Levandowski did all this without 10 billion miles of training data because training neural networks doesn’t really work that way.
 
Okay there have been some pretty bad personal attacks in this thread. Those posts are being moved to snippiness. Further actions will also be taken.

Please note that all content with a personal attack or quoting a personal attack will be moved. If useful babies are moved with the bath water, that's your own fault. You can repost it.
 
Okay there have been some pretty bad personal attacks in this thread. Those posts are being moved to snippiness. Further actions will also be taken.

Please note that all content with a personal attack or quoting a personal attack will be moved. If useful babies are moved with the bath water, that's your own fault. You can repost it.
So, does it mean we are now left with pretty much a single serial poster who, because he decided to block a bunch of people he was disagreeing with, pretty much drives a one-way communication with mostly himself?

That, to me, is the real passive-aggressive behavior. But it seems "language-aggressive" is the real benchmark as to what is ok vs. not. That's a shame. Oh, and the serial-poster is advertising his own forums on TMC and trying to get people there in a not-so-subtle way.
 
So, does it mean we are now left with pretty much a single serial poster who, because he decided to block a bunch of people he was disagreeing with, pretty much drives a one-way communication with mostly himself?

That, to me, is the real passive-aggressive behavior. But it seems "language-aggressive" is the real benchmark as to what is ok vs. not. That's a shame. Oh, and the serial-poster is advertising his own forums on TMC and trying to get people there in a not-so-subtle way.

No, it just means there are consequences for violating the terms of service. The language used was not subtle. Ad hominem attacks just lower the tone and undermine your own arguments.

Everyone is welcome to repost constructive content, as long as it is polite and respectful.
 
P.P.S. Those who are interested in good faith discussion: I would encourage you to put everyone who’s replied to this thread, except heltok, on your ignore list, and/or to check out Gradient Descent which is a moderated autonomous car forum.

strangecosmos said:
gradientdescent.co — a place for deep, detailed, and respectful discussion of autonomous car technology (a complement to TMC)

PSA: Threads on Gradient Descent will never be derailed by repetitive debates about Mobileye. There is careful moderation to prevent trolling and sockpuppeting. The forum guidelines are kindness, open-mindedness, and advancing the discussion from a technical perspective. Oh, and jimmy_d has mod powers. :D

21. Competing Sites, Businesses, Individuals and Organizations
You agree not to attempt to gain traffic from TMC in any way, including but not limited to, links in posts, signatures, profile information, PM's, or emails. This applies to, but is not limited to competing sites, businesses, individuals or organizations. You accept that TMC has the right to erase any information displayed/posted on TeslaMotorsClub.com that may push traffic towards competitors.

Would you, @Doug_G, consider @strangecosmos' continued advertising of a competing forum he created to move a portion of the conversation over a violation of the terms of service? Including in his signature and across several threads in Autonomous Vehicles?
 
So long, TMC.

When a forum's moderation is so incompetent that it sides with the crackpots over the experts, it's a waste of time to participate.

(NB: I used to run a science forum with 10M+ daily views -- the first one ever to allow LaTeX mathematics in posts. The crackpots were attracted to us like moths to a bug light, because they craved the legitimacy of being seen on our site. The best thing we ever did was to establish a firm set of rules that allowed us to get rid of them.)
 
Interesting finding from Baidu.

As we know, with 1 million training images (1,000 per semantic class), you can get superhuman accuracy on ImageNet.

But with 25,000 training images (25 per semantic class), Baidu found you get ~0% accuracy:

“For small training sets—less than roughly 25 images per class—these error metrics are roughly equal to the model random guessing...”​

There are some training datasets that are too small to make any progress on solving a problem. Once you cross a certain threshold of dataset size, you suddenly begin to make progress. This is something to keep in mind with neural networks. If a problem seems intractable, one possible cause is that the dataset is just too small.

In some cases, increasing a dataset 40x can be the difference between “it doesn’t work at all” and “it’s superhuman”.

Similarly, somewhere between 25,000 and 1 million training images, convolutional neural networks go from “it works a bit” to “it’s superhuman”. Sometimes increasing a dataset by less than 40x is enough to solve a problem. You can’t automatically conclude in every instance that “it only works a bit, and no additional amount of training data will make it work”. Sometimes that will be true, and sometimes it won’t.
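
The Baidu result is essentially a learning curve: train the same model on nested subsamples of the data and watch where accuracy takes off. A sketch of the experimental shape, using a toy task and classifier rather than Baidu’s setup:

```python
# Sketch of the learning-curve experiment behind the Baidu result: train
# the same model on nested subsamples of the data and see where test
# accuracy takes off. Toy 4-class task and classifier, not Baidu's setup.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=20000, n_classes=4, n_informative=8,
                           random_state=0)
X_test, y_test = X[15000:], y[15000:]

for n in (20, 200, 2000, 15000):  # nested training-set sizes
    model = LogisticRegression(max_iter=1000).fit(X[:n], y[:n])
    print(f"{n:5d} training examples -> test accuracy "
          f"{model.score(X_test, y_test):.2f}")
# Accuracy climbs steeply with n; Baidu saw the same shape at much
# larger scale, with a near-chance floor below roughly 25 images per class.
```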

With Waymo’s imitation learning paper, on the types of situation that ChauffeurNet was trained and tested on, it apparently performed perfectly on some, or at least so well that Waymo didn’t bother to publish the test results. In one of the types of situation Waymo did report (random perturbations), ChauffeurNet had a 100% success rate. In the other two, it is difficult to assess, since we don’t have a human benchmark, and Waymo says a human driver may not have been able to perform better. Put people in driving simulators, yo!

Waymo concludes the paper by saying/implying ChauffeurNet is not yet fully competitive with Waymo’s current hybrid machine learning/explicit reasoning system, but they don’t share what led them to that conclusion. Did they run tests? Was it their qualitative assessment? I wish they had expanded on this more, to let us know what the weaknesses of ChauffeurNet are relative to Waymo’s current system. Perhaps they didn’t want to reveal any information about Waymo’s proprietary technology.

The most insight we get is under the Failure Modes section on pages 16-17:

“At our ground resolution of 20 cm/pixel, the agent currently sees 64 m in front and 40 m on the sides and this limits the model’s ability to perform merges on T-junctions and turns from a high-speed road. Specific situations like U-turns and cul-de-sacs are also not currently handled, and will require sampling enough training data. The model occasionally gets stuck in some low speed nudging situations. It sometimes outputs turn geometries that make the specific turn infeasible (e.g. large turning radius). We also see some cases where the model gets over aggressive in novel and rare situations for example by trying to pass a slow moving vehicle. We believe that adequate simulated exploration may be needed for highly interactive or rare situations.”
It’s important to note that Waymo attributes some of these failure modes to insufficient training data, and suggests adding more real or synthetic training data as a potential solution. The relevant measure of training data here is probably number of training examples per type of situation, but with driving there may be a long tail of an indefinite number of types of situations. One advantage of collecting 1 billion, or 10 billion, or 25 billion miles of mid-level representations data from real world driving is that you would be able to train a neural network on all kinds of rare situations that human engineers might never think to simulate. Including thousands of crashes and near-crashes.
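
One way to make “training examples per type of situation” concrete: tag each logged segment with the situation types it contains, count per tag, and look at the tail of the histogram. A hypothetical sketch; the tags and the example log are invented:

```python
# Hypothetical sketch of mining fleet logs for rare situation types:
# tag each logged segment, count per tag, and surface the long tail.
# Tags and the example log are invented for illustration.
from collections import Counter

segments = [
    {"tags": ["lane_follow"]}, {"tags": ["lane_follow"]},
    {"tags": ["stop_sign"]}, {"tags": ["nudge_parked_car"]},
    {"tags": ["lane_follow", "cut_in"]}, {"tags": ["u_turn"]},
]
counts = Counter(tag for seg in segments for tag in seg["tags"])

# The long tail: situation types with too few examples to train on.
rare = [tag for tag, n in counts.items() if n < 2]
print(counts)
print("needs more data, real or simulated:", rare)
```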

So, Waymo believes that one way to fix some of the failure modes and improve the system is more training data. This highlights the importance of training data for machine learning approaches to path planning. Don’t take my word for it — just read what Waymo wrote.

Andrew Ng’s advice to deep learning engineers is: “no matter where you’re stuck, with modern deep learning tools we have a clear path for making progress... In particular, no matter what your problem is — overfitting or underfitting, really high bias or high variance or maybe both — you always have at least one action you can take, which is: bigger model or more data. So, in the deep learning era... it feels like we more often have a way out of whatever problem we’re stuck in.”
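
Ng’s advice boils down to a simple decision procedure over training and validation error. A sketch of it as code; the thresholds are mine, the logic is his:

```python
# Ng's "bigger model or more data" advice as a decision procedure.
# Thresholds are illustrative; the point is that high bias and high
# variance each map to at least one concrete action.
def next_action(train_err, val_err, human_err=0.0):
    if train_err - human_err > 0.05:  # high bias: underfitting
        return "bigger model (or train longer)"
    if val_err - train_err > 0.05:    # high variance: overfitting
        return "more data (or regularize)"
    return "done, or move the goalposts"

print(next_action(train_err=0.20, val_err=0.22))  # -> bigger model
print(next_action(train_err=0.02, val_err=0.15))  # -> more data
```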

This is something I keep in mind when thinking about neural networks.
 
So, Waymo believes that one way to fix some of the failure modes and improve the system is more training data. This highlights the importance of training data for machine learning approaches to path planning. Don’t take my word for it — just read what Waymo wrote.

This is what Waymo wrote:

“We believe that adequate simulated exploration may be needed for highly interactive or rare situations.”

"Adequate" is a comment on quality, not quantity.
 
Waymo believes that one way to fix some of the failure modes and improve the system is more training data. This highlights the importance of training data for machine learning approaches to path planning. Don’t take my word for it — just read what Waymo wrote.

I certainly hope nobody takes your word for it, since Waymo said no such thing.

You seem obsessed with the quantity of training data (no wonder, since Tesla gathers generic training data), when it is the nature of the data that counts much more.

Waymo makes it clear that specific training data generated by e.g. simulators could be a solution to improving NN performance, not adding more and more generic training data from a consumer fleet going about its daily business.