What do you think they are doing with this Project Dojo?
One simple answer might be that they will just upload a crap ton of data without doing any manual labeling, and simply assume the driver's input is the ground truth.
That is, the driver's input is the label. Even though the driver will sometimes do the wrong thing, that is such a small percentage of the data that they expect to get most of the way to good accuracy without worrying about it.
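The "driver input as label" idea can be sketched roughly as below. This is purely illustrative, not anything Tesla has published; the `Frame` fields and `auto_label` helper are hypothetical stand-ins for a real logging pipeline.

```python
# Hypothetical sketch: each logged camera frame is paired with whatever the
# driver actually did at that moment, so the dataset labels itself with no
# human annotation. All names here are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class Frame:
    image_id: str    # stand-in for raw camera data
    steering: float  # what the driver did when this frame was captured
    throttle: float

def auto_label(log):
    """Turn a drive log into (input, label) pairs: the driver IS the labeler."""
    return [(f.image_id, (f.steering, f.throttle)) for f in log]

log = [Frame("img_001", -0.1, 0.3), Frame("img_002", 0.0, 0.4)]
dataset = auto_label(log)
# Some fraction of these labels will be "wrong" (driver error), but the bet
# is that with enough volume the noise washes out during training.
```

This is essentially imitation learning (behavioral cloning): the model is trained to reproduce the human's control outputs given the same sensor inputs.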
BTW, I surmise the reason they can get away with this is that they are still using the NNs mainly for perception, which may be good enough for now. But I would guess the next-gen chip is expected to do more end-to-end deep learning for driving, which requires much more data and compute (for both training and inference).
That's their next-level goal.
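The architectural shift being speculated about can be sketched as follows. This is a toy contrast under my own assumptions, with made-up function names and trivial logic standing in for real networks: today's stack uses a learned perception component feeding hand-written control code, while an end-to-end approach would learn the whole pixels-to-controls mapping.

```python
# Toy contrast of the two architectures; every name and rule here is
# hypothetical, standing in for real (much larger) networks.

def perception_nn(pixels):
    # Learned component: raw pixels -> detected objects (toy threshold).
    return ["car_ahead"] if sum(pixels) > 5 else []

def rule_based_planner(objects):
    # Hand-written control logic consumes the perception output.
    return {"brake": 1.0} if "car_ahead" in objects else {"throttle": 0.5}

def current_stack(pixels):
    # Today (speculated): NN only for perception, rules for control.
    return rule_based_planner(perception_nn(pixels))

def end_to_end_nn(pixels):
    # Next gen (speculated): one network maps raw input directly to
    # controls, trained on the self-labeled driving data above. Far more
    # data- and compute-hungry, hence the need for Dojo-scale training.
    return {"brake": 1.0} if sum(pixels) > 5 else {"throttle": 0.5}
```

The end-to-end version has no hand-written planner to fall back on, which is why it needs vastly more training data to cover rare situations that rules would otherwise handle explicitly.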