The way Andrej Karpathy tells it, making HW2 Teslas autonomous is mostly a matter of neural networks, and neural networks are mostly a matter of creating data sets. He says neural network architecture is much less of a focus for him than data sets. In the video, Karpathy is specifically talking about deep supervised learning for computer vision neural networks.
If a company wanted to use production fleet data to train a neural network for path planning, it might be able to use sensor data to capture real world situations that can be re-created in simulation. This would help the simulated cars, pedestrians, bikes, etc. reflect the behaviour of the real entities. They could then use reinforcement learning to train the neural network on path planning in simulation.
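The loop described above — recreate fleet-recorded scenarios in a simulator, then train a planner with reinforcement learning — can be sketched with a toy example. Everything below is hypothetical: a 2-lane grid standing in for a simulator, and plain tabular Q-learning standing in for whatever training a company would actually use.

```python
import random

random.seed(0)

# Hypothetical "simulator": a 2-lane road, positions 0..4, with an
# obstacle blocking lane 0 at position 2 (like a recorded scenario).
LANES, LENGTH = 2, 5
OBSTACLE = (0, 2)
GOAL_POS = 4
ACTIONS = ("forward", "switch")  # drive ahead, or change lanes in place

def step(state, action):
    """One simulator tick: apply the action, then score the outcome."""
    lane, pos = state
    if action == "forward":
        pos += 1
    else:
        lane = 1 - lane
    if (lane, pos) == OBSTACLE:
        return (lane, pos), -10.0, True   # collision ends the episode
    if pos >= GOAL_POS:
        return (lane, pos), 10.0, True    # reached the end of the segment
    return (lane, pos), -1.0, False       # small per-step cost

# Tabular Q-learning over the tiny state space
Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

alpha, gamma, eps = 0.5, 0.9, 0.2
for episode in range(500):
    state, done = (0, 0), False
    while not done:
        if random.random() < eps:
            action = random.choice(ACTIONS)         # explore
        else:
            action = max(ACTIONS, key=lambda a: q(state, a))  # exploit
        nxt, reward, done = step(state, action)
        target = reward if done else reward + gamma * max(q(nxt, a) for a in ACTIONS)
        Q[(state, action)] = q(state, action) + alpha * (target - q(state, action))
        state = nxt

# Greedy rollout: the learned policy should change lanes before the obstacle.
state, done, path = (0, 0), False, [(0, 0)]
for _ in range(20):              # safety cap on rollout length
    action = max(ACTIONS, key=lambda a: q(state, a))
    state, reward, done = step(state, action)
    path.append(state)
    if done:
        break
print(path)
```

The point of the sketch is the structure, not the scale: the simulated scenario supplies the environment, and the reward signal (collision vs. reaching the goal) replaces hand-labeled trajectories.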
A company could use driver disengagements from ADAS like Autopilot to flag the real world situations where path planning fails and needs additional training in simulation, or in structured tests on private roads.
What do you think are the main bottlenecks in developing full autonomy?
I realize what Andrej Karpathy has said, and I agree that perhaps the main focus of the Autopilot team right now is building the neural networks.
"it might be able to use sensor data to capture real world situations that can be re-created in simulation"
Yes, this is true. However, they cannot simulate the effects of the NN in the environment of the recorded data... they could find cases from their fleet to create more scenarios in their test fleet, sure... but that's not really harvesting data for NN training.
"A company could use driver disengagements from ADAS like Autopilot to flag the real world situations where path planning fails and needs additional training in simulation, or in structured tests on private roads."
Yes, they could.
"What do you think are the main bottlenecks in developing full autonomy?"
This question is complicated, and I'm not sure what you are asking... if you are asking what will limit the rollout and availability of fully autonomous vehicles in the next 3-5 years, I would say:
building and maintaining HD maps; mass-producing sensors and other components; and integrating them into high-volume vehicles at economical price points.
If you are asking what takes a long time for any given player to develop and ready their fully autonomous system, the answer would be something different.
Woah, looks like even pedestrians will be visualized in v9:
Marc Benton on Twitter
really cool
I respect objective and constructive criticism of any system, but what you produce is simply FUD. You're a Mobileye fan-boy.
I do not see @Bladerskb as a Mobileye fanboy, and I find Mobileye's AP 1 system too limited to (reliably) recognize non-moving or cross-moving vehicles or objects.
Because that was not their intention... they designed the EyeQ3 in 2011-2013 to be a system for L1 ADAS vehicles and hands-on L2 systems: lane keep assist, ACC, and automatic braking, because that was the demand from automakers at that time... (and still is, honestly).
that a lot of accidents and the majority of deaths were a result of an inattentive driver
There are 3 AP deaths Wikipedia knows of, and 2 of them happened with Mobileye's AP 1!
List of self-driving car fatalities - Wikipedia If you can prove there are 1 to 2 more, please provide a reference. Same goes for the hundreds of AP accidents: prove it, or the real numbers are in the tens and not hundreds.
I know, you're now going to blame Tesla for misusing Mobileye's system, blabla...
I do not blame Tesla nor Mobileye... although Tesla did what Mobileye told them not to do... because Mobileye thought it was a safety concern...
Blame goes only to the drivers of the cars.
Do you need HD maps to drive your car? No?! Me neither, what a surprise. Maybe that's Tesla's angle: they try to only use vision (maybe with a little help from radar and ultrasonic) like humans do. We will see if they succeed with that. I'm a little sceptical about that, but let's just wait and see.
No, that is not Tesla's angle. Tesla is working on HD maps, and every company that plans to deploy or sell commercially available self-driving cars utilizes HD maps.
Humans do not need HD maps to drive, yes, but self-driving cars do not drive and think the same way that humans do.
AP1 used a reference design from Mobileye at launch. Two minutes on YouTube and you'll see other EyeQ3 implementations being driven that way.
And that reference design was to disengage immediately after the driver takes their hands off the wheel, which Tesla did not do.
Sorry, I don't know what this means?
So I just tried Drive on Nav with automatic lane changes and I am super impressed! The car even negotiates with other cars for lane changes when the lane is not free. The only downside is that exits are often indicated at some ridiculously low speeds like 30 mph, and the car obeys that, annoying everybody around.
Now to figure out how to stitch video from 8 cameras into a single picture, I guess.
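For the stitching part, one simple approach is to tile the 8 frames into a fixed grid. This is just a sketch assuming 8 same-sized RGB frames arranged 2x4 with NumPy — the frame sizes, layout, and the dummy frames are my assumptions, not how Tesla's cameras actually come out:

```python
import numpy as np

def stitch_grid(frames, rows=2, cols=4):
    """Tile same-sized H x W x 3 frames into one rows x cols mosaic."""
    assert len(frames) == rows * cols, "expected one frame per grid cell"
    grid_rows = [np.hstack(frames[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(grid_rows)

# 8 dummy 480x640 RGB frames, each a flat gray level so the demo runs anywhere
frames = [np.full((480, 640, 3), 32 * i, dtype=np.uint8) for i in range(8)]
mosaic = stitch_grid(frames)
print(mosaic.shape)  # (960, 2560, 3)
```

For live video you would run this per frame and feed the mosaic to a display or an encoder; syncing the 8 streams is the harder part and is not covered here.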
Wow this is sweet! Awesome that it is negotiating with other cars for lane changes! Can't wait to try.
So you can definitely tell all 8 cameras are being used?
It's not marked as early access, and does not have any early access disclosures either.
That, and it also turns on the turn signal; if the other car does not yield, it lets it pass and tries again, slowing down if necessary when it's an exit we are trying to make and don't want to blow past at high speed.
Wow, also very impressive! I am wondering what geographic location you are testing this in and making these observations?