Most people have assumed that Tesla would use deep convolutional nets for Tesla Vision, but not for actual end-to-end control of the car; indeed, that can be seen as dangerous. However, the lead of Tesla's AI division, Andrej Karpathy, recently wrote a blog post with an interesting take on the future of software. The example I wanted to focus on was the robotics one: a self-driving Tesla is essentially a robot, and all those blocks are what is done now for robots and self-driving. Andrej is clearly insinuating that Software 2.0 will eventually take over all of the self-driving code, but is he hinting that this could happen even in the next year or so at Tesla? A person who recently discussed this with Andrej also wrote a blog post, again arguing very clearly that deep nets (or something similar) will take over the entire program. It seems Andrej believes this. Will he implement this belief at Tesla, and if so, will it be sooner than we think?
True, though he doesn't say it will fail, only that it will need much more data than the modular approach. Tesla 'might' be able to handle that.
You seem to imply this quote is in the referenced blog post, but it is not. Did you link to the wrong post?
Yay, zombie thread. Hadn't read that Mobileye propaganda before. Perhaps if they had gone for a more holistic approach, their system wouldn't have reported that everything was fine while decapitating someone.
That is what I assume, until further notice. I have a hard time believing a single neural network can drive the car while also respecting traffic laws, etc. Sure, it works on a closed circuit for steering alone, like the NVIDIA demos.
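To make the "just for steering" point concrete, here is a minimal sketch of the end-to-end idea behind those NVIDIA demos: one network mapping a camera image directly to a steering angle, with no separate lane-detection or planning modules. The layer shapes and random weights here are illustrative assumptions, not NVIDIA's actual architecture (only the 66x200 input size follows their published PilotNet setup).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w, stride=2):
    """Naive valid convolution: x is (H, W, Cin), w is (k, k, Cin, Cout)."""
    k, _, _, cout = w.shape
    h_out = (x.shape[0] - k) // stride + 1
    w_out = (x.shape[1] - k) // stride + 1
    out = np.zeros((h_out, w_out, cout))
    for i in range(h_out):
        for j in range(w_out):
            patch = x[i * stride:i * stride + k, j * stride:j * stride + k, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def steering_angle(image):
    """Image in, one scalar out: the whole 'driving policy' is this network."""
    w1 = rng.normal(0, 0.1, (5, 5, 3, 8))    # conv filters (assumed sizes)
    x = np.maximum(conv2d(image, w1), 0)     # conv + ReLU
    x = x.reshape(-1)                        # flatten
    w2 = rng.normal(0, 0.01, (x.size, 1))    # single regression head
    return float(x @ w2)                     # predicted steering angle

# 66x200 RGB frame, the input size NVIDIA's PilotNet paper used
angle = steering_angle(rng.random((66, 200, 3)))
```

The sketch shows why pure steering regression works on a closed circuit but says nothing about traffic laws: there is no representation of rules anywhere, only a learned pixels-to-angle mapping, which is exactly the concern about scaling it to the entire program.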
Just a small point: I think they will end up closer to end-to-end, but not necessarily with one network; it might be a series of them.