Nope, this is all Tesla. I believe Tesla is able to do magical things with their NN because they own the entire stack. They can use the car's sensors (steering position, accelerometers, odometers, GPS, etc.) to measure all sorts of road characteristics that vision is then trained to derive. Because they control every aspect of their sensor positioning, angles, focal lengths, etc., along with having a huge fleet, they can label large amounts of data using "future" labels to annotate past images. That's how they're able to predict other cars' positions and speeds even when the cameras are partially blocked by the car in front, as here:
https://twitter.com/teslaownersSV/status/1320098691680665600
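The "future labels" trick is worth unpacking, as far as I understand it: offline, the labeler can see the whole clip, so a car that's occluded at time t can still be labeled at time t using where it turned out to be later. Here's a minimal NumPy sketch of the idea; every name in it is made up for illustration, since Tesla's actual pipeline isn't public:

```python
import numpy as np

def label_past_frames(track_xy, timestamps, occluded):
    """Produce position/velocity labels for every frame of a recorded
    clip of another car, including frames where it was occluded, by
    exploiting observations from the *future* of the clip.

    track_xy:   [T, 2] observed positions in the ego frame
    timestamps: [T] seconds
    occluded:   [T] bool, True where the camera view was blocked
    """
    visible = ~occluded
    # 1. Fill occluded gaps by interpolating between the frames where the
    #    car WAS visible -- only possible offline, because we know the future.
    filled = np.stack(
        [np.interp(timestamps, timestamps[visible], track_xy[visible, d])
         for d in range(2)], axis=1)
    # 2. Smooth with a non-causal filter: each label mixes future and past
    #    observations, which an online (real-time) model never gets to see.
    kernel = np.ones(5) / 5.0
    smooth = np.stack(
        [np.convolve(filled[:, d], kernel, mode="same") for d in range(2)],
        axis=1)
    # 3. Differentiate the smoothed track for velocity labels.
    velocity = np.gradient(smooth, timestamps, axis=0)
    # Crucially, the labels for occluded frames are KEPT: that's what lets
    # a network be trained to estimate the position/speed of a car it can
    # barely see, as in the clip above.
    return smooth, velocity
```

The same pattern works for anything the fleet can measure after the fact: road geometry from where cars actually drove, stop positions from where they actually stopped, and so on.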
Edit: who am I kidding? I'm probably not even scratching the surface of what Tesla is doing to achieve mindboggling NN predictions.
This is the same old hydranet from 2018, with some improvements, that Tesla came out with to finally match Mobileye in deployed neural networks. You're telling me that finally having what someone else has always had means you're 10 years ahead and that what you're doing is mind-boggling and magical?
Green "same hydranet pretty much"
https://twitter.com/greentheonly/status/1301170668428578818
https://twitter.com/greentheonly/status/1321134418300489728
https://twitter.com/greentheonly/status/1318344088253571072
As green said, "The outputs of the hydranets are then ingested into BEV NN(s)":
https://twitter.com/greentheonly/status/1320947823530123265
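For anyone wondering what that means architecturally: a "hydranet" is just a shared backbone with multiple task heads, and the BEV net is a second stage that fuses the per-camera results into a top-down grid. A toy PyTorch sketch, with invented layer sizes and channel counts (the exact tensors Tesla passes between the two stages aren't public):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HydraNet(nn.Module):
    """Shared backbone, many task heads ("hydra"). Runs once per camera."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({
            "objects":    nn.Conv2d(feat_ch, 8, 1),  # car/ped/cyclist logits
            "road_edges": nn.Conv2d(feat_ch, 2, 1),
            "path":       nn.Conv2d(feat_ch, 1, 1),
        })

    def forward(self, img):
        feats = self.backbone(img)
        return {name: head(feats) for name, head in self.heads.items()}

class BEVNet(nn.Module):
    """Ingests per-camera hydranet outputs and decodes a bird's-eye grid.
    Real systems learn the image->BEV projection; a plain resize stands
    in for it here to keep the sketch short."""
    def __init__(self, ch_per_cam=11, n_cams=8, bev_size=64):
        super().__init__()
        self.fuse = nn.Conv2d(ch_per_cam * n_cams, 64, 1)
        self.decode = nn.Conv2d(64, 4, 1)  # e.g. BEV occupancy/lane channels
        self.bev_size = bev_size

    def forward(self, per_cam):
        x = torch.cat(per_cam, dim=1)  # stack all cameras' head outputs
        x = F.interpolate(x, size=(self.bev_size, self.bev_size))
        return self.decode(F.relu(self.fuse(x)))

hydra, bev_net = HydraNet(), BEVNet()
cams = [torch.randn(1, 3, 256, 256) for _ in range(8)]  # 8 camera feeds
per_cam = [torch.cat(list(hydra(c).values()), dim=1) for c in cams]  # 8+2+1 ch
print(bev_net(per_cam).shape)  # torch.Size([1, 4, 64, 64])
```

Nothing in this wiring is exotic, which is rather the point: any team with multi-task perception outputs can bolt a BEV stage on top.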
Has anyone found any potentially FSD-breaking issues with this beta?
At this point, I can only see them solving everything to average human level within the next 1-2 years at most.
Basically, it looks like Tesla has won the race to a widely deployed FSD system. Just my take.
Let me get this straight. Tesla releases an update that primarily consists of software 1.0: classical, conventional C++ control algorithms that everyone else is using too. In fact, some others use far more ML and NNs in their driving-policy stack. But by using their two-year-old hydranet, which they deployed to catch up with Mobileye's EyeQ4 from 2017, they've won self-driving?
Wait, what? So:
https://twitter.com/greentheonly/status/1301170668428578818
From what I'm seeing of the FSD beta, Tesla is very close to a level 5 system that is nowhere near as safe as a human but *is capable* of navigating all of the routinely driven roads.
So that means Mobileye already achieved level 5 back in 2017, and we're not even talking about their EyeQ5 NNs yet.
It's crazy I even think this, but 6-9 months lol.
Do you even know that most lidar classification uses BEV networks? That BEV networks are industry standard, and that even Andrej said the same thing?
Here's a BEV network from 2018 (with its 2020 follow-up), for example:
BirdNet: a 3D Object Detection Framework from LiDAR information
https://arxiv.org/pdf/2003.04188.pdf
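The core of that family of detectors is just rasterizing the point cloud into a top-down image. A rough NumPy sketch of the usual BEV encoding (height/intensity/density channels); the grid ranges and normalization constant are illustrative, not the paper's exact settings:

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), cell=0.1):
    """points: [N, 4] array of (x, y, z, intensity) lidar returns.
    Returns a 3-channel top-down "image" (height, intensity, density)
    that an ordinary 2D detection network can consume."""
    W = int((x_range[1] - x_range[0]) / cell)
    H = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((3, H, W), dtype=np.float32)

    # Keep points inside the grid, then bucket each one into its cell.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    x, y, z, inten = points[m].T
    col = ((x - x_range[0]) / cell).astype(int)
    row = ((y - y_range[0]) / cell).astype(int)

    np.maximum.at(bev[0], (row, col), z)     # max height per cell (zero floor)
    np.add.at(bev[1], (row, col), inten)     # summed intensity
    np.add.at(bev[2], (row, col), 1.0)       # raw point count
    hit = bev[2] > 0
    bev[1][hit] /= bev[2][hit]               # mean intensity per occupied cell
    bev[2] = np.minimum(bev[2] / 16.0, 1.0)  # normalized density
    return bev
```

Once the cloud is an image, everything downstream is a standard 2D detector, which is why BEV representations spread through the lidar world years ago.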
The old 2018 hydranets' outputs literally feed into their BEV networks: detected objects (cars, peds, cyclists, etc.), road edges and markings, and path prediction.
Mobileye could literally have used the outputs from their 2017 EyeQ4 to build their own BEV network back then, if they'd wanted to. (Figure #1 contains a real-world example of their 3D all-angle vehicle detection.)
That's even more the case with their next-gen EyeQ5.
Yet to you, tying a bunch of networks that Mobileye has had since 2017 into a run-of-the-mill, industry-standard BEV network and writing some classical, conventional control algorithms means Level 5, with human-level disengagement stats, will be achieved in just over 6 months.
This is a prime example of willful ignorance.
We already have BMW releasing software that stops and goes at traffic lights and stop signs, and it works anywhere (the entire EU, the UK, Canada, China, the US, etc.).
That's a control algorithm from a dev team (BMW / Tier 1 supplier ZF) that isn't anywhere near the cream of the crop.
Yet you think turning at intersections and overtaking somehow means Level 5, at a safety level near, at, or better than a human's, will be achieved in 6 months.
EDIT:
Figure #1
NIO
BMW
VW
Nissan