What's the reason, then, that GM isn't doing it this way, but instead needs to rely on LIDAR-generated maps for a limited set of roads? Why does it want to do more work for less return? I'm not blaming EyeQ3, which only provides vision recognition, for this, but rather GM's ability to add the rest. Remember, Tesla built everything other than vision for AP1, which is still better than Super Cruise now. Not to mention that in the beginning AP1 didn't even have hands-on-wheel or eye checks, the one thing Super Cruise can brag about.
Super Cruise has won every head-to-head matchup vs AP1 and AP2. It is truly stunning. A lot of the reviews talked about the wobbling and ping-ponging of AP and how SC is like being on a train track. I'll post all the reviews sometime when I have free time.
Secondly, GM's Ushr maps were not used for direct control; control uses the outputs from EyeQ3. Lastly, GM is ditching Ushr in Super Cruise 2.0, coming out late this year.
The biggest difference between them, aside from the fact that Mobileye is late to the NN game
Mobileye was the first to use a NN in a consumer chip (EyeQ3).
They were also the first to ship a dedicated NN processor (EyeQ4). And their EyeQ5, which will be available in the first half of 2019, is the most efficient NN processor for SDCs.
, is that Tesla is doing the full solution while Mobileye is only providing a part of it.
That's not true anymore. While Mobileye previously only provided perception (with respect to ADAS), for SDCs they are now providing the full stack as well: perception, mapping, and driving policy. OEM partners can choose either a full AV kit or a single node like perception or driving policy (a rough sketch of that modular split is below). They also provide their own sensor hardware stack (camera, radar & lidar).
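To make the "full kit or single node" idea concrete, here's a minimal Python sketch of that kind of modular stack. All the names here are mine for illustration; nothing below is Mobileye's actual API:

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical sketch of a modular AV stack like the one described above.
# The idea: take the full kit, or plug your own module into any slot.

class PerceptionNode(Protocol):
    def detect(self, sensor_frame: bytes) -> list: ...

class MappingNode(Protocol):
    def localize(self, detections: list) -> tuple: ...

class DrivingPolicyNode(Protocol):
    def plan(self, pose: tuple, detections: list) -> dict: ...

@dataclass
class AVKit:
    """Full stack: an OEM can take all three nodes, or swap any one out."""
    perception: PerceptionNode
    mapping: MappingNode
    policy: DrivingPolicyNode

    def step(self, sensor_frame: bytes) -> dict:
        detections = self.perception.detect(sensor_frame)
        pose = self.mapping.localize(detections)
        return self.policy.plan(pose, detections)

# e.g. an OEM buying only perception and supplying its own mapping/policy:
# kit = AVKit(perception=VendorPerception(), mapping=OEMMaps(), policy=OEMPolicy())
```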
Then there is the most critical issue: who is going to be responsible for the machine learning? The chip makers? The car makers? Or do they do it together in a slow process? I don't think they could ever catch Tesla that way. The same applies to Tesla FSD, where I'd say the writing is on the wall even without considering how far ahead Tesla already is.
How about Tesla catch Mobileye first? The current NN in AP is ridiculously inaccurate at detecting and tracking objects like cars, something the last-gen EyeQ3 from 2014 was able to do effortlessly.
In fact, Tesla's current NN can't detect road markings, barriers, curbs, concrete barriers, light poles, traffic signs, road signs, potholes, traffic lights, intersections, stop lines, traffic-light relevancy, drivable paths, road debris, general objects, and a lot more. Nor does it do HD map harvesting. Yet EyeQ4 does all of this (sketched roughly below) and was released in late 2017. Sure, the automakers are slow to integrate; their slowness comes from their waterfall approach and lack of OTA updates. But that is changing: a number of Level 2+ and Level 3 systems using EyeQ4 are coming out this year.
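Just to show what "all of this" actually covers, here's a rough sketch of the kind of multi-task per-frame output a perception chip in that class would have to emit. The field names are entirely hypothetical, not EyeQ4's real interface:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical per-frame output of a multi-task perception stack covering the
# categories listed above. Field names are illustrative, not EyeQ4's real API.

@dataclass
class PerceptionFrame:
    vehicles: List[dict] = field(default_factory=list)         # detected + tracked cars
    lane_markings: List[dict] = field(default_factory=list)    # road markings
    barriers: List[dict] = field(default_factory=list)         # incl. concrete barriers, curbs
    poles_and_signs: List[dict] = field(default_factory=list)  # light poles, traffic/road signs
    traffic_lights: List[dict] = field(default_factory=list)   # each with a relevancy flag
    stop_lines: List[dict] = field(default_factory=list)
    intersections: List[dict] = field(default_factory=list)
    hazards: List[dict] = field(default_factory=list)          # potholes, debris, general objects
    drivable_paths: List[dict] = field(default_factory=list)   # free-space / path proposals
    map_harvest: List[dict] = field(default_factory=list)      # landmarks for HD-map building
```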
The point is, Mobileye's EyeQ4 has far greater capability, accuracy, and efficiency than Tesla's current NN. It's not even close, and that's with Mobileye's NN having been finished in 2017, and we're in 2019!
When you add the advances EyeQ5 will bring, it's not even fair anymore.
I'd say FSD will not have the nag, but it could still ask you for confirmation in certain situations it's not 100% sure of, OK, 99.9999% sure of (a toy sketch of that kind of confidence gate is below). FSD with a (constant) nag is meaningless. Tesla just has to bite the bullet, do it, and forget about the legal implications. Everything needs to start somewhere.
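A "confirm only when unsure" policy instead of a constant nag is easy to picture. A minimal sketch, with a made-up threshold and function names (nothing Tesla has published):

```python
# Minimal sketch of "confirm only when unsure" instead of a constant nag.
# Threshold and names are made up for illustration.

CONFIRM_THRESHOLD = 0.999999  # the "99.9999% sure" line from above

def maybe_request_confirmation(maneuver: str, confidence: float, ask_driver) -> bool:
    """Execute autonomously when confident; ask the driver only otherwise."""
    if confidence >= CONFIRM_THRESHOLD:
        return True                             # no nag, just drive
    return ask_driver(f"Confirm: {maneuver}?")  # rare, situation-specific prompt
```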
That's not FSD. FSD is Level 5, start to finish, with no driver, which Elon confirmed would be here in late 2017 or early 2018.