
Mobileye CES presentation


DanCar

At the 6-minute mark he says Level 3/4/5 is coming to regular passenger cars in 2025.
At the 5-minute mark he says they will charge $5K or less for it.
At 6:45 he discusses Level 2+: it uses Mobileye's version of HD maps, which others call low-res maps. Conditional self-driving: the driver is responsible, but not always required to grab the steering wheel.
 

Dude. I just posted this video in the other Mobileye thread right below this one.
 
At 6:45 he discusses Level 2+: it uses Mobileye's version of HD maps, which others call low-res maps. Conditional self-driving: the driver is responsible, but not always required to grab the steering wheel.
Sounds like GM Super Cruise (which uses maps and does not require hands on the wheel), but working in more places.

EDIT: Yeah, they consider Cadillac Super Cruise Level 2+.
 
Currently watching the talk. The most interesting part so far is the section around 20 minutes in on perceptual redundancy within the computer vision system (i.e., the camera-based perception subsystem). Slides:

[two slide images on perceptual redundancy]
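The talk doesn't share code, but the voting idea behind that redundancy is easy to sketch. Here's a toy illustration in Python (my own sketch, not Mobileye's design; the channel names, object IDs, and two-vote threshold are all assumptions): several independently built vision channels each propose detections, and an object survives only if enough channels agree.

```python
# Toy sketch of perceptual redundancy via voting (illustrative only,
# not Mobileye's actual code). Several independently developed vision
# channels each propose detections; an object is kept only if enough
# channels agree. Channel names and the threshold are assumptions.

from collections import Counter

def fuse_detections(channel_outputs, min_votes=2):
    """channel_outputs: list of sets of object IDs, one set per channel."""
    votes = Counter()
    for detections in channel_outputs:
        votes.update(detections)
    # Keep objects that at least `min_votes` independent channels report.
    return {obj for obj, n in votes.items() if n >= min_votes}

# Example: three hypothetical camera-only channels looking at one scene.
appearance = {"car_1", "car_2", "pedestrian_1"}
geometry   = {"car_1", "pedestrian_1"}
wheels     = {"car_1", "car_2"}

print(fuse_detections([appearance, geometry, wheels]))
# Each of car_1, car_2, pedestrian_1 is backed by >= 2 channels.
```

The point of keeping the channels independent is that they're unlikely to share failure modes, so requiring agreement drives down false positives without needing a second sensor modality.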
 
Overall, the stuff in the talk on computer vision is fascinating and the 20-minute demo video is very cool. Impressive that the whole system is running on 8 cameras and an EyeQ5 chip.

EyeQ5 does 24 TOPS. That's 1/3 of the 72 TOPS each of the two chips in Tesla's FSD Computer does (144 TOPS combined, though the second chip runs a redundant copy of the software). Tesla should have enough compute to run software as good as what Mobileye demoed, or better.
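For a quick back-of-envelope check on those numbers (same TOPS figures as above; "effective" assumes the second FSD chip only mirrors the first for redundancy):

```python
# Back-of-envelope compute comparison using the figures quoted above.
EYEQ5_TOPS = 24
FSD_CHIP_TOPS = 72            # per chip; HW3 has two chips

effective = FSD_CHIP_TOPS     # second chip runs a redundant copy
combined = 2 * FSD_CHIP_TOPS  # if both chips did independent work

print(f"Effective headroom vs EyeQ5: {effective / EYEQ5_TOPS:.0f}x")  # 3x
print(f"Combined headroom vs EyeQ5: {combined / EYEQ5_TOPS:.0f}x")    # 6x
```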

If Tesla's sensors or compute turn out to be inadequate, I think Tesla always has the option of going to Plan B. This would entail:
  • Retrofit 100 or so HW3/HW4 Teslas with high-grade lidar.
  • Put a few Nvidia Pegasuses (320 TOPS each) in the trunk. Start testing the cars around Palo Alto.
  • Start creating hand-annotated, lidar-based HD maps.
  • Start developing lidar perception software and incorporate lidar into the sensor fusion software; a minimal fusion sketch follows below. (Optional: acquire a startup that's been working on this software for a long time.)
With Plan B, everything except lidar perception and lidar-based HD maps would still benefit from Tesla's large-scale fleet learning.
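To be concrete about what that last step would involve, here's a deliberately simplified late-fusion sketch. This is illustrative only, not Tesla's (or anyone's) actual stack; the data structures, the 1° bearing window, and the 10% agreement tolerance are all assumptions:

```python
# Hypothetical late-fusion sketch: camera detections carry a vision-
# estimated range; lidar returns confirm or correct that range.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    bearing_deg: float   # direction to target from camera
    range_m: float       # range estimated from vision alone

def lidar_range_at(bearing_deg, lidar_scan, window_deg=1.0):
    """Median lidar range within a small bearing window, or None."""
    hits = [r for b, r in lidar_scan if abs(b - bearing_deg) <= window_deg]
    if not hits:
        return None
    hits.sort()
    return hits[len(hits) // 2]

def fuse(detections, lidar_scan):
    fused = []
    for det in detections:
        lr = lidar_range_at(det.bearing_deg, lidar_scan)
        if lr is not None and abs(lr - det.range_m) / lr < 0.10:
            # Agreement: trust the (usually more accurate) lidar range.
            fused.append(Detection(det.label, det.bearing_deg, lr))
        else:
            fused.append(det)  # fall back to vision-only estimate
    return fused

# Example: one vision detection, a sparse lidar scan of (bearing, range).
scan = [(9.5, 41.8), (10.1, 42.2), (10.4, 42.0)]
print(fuse([Detection("car", 10.0, 44.0)], scan))
```

The design choice here is late fusion: vision and lidar each produce their own estimates and get reconciled per object, which makes it much easier to bolt lidar onto an existing camera pipeline than fusing raw sensor data would be. That's what makes a retrofit plausible.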

There is really no reason Tesla can't pursue Plan B, and it would still have a fleet data advantage if it chose to do so.

Plan B could even be carried out (at least initially) in some secretive testing location like GoMentum Station or a private testing area owned by Tesla à la Waymo's Castle.

If Plan B doesn't work, there is no harm done by trying it. If Plan B does work, it gives Tesla a path to a viable robotaxi business.

Customers may be pissed that they were told their HW3 Teslas would become robotaxis, but if Tesla can scale Plan B robotaxis to a real business in multiple cities, then I strongly suspect it will have the funds necessary to negotiate a satisfactory arrangement with Tesla owners. Tesla could do car buybacks, offer free retrofits contingent on the car being operated as a robotaxi, or simply give each unhappy FSD purchaser a lump sum of cash. Or a choice among the three.

I don't think the future tech roadmaps of different companies are as binary as they first appear on the surface. GM could come to Jesus and start doing large-scale fleet learning as well, thereby turning Cruise into its “Plan B”. I think that's GM's best possible move.

For now, I think Tesla's best move is just to keep pursuing Plan A and not think about Plan B until they have a better reason to. But Plan B is always in Tesla's back pocket.

Really excited to see the HW3-generation computer vision NNs start running live for Navigate on Autopilot and Summon (assuming they aren't already). Hopefully the result is something even halfway as good as Mobileye's demo (demos are easy).
 
Mobileye just posted this little YouTube video to explain Roadbook™, which is how Mobileye is using crowdsourcing to build its HD maps.


Essentially, Roadbook collects images from the cameras on the existing fleet of L1-L2 cars on the roads today and uses those images to construct a very precise HD map of each area.
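The statistical intuition is simple: each passing car contributes a noisy observation of the same landmark, and averaging many observations shrinks the error roughly as 1/√N. A toy demonstration (my own illustration, not Mobileye's actual pipeline; the coordinates and 5 m noise figure are made up):

```python
# Toy sketch of crowdsourced landmark localization: each passing car
# reports a noisy position for the same landmark (sign, pole, lane
# edge); averaging thousands of reports drives the error down.
import random
import statistics

random.seed(0)

TRUE_POS = (37.4419, -122.1430)   # hypothetical landmark (lat, lon)
GPS_NOISE_M = 5.0                 # assumed per-car position error

def one_drive_report(true_pos, noise_deg=GPS_NOISE_M / 111_000):
    """One car's noisy observation (1 degree of latitude ~ 111 km)."""
    return (true_pos[0] + random.gauss(0, noise_deg),
            true_pos[1] + random.gauss(0, noise_deg))

for n_drives in (1, 100, 10_000):
    reports = [one_drive_report(TRUE_POS) for _ in range(n_drives)]
    est = (statistics.mean(p[0] for p in reports),
           statistics.mean(p[1] for p in reports))
    err_m = ((est[0] - TRUE_POS[0])**2
             + (est[1] - TRUE_POS[1])**2) ** 0.5 * 111_000
    print(f"{n_drives:>6} drives -> position error ~{err_m:.2f} m")
```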