
Blog: Musk Touts "Quantum Leap" in Full Self-Driving Performance

Discussion in 'Autopilot & Autonomous/FSD' started by TMC Staff, Aug 15, 2020.

  1. DanCar

    DanCar Active Member

    Joined:
    Oct 2, 2013
    Messages:
    1,670
    Location:
    SF Bay Area
    Yeah, agree, I like to call it PSD, partial self driving, or ADOCS, automatic driving on city streets.
    Since you are in Gilbert (right next to Chandler), I'm wondering what you think of Waymo. Know anyone who has ridden in a driverless Waymo?
     
  2. tmoz

    tmoz S85D, Prius PiP

    Joined:
    Aug 16, 2015
    Messages:
    853
    Location:
    Gilbert, Arizona
    I do see a handful of them. I probably see more Teslas now than Waymos. I think there is always a human sitting behind the wheel, so I never know how much intervention is needed. I've never ridden in one either. They have not caused any traffic issues in the cases when I've seen them - not much info on my part.
     
    • Like x 2
  3. Bladerskb

    Bladerskb Senior Software Engineer

    Joined:
    Oct 24, 2016
    Messages:
    2,068
    Location:
    Michigan
    #943 Bladerskb, Nov 9, 2020
    Last edited: Nov 9, 2020

    I proved several things:

    1. Tesla is using the same NNs from 2018.
    2. These NNs were deployed to catch up to Mobileye's EyeQ4 from 2017. Example networks include 3D vehicle detection, semantic free space, path planning, road edge detection, etc.
    3. The bird's-eye network and architecture are straightforward and widely used in the industry; this is also confirmed by Andrej himself.
    4. A crowd-sourced REM map renders error-prone (false positives/negatives galore) bird's-eye intersection detection null and void.
    5. The name of the game is near-zero false positives and false negatives, not getting something that works most of the time.
    6. Mobileye has been doing 3D and stitching since 2017 and now has several neural networks that do 360° stitching.
    7. FSD Beta today is practically the result of good conventional C++ control and decision-making algorithms layered on their previous neural networks, the same way their NOA worked. Nothing ground-breaking or futuristic. Definitely none of the stuff you and @mspisars have been spewing.
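    For readers curious what the simplest form of a bird's-eye-view projection involves: under a flat-ground assumption, basic pinhole geometry maps an image pixel to metric ground coordinates. A minimal sketch, with made-up camera parameters; this is a generic illustration, not Tesla's or Mobileye's actual code:

```python
def pixel_to_bev(u, v, fx, fy, cx, cy, cam_height):
    """Project a pixel assumed to lie on flat ground into bird's-eye-view
    coordinates (forward, lateral) in metres.

    Pinhole camera model with the optical axis parallel to the ground:
    x right, y down, z forward. All parameters here are illustrative.
    """
    dy = (v - cy) / fy            # downward component of the viewing ray
    if dy <= 0:
        raise ValueError("pixel is at or above the horizon")
    t = cam_height / dy           # ray scale at which it meets the ground
    forward = t                   # metres ahead of the camera (z = t * 1)
    lateral = t * (u - cx) / fx   # metres to the right of the camera
    return forward, lateral

# Camera 1.5 m above the road, 1000 px focal length, 1280x720 image:
print(pixel_to_bev(u=740, v=460, fx=1000, fy=1000, cx=640, cy=360,
                   cam_height=1.5))   # -> (15.0, 1.5)
```

    Real BEV networks learn this mapping (and handle non-flat ground, occlusion, and multiple cameras), but the geometric core is this homography.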

    Lastly, you need to watch this video, under the category of SENSING. This is a 100% must-watch, and I mean EVERY SINGLE SECOND.

    Especially if you have been pushing people to watch the videos from Andrej. Time to get out of the bubble and see what's going on out there. You need to watch it, as it explains Mobileye's sensing and neural network tech and how they got to this point.

    Mobileye Sensing Status and Road Map (Dr. Gaby Hayon Presentation) 39:06
    Autonomous Driving at Intel
     
    • Disagree x 3
    • Informative x 1
    • Funny x 1
  4. powertoold

    powertoold Active Member

    Joined:
    Oct 10, 2014
    Messages:
    1,901
    Location:
    USA
    Yup, what's going on is Mobileye still doesn't have a single deployed traffic-light feature (one they've apparently had since 2017, per you).
     
    • Like x 2
  5. masterxel

    masterxel Member

    Joined:
    Oct 30, 2020
    Messages:
    34
    Location:
    CA
    [image attachment]

    Do you have any diagrams for how Mobileye compares to this? All I've seen is lists of features supported by EyeQ4, not how it's built. Just because two systems result in similar features doesn't mean they're functionally identical. Will watch the sensing video you linked in a bit.
     
  6. Bladerskb

    Bladerskb Senior Software Engineer

    Joined:
    Oct 24, 2016
    Messages:
    2,068
    Location:
    Michigan
    #946 Bladerskb, Nov 9, 2020
    Last edited: Nov 9, 2020
    So you refuse to watch the vid and be educated for once. Par for the course.
    Continue living in your bubble, a delusional fantasy land where you believe you will have Level 5 in under six months. No different from your predecessors who believed their Model 3s would deliver themselves when deliveries started. Don't let me be your buzzkill.
     
  7. pilotSteve

    pilotSteve Active Member

    Joined:
    Jul 14, 2012
    Messages:
    1,456
    Location:
    Prescott Az
    Yeah, ever since Elon touted FULL self-driving in 2017 (and I bought), I've felt he was unnecessarily using a superlative ("full") when any level of "self-driving" would have been sellable and, quite frankly, amazing.

    But here we are, three-plus years later, and while it's getting close, I still think "full" is an unnecessary, redundant term.
     
    • Like x 2
  8. masterxel

    masterxel Member

    Joined:
    Oct 30, 2020
    Messages:
    34
    Location:
    CA
    The video is great, and I love the level of detail. However, I think that if anything it actually proves Tesla is taking a different approach. Direct link to their pdf: https://newsroom.intel.com/wp-content/uploads/sites/11/2019/11/Mobileye-Investor-sensing-status-presentation.pdf

    Mobileye is very focused on redundancy, and their system interprets results from vision, radar and lidar separately. Each of the four categories of features (Road Geometry, Road Boundaries, Road Users, Road Semantics) is covered by multiple processing engines.

    Unlike Tesla, none of their processing engines (Object Detection DNNs, Lanes detections DNN, Semantic Segmentation engine, Single view Parallax-net elevation map, Multi-view Depth network, Generalized-HPP, Wheels DNN, Road Semantic Networks) depends on a bird's-eye-view map for detecting features. They mention an occupancy grid, but that is just for Road Boundaries. They use video processing, but only for pseudo-lidar as far as I could tell, not for labelling.

    From what we've heard, Tesla rewrote labelling of all features (RG, RB, RU, RS in Mobileye's terms) to be driven by video data. Based on the diagram I posted earlier, they also route all perception through a single BEV net that outputs all feature types (compared to the eight separate approaches listed above). I don't see Mobileye doing either of these things.
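    The redundancy idea described above can be reduced to a very small sketch: two independently produced detection lists are cross-checked, and only agreement counts as high confidence. This is a generic illustration of separate-stream fusion, not Mobileye's actual design; all names are made up:

```python
def cross_check(camera_objs: set, lidar_objs: set):
    """Fuse two independently computed detection sets.

    Objects seen by both streams are confirmed; objects seen by only
    one stream are flagged for caution rather than trusted outright.
    """
    confirmed = camera_objs & lidar_objs   # intersection: both agree
    flagged = camera_objs ^ lidar_objs     # symmetric difference: one-sided
    return confirmed, flagged

confirmed, flagged = cross_check({"car_1", "ped_2", "cyclist_3"},
                                 {"car_1", "ped_2", "truck_4"})
# confirmed -> {"car_1", "ped_2"}; flagged -> {"cyclist_3", "truck_4"}
```

    The contrast with a single end-to-end network is that each stream here can fail independently without silently corrupting the other's output.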
     
    • Like x 3
  9. powertoold

    powertoold Active Member

    Joined:
    Oct 10, 2014
    Messages:
    1,901
    Location:
    USA
    Mobileye is definitely capable of creating a BEV from vision, but bladerskb is ignoring the obvious point: there's a huge difference between "having" a feature and deploying it to consumers.

    Anyone who works on software knows it's easy to code the first 80% of a product (and get it roughly working); the last 20% is much more difficult. Many on this forum say that achieving 99.9%-reliable FSD is probably 10x harder than 99%-reliable FSD. The same applies to my point about Mobileye.

    Mobileye has RB AK CA ZD EW QK BJ OE CD features, all not good enough yet for deployment. Case in point: Mobileye has YET to deploy any traffic light / stop sign feature in ANY car. Not only that, their lane keeping is way worse than Tesla's. If someone has a consumer-created video of Mobileye's L2+ lane keeping on a mountain road, please show it.
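    The 99% versus 99.9% point is easy to make concrete: each extra "nine" of per-mile reliability cuts the failure rate tenfold, so the mean distance between interventions grows tenfold too. A quick illustration with made-up numbers:

```python
def miles_between_failures(reliability_per_mile: float) -> float:
    """Mean miles between failures for a given per-mile success rate."""
    return 1.0 / (1.0 - reliability_per_mile)

for r in (0.99, 0.999, 0.9999):
    print(f"{r:.2%} reliable per mile -> "
          f"{miles_between_failures(r):,.0f} miles between failures")
# 99.00% -> 100 miles, 99.90% -> 1,000 miles, 99.99% -> 10,000 miles
```

    Each step looks like a small change in the headline percentage but demands an order of magnitude fewer failures, which is why the "last 20%" dominates the effort.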
     
    • Like x 3
  10. tmoz

    tmoz S85D, Prius PiP

    Joined:
    Aug 16, 2015
    Messages:
    853
    Location:
    Gilbert, Arizona
    Well, I hope both approaches succeed. One may come out sooner, but I sure hope both succeed as opposed to hoping that both fail, or cheering for one at the demise of the other. Of course, I like Tesla cars but don't have one with recent hardware, so I'm cheering for both of them as someday I will have to replace my S85D, but that day is still a solid 5 yrs away.
     
    • Like x 4
  11. heltok

    heltok Active Member

    Joined:
    Aug 12, 2014
    Messages:
    1,135
    Location:
    Sweden
    Thanks for sharing! Very fun to follow. I once supervised a master's thesis doing lidar localization not too different from what Mobileye is doing with pseudo-lidar.

    Anyway, I think it must be pretty frustrating for Mobileye to hear, once a year, that Tesla is changing its stack away from what Mobileye has implemented. So many things in the paper just scream "feature engineering", and George Hotz would laugh at what they are doing. So many of the problems they are trying to address disappear when switching to 4D; like Elon says, the best design is to remove parts. Mainly, the way I see it, deep learning in 2015-2020 was mostly about scaling up the dataset rather than being clever with algorithms and feature engineering. Mobileye is now spending time labelling a dataset and doing feature engineering for a system that will pretty soon be replaced by a 4D stack, similar to Tesla's, that is more efficient at labelling. Then, in a few years, when Mobileye has switched to a 4D labelling system, Karpathy and his team will announce that they are now doing end2end with transformers, GANs and meta-learning, and Mobileye will again have to scrap what they are doing to catch up...
     
    • Love x 3
  12. diplomat33

    diplomat33 Well-Known Member

    Joined:
    Aug 3, 2017
    Messages:
    6,832
    Location:
    Terre Haute, IN USA
    I'm pretty sure Mobileye already has 4D. Can you show me where they don't?

    How can Mobileye do this drive on camera-only without 4D? I don't think it is possible.



    @Bladerskb Do you know?
     
    • Disagree x 2
    • Like x 1
  13. Bladerskb

    Bladerskb Senior Software Engineer

    Joined:
    Oct 24, 2016
    Messages:
    2,068
    Location:
    Michigan
    The only difference between Tesla and Mobileye right now is that Tesla is using a BEV network on the outputs of its perception networks, while Mobileye instead uses a crowd-sourced HD map. Everything else in the perception stack Tesla copied from Mobileye. In driving policy, Mobileye uses reinforcement learning, while Tesla still mostly uses traditional algorithms.

    Tesla's BEV network, which is industry standard, does not detect features; it takes the outputs of their other NNs.

    The occupancy grid is a redundant NN engine that tells them where they can and cannot drive. It also lets them deal with occlusions from static/dynamic objects, actors or road structure: for example, when they are at an intersection trying to make a turn and need to inch forward, or move slightly to the side, to see beyond the car ahead.
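    As a toy illustration of what an occupancy grid is (my own sketch, not Mobileye's or anyone's implementation): each cell of a top-down grid is free, occupied, or unknown, and cells shadowed by an obstacle are marked unknown so the planner knows it cannot see there.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, 2

# 10 x 10 grid of road cells; the ego vehicle looks "up" from row 9.
grid = np.full((10, 10), FREE, dtype=np.int8)

grid[5, 4] = OCCUPIED   # e.g. a stopped car in column 4
grid[:5, 4] = UNKNOWN   # cells behind it (rows 0-4) are occluded

# The planner may only drive through cells known to be free, which is
# why inching forward (shrinking the UNKNOWN shadow) can be necessary.
drivable = grid == FREE
print(int(drivable.sum()))   # -> 94 of 100 cells
```

    The occlusion handling described above corresponds to the UNKNOWN cells: moving the ego vehicle changes which cells fall in an obstacle's shadow.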


    You clearly don't know what you are talking about. Tesla played catch-up with the industry standard and finally started doing what the industry had already been doing with labeling. Leave it to Elon to hype it up and misrepresent it as though it's new, and to Tesla fans to lap it up.

    "4D labeling" is already industry standard and widely used. For example, Cruise details it here:

     
    • Like x 2
    • Funny x 1
  14. Bladerskb

    Bladerskb Senior Software Engineer

    Joined:
    Oct 24, 2016
    Messages:
    2,068
    Location:
    Michigan

    You, just like most Tesla fans, are so clueless it's ridiculous. Yet you push your lack of knowledge as fact, when in reality Tesla played catch-up to the industry standard and only now started doing what the industry had already been doing with labeling. Elon comes out, spews absolute nonsense and misrepresents things left and right. But leave it to Tesla fans to be the only ones who believe that something already industry standard and used everywhere is new, unique to Tesla, a "game changer", a "quantum leap", blah blah blah. Just like you people believed Tesla was the only one using neural networks. Crazy.

    How naïve does a group of people have to be to not even research or look anything up, and to just lap it up? Seriously? How does that work? This is flat-earther level.

    Like I said above and in the past, "4D labeling" is already industry standard and widely used. For example, Cruise details it here:
    16:10
     
  15. heltok

    heltok Active Member

    Joined:
    Aug 12, 2014
    Messages:
    1,135
    Location:
    Sweden
    In the pdf you can clearly see that they are doing 2D (2.5D) labelling. I would guess that the output of the neural network is in 2D and is later visualized in 3D. FWIW, I think Tesla is mainly getting output in 2D as well; 4D is mainly for the labelling. Maybe they are doing some pseudo-lidar stuff in 3D; I'm not really sure where and how the depth-from-video is being used.
     
  16. diplomat33

    diplomat33 Well-Known Member

    Joined:
    Aug 3, 2017
    Messages:
    6,832
    Location:
    Terre Haute, IN USA
    What pdf?
     
  17. heltok

    heltok Active Member

    Joined:
    Aug 12, 2014
    Messages:
    1,135
    Location:
    Sweden
  18. diplomat33

    diplomat33 Well-Known Member

    Joined:
    Aug 3, 2017
    Messages:
    6,832
    Location:
    Terre Haute, IN USA
  19. masterxel

    masterxel Member

    Joined:
    Oct 30, 2020
    Messages:
    34
    Location:
    CA
    I appreciate the video and information, but there's no reason for insults; I'm just trying to have a conversation and learn more. You say I'm wrong about Mobileye, yet you posted a video from Cruise. They only briefly mention labelling multiple frames, at 16:45, and it doesn't seem that relevant. If you can explain further, please go ahead.
     
    • Like x 1
  20. heltok

    heltok Active Member

    Joined:
    Aug 12, 2014
    Messages:
    1,135
    Location:
    Sweden
    It is also what Tesla was doing with their old 2.5D labelling. 4D means building a 4D (space plus time) point cloud of the entire video from all cameras and labelling all frames at once, not labelling each camera in 2D and stitching the results together (in 2D).
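    The "label the whole clip at once" idea can be sketched in a few lines. This is my own toy version, with flat 2-D translations standing in for the full per-camera poses a real pipeline would use: per-frame points are moved into one shared world frame, where a single label applies to every frame that observed the point.

```python
# Two frames observe the same object while the ego vehicle moves 5 m
# forward. Poses are plain 2-D translations for brevity; real systems
# track full camera poses over time (hence "4D": space plus time).
frames = {
    0: {"ego_xy": (0.0, 0.0), "points": [(10.0, 2.0)]},
    1: {"ego_xy": (5.0, 0.0), "points": [(5.0, 2.0)]},
}

def to_world(ego_xy, p):
    """Shift a point from the ego frame into the shared world frame."""
    return (ego_xy[0] + p[0], ego_xy[1] + p[1])

# Aggregate every observation into one cloud, remembering its frame id
# so a label drawn once in world space can be propagated back per frame.
cloud = [(t, to_world(f["ego_xy"], p))
         for t, f in frames.items() for p in f["points"]]

world_points = {xy for _, xy in cloud}
print(world_points)   # -> {(10.0, 2.0)}: both frames saw the same point
```

    Labelling the single world point once, then projecting it back through each frame's pose, is what saves the per-frame, per-camera labelling effort.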
     
    • Like x 1
