
Seeing the world in autopilot, part deux

Discussion in 'Autonomous Vehicles' started by verygreen, Sep 25, 2018.

  1. verygreen

    verygreen Curious member

    Joined:
    Jan 16, 2017
    Messages:
    2,365
    Location:
    TN
    Greetings from the first international Tesla hacking conference in Paris. It's been a while since our last report, and a lot has happened since then.

    First of all, credit where credit is due, all the awesome visualizations you’ll see below are thanks to @DamianXVI.

    One of the more important developments: by a sheer stroke of luck, the HW2.5 APE we bought off eBay for research turned out to be a fully unlocked developer version.

    Why this matters: due to Tesla's security overhaul of the Autopilot computer since about the end of 2017, it became nearly impossible to maintain any sort of presence there. And ever since I lost my original Model X with the rooted APE, information about the inner workings has been quite sparse.

    A fully unlocked unit changes all of this, since it allows any modifications to be performed, so it's pure gold from a research perspective. But there's more: the developer firmware it came with, while old, also provided some important insights into various data collection and "dashcam" operations.

    You could just plug an SSD into the USB-C port on the side and record footage.

    So we proceeded to gather a bunch of footage and certain metadata from volunteer cars around the world. @DamianXVI then found ways to correlate some of that metadata with real-world meanings and wrote code to paint the internal Autopilot state (the parts we understand) on top of the camera footage (the development firmware the unit came with did not include its own visualizer binary). So keep in mind that our visualizations are not what Tesla devs see from their car footage, and we do not fully understand all the values either (though, as you can see, we have decent visibility into the system now). Since we don't know anybody inside Tesla development, we don't even know what sort of visual output their tools have.
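
    To give a rough idea of what such an overlay step looks like, here is a minimal sketch (not our actual tooling; the detection field names and the polygon format are made-up stand-ins for whatever the decoded metadata really contains):

    ```python
    # Minimal overlay sketch: paint a drivable-space fill and labeled 2D boxes
    # onto one decoded video frame. All field names here are hypothetical.
    import cv2


    def draw_overlay(frame, detections, drivable_polygon):
        """Draw decoded Autopilot state on top of a single camera frame."""
        if drivable_polygon is not None:          # Nx2 int array of image points
            shaded = frame.copy()
            cv2.fillPoly(shaded, [drivable_polygon], color=(0, 200, 0))
            frame = cv2.addWeighted(shaded, 0.35, frame, 0.65, 0)

        for det in detections:
            x, y, w, h = det["bbox"]              # hypothetical 2D box in pixels
            label = f'{det["type"]} {det["confidence"]:.0%}'
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
            cv2.putText(frame, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
        return frame
    ```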

    The footage we present was recorded on firmware 18.34 from the main camera. The green fill at the bottom represents "possible driving space"; lines denote various detected lane and road boundaries (colors represent different types whose actual meaning is unknown for now). Detected objects are enumerated by type and have coordinates in 3D space and depth information (also a 2D bounding box; we have not identified enough data for a 3D one), correlated radar data (if present), and various other properties. If you are somebody in the know about all of the extra values and can give us a hand, shoot me a mail/PM – we have many questions (thanks!).
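
    For reference, here is roughly the shape of one decoded object record as we understand it – a sketch only, with hypothetical field names, since the real structure is only partially understood:

    ```python
    # Hypothetical per-object record matching the fields described above.
    from dataclasses import dataclass
    from typing import Optional, Tuple


    @dataclass
    class DetectedObject:
        obj_id: int                              # tracking ID (changes when the "lock" is lost)
        obj_type: str                            # e.g. "pedestrian", "truck", general "vehicle"
        confidence: float                        # 0..1; meaning assumed, not confirmed
        position_3d: Tuple[float, float, float]  # coordinates in 3D space, meters
        bbox_2d: Tuple[int, int, int, int]       # x, y, width, height in image pixels
        distance_m: float                        # range estimate
        radar: Optional[dict] = None             # correlated radar data, if present
    ```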

    While there are various traces of code for working with "localized maps" and for recognizing traffic control devices, their state, stop lines and so on, none of that seems to be enabled in 18.34. The side cameras also do not appear to be used for much beyond light-level detection.

    A note about colors: the cameras are not full color, so there's some color interpolation just to give you a bit of an idea of how things look, though the colors are most likely quite a bit off. The Autopilot itself does not really care about the colors.
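
    Purely as an illustration of why the colors can be off, here is one naive way to fake color when a sensor gives you mostly luminance plus a partial red channel (an assumption on our side about the camera, not a confirmed description of it):

    ```python
    # Naive pseudo-color sketch: build a BGR image from a full-resolution luma
    # plane and an (already interpolated) red plane. Illustrative only.
    import numpy as np


    def pseudo_color(luma: np.ndarray, red: np.ndarray) -> np.ndarray:
        luma = luma.astype(np.float32)
        red = red.astype(np.float32)
        # Guess the non-red part by removing some of the red contribution
        # from luminance; the 0.4 factor is an arbitrary tuning constant.
        non_red = np.clip(luma - 0.4 * red, 0, 255)
        bgr = np.stack([non_red, non_red, np.clip(red, 0, 255)], axis=-1)
        return bgr.astype(np.uint8)
    ```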

    Periodic picture breakage is due to the racy way we access the camera image buffers; sometimes the buffer changes while we are reading it.
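
    A minimal sketch of how torn frames can at least be detected when reading a shared buffer without locking – sample a sequence counter before and after the copy and drop the frame if it changed mid-read (the counter and buffer interface here are hypothetical, not the actual APE interfaces):

    ```python
    # Detect torn frames from a racy, lock-free buffer read. Hypothetical interface.
    def grab_frame(buffer, retries=3):
        for _ in range(retries):
            seq_before = buffer.sequence_number   # counter bumped by the producer
            frame = bytes(buffer.pixels)          # copy the image data
            if buffer.sequence_number == seq_before:
                return frame                      # buffer stayed stable during the copy
        return None                               # still racy after retries; drop it
    ```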

    I am presenting two kinds of footage today. They are kind of long, but we wanted to show diverse situations, so bear with us.

    Crazy Paris streets:




    Highlights if you don’t want to see all of it:
    01:17 – traffic cones shape drivable space
    01:31 – construction equipment recognized as a truck (shows they have quite a deep library of objects they train against? It's not perfect, though; we saw some common objects go undetected too, notably a pedestrian pushing a cart, not present in this video)
    02:23 – false positive, a container mistaken for a vehicle
    03:31 – a pedestrian in a red(dish?) jacket is not detected at all (note to self: don't wear red jackets in Norway and California, where Teslas are everywhere)
    04:12 – one example of lines showing a right turn lane while there are no road markings for it
    06:52 – another false positive – poster mistaken for a pedestrian
    08:10 – another, more prominent example of a left turn lane shown without actual road markings.
    09:25 – close up cyclist
    11:44 – roller skater
    14:00 – we nearly got into an accident with that car on the left; AP did not warn
    19:48 – 20 pedestrians at once (not that there was a shortage of them before, of course)


    Paris highways:



    Highlights if you don’t want to see all of it:
    3:55 – even on "highways", the gore area is apparently considered drivable? While technically true, it's probably not something that should be attempted.
    4:08 – a gore zone surrounded by bollards correctly shows up as undrivable.
    11:47 – you can see a bit of a hill crest with the path over it (Paris does not appear to be super hilly, so it's hard to demonstrate this in this particular footage)

    Object type is shown as text and also by the box color (to more easily tell object types apart far away): purple for truck, yellow for pedestrian, green for bicycle, blue for motorcycle, and red for the general "vehicle" object type. The percentage value after the type is some sort of confidence, probably confidence that the object is what the software thinks it is? The lane location information and distance seem to come from the vision network and are sometimes wrong. The moving state of the object comes from radar. It should be noted, though, that the distance and relative speed are detected by purely visual means, since they are pretty accurate even for objects without a radar return.
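
    For completeness, the display convention boils down to something like this (a sketch that just mirrors the description above; colors are BGR tuples as OpenCV expects):

    ```python
    # Box color keyed by object type, with confidence shown after the type label.
    TYPE_COLORS = {
        "truck": (255, 0, 255),       # purple
        "pedestrian": (0, 255, 255),  # yellow
        "bicycle": (0, 255, 0),       # green
        "motorcycle": (255, 0, 0),    # blue
        "vehicle": (0, 0, 255),       # red, the general "vehicle" type
    }


    def box_style(obj_type, confidence):
        color = TYPE_COLORS.get(obj_type, (255, 255, 255))
        label = f"{obj_type} {confidence:.0%}"
        return color, label
    ```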

    The orange transparent "ribbon" denotes how the Autopilot thinks it should continue forward from there (or so we think; it's some sort of path planning, apparently. It is given in 3D space, and @DamianXVI tried to overlay it on the 2D image as closely as could currently be done with the limited knowledge we have). This is really advanced; note that on hills it goes up and down, which is why AP2+ is better on hilly roads than AP1.
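
    Overlaying that 3D ribbon on the 2D image comes down to a camera projection. A minimal sketch under a simple pinhole model (the focal lengths, principal point and axis convention are assumptions for illustration, not the calibration @DamianXVI actually uses):

    ```python
    # Project 3D path points (x forward, y left, z up, in meters) into pixel coordinates.
    def project_path(points_3d, fx=1000.0, fy=1000.0, cx=640.0, cy=480.0):
        pixels = []
        for x, y, z in points_3d:
            if x <= 0.1:                  # skip points behind (or at) the camera
                continue
            u = cx - fx * (y / x)         # leftward offset becomes horizontal pixels
            v = cy - fy * (z / x)         # upward offset becomes vertical pixels
            pixels.append((int(u), int(v)))
        return pixels
    ```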

    The lane and direction information is only shown for objects closer than 60 m, so as not to clutter the screen in busy settings.

    Also, while I'm sure this does not look like super deep progress to some (no 3D boxes and all that stuff being a frequent complaint), keep in mind this is the first independent third-party verification of any self-driving system ever (except for comma.ai, perhaps?). Sure, we all saw great PR videos from Waymo, Mobileye and such, but we also saw an FSD video from Tesla in 2016 that turned out to be mostly a PR stunt. The footage does show various advanced features (the path planning; the lines on the road are not just from the markings on the pavement – pay close attention and you'll see various turn lanes detected before the markings appear, though there are false positives too).

    Additionally, we thought others might have ideas about interesting scenarios to test, and we might be able to take some requests. Stopped firetrucks and the like, perhaps?

    For people who are interested in testing something in particular settings on a particular firmware version and are willing to provide a test car (MCU1, preferably with HW2.5; we might be able to make an HW2.5 unit work in an HW2.0 car, but so far that is a theoretical possibility that has not been tried), we can install any necessary firmware version and record test footage like the above – feel free to contact us as well!

    It's too bad Tesla is so secretive about their progress in this area and that we need to resort to these measures to shed at least some light on it. Hopefully this will prompt Tesla to make some official footage available?

    To be continued with further research?
     
    • Informative x 33
    • Love x 31
    • Like x 10
    • Helpful x 2
  2. widodh

    widodh Model S 85 and 100D

    Joined:
    Jan 23, 2011
    Messages:
    6,324
    Location:
    Middelburg / Venlo, NL
    Damn, this is cool! I'll keep an eye out on your research. Awesome work!
     
    • Like x 3
  3. Pale_Rider

    Pale_Rider Member

    Joined:
    Jul 28, 2016
    Messages:
    437
    Location:
    Houston, TX
    Awesome work as always, folks. Like you said, it is a shame they are so secretive; what y'all are able to shed light on really adds some credibility to Tesla's vision of a self-driving future, and the progress they have made in the last year has been pretty remarkable. I think a lot of people would be less skeptical if they provided periodic updates like this... Luckily we have y'all digging into it!
     
    • Like x 3
  4. Peteski

    Peteski Active Member

    Joined:
    Oct 2, 2017
    Messages:
    2,112
    Location:
    UK, Milton Keynes
    Very interesting indeed, many thanks for sharing!
     
    • Like x 1
  5. verygreen

    verygreen Curious member

    Joined:
    Jan 16, 2017
    Messages:
    2,365
    Location:
    TN
    An important note, I guess: this is a human driving the car, and Autopilot is just analyzing the surroundings.
     
    • Informative x 12
    • Helpful x 4
    • Like x 1
  6. BigD0g

    BigD0g Active Member

    Joined:
    Jan 12, 2017
    Messages:
    1,859
    Location:
    Somewhere
    This is amazing work, really great to see the capabilities of the system!
     
    • Like x 3
  7. lunitiks

    lunitiks Cool James & Black Teacher

    Joined:
    Nov 19, 2016
    Messages:
    2,708
    Location:
    Prawn Island, VC
    Awesome-o-rama!

    Hey @verygreen, who managed to misspell "vehicle"? :)
     
    • Funny x 2
    • Like x 1
    • Love x 1
  8. daktari

    daktari Member

    Joined:
    Jan 21, 2017
    Messages:
    661
    Location:
    Norway
    Thanks. It would have been great to synchronize this with my usual test roads, but my car is soon to be sold.

    But all those skipping lanes and boxes that flicker around – are those artifacts from your presentation, or is it the system itself losing its "lock" on the object or the lane?
     
  9. verygreen

    verygreen Curious member

    Joined:
    Jan 16, 2017
    Messages:
    2,365
    Location:
    TN
    "vehicle" is too long so I just shortened it some. The label is internally misspelled to say "vehcile" too.
     
    • Funny x 4
  10. verygreen

    verygreen Curious member

    Joined:
    Jan 16, 2017
    Messages:
    2,365
    Location:
    TN
    The flickering boxes are NOT artifacts. Pay attention to the "ID" number: when a box disappears and reappears, it comes back with a different ID, as another object. Same thing with the flickering lines (and you also see the same thing on the IC) – that's basically how the system sees things: at times, quite a lack of continuity.
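
    If you want to quantify it from the decoded data, it's as simple as comparing object IDs between consecutive frames and counting how many tracks vanish – a quick sketch, with the same hypothetical field names as before:

    ```python
    # Count track drops: IDs present in one frame but missing from the next.
    def count_dropped_tracks(frames):
        dropped = 0
        prev_ids = set()
        for detections in frames:            # each frame: list of decoded objects
            current_ids = {det["id"] for det in detections}
            dropped += len(prev_ids - current_ids)
            prev_ids = current_ids
        return dropped
    ```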
     
    • Informative x 2
  11. rohan3au

    rohan3au Member

    Joined:
    Oct 27, 2017
    Messages:
    170
    Location:
    Newcastle, Australia
    So amazing. Well done! This is seriously interesting.
     
  12. lunitiks

    lunitiks Cool James & Black Teacher

    Joined:
    Nov 19, 2016
    Messages:
    2,708
    Location:
    Prawn Island, VC
    Any idea what the "thick" bounding box means? Is it perhaps the car that your AP/TACC would follow (if on)?

    And: How is the drivable freespace carpet determined?

    Perhaps @DamianXVI is the right one to ask about this, or maybe you know?
     
  13. BigD0g

    BigD0g Active Member

    Joined:
    Jan 12, 2017
    Messages:
    1,859
    Location:
    Somewhere
    Which probably explains why none of this has been released yet; they need that continuity, I suspect. I can't wait for @jimmy_d to see this – maybe he can correlate it back to the NNs he discovered.
     
    • Love x 3
    • Like x 2
  14. verygreen

    verygreen Curious member

    Joined:
    Jan 16, 2017
    Messages:
    2,365
    Location:
    TN
    thick bounding box - "vehicle right in front of us".

    driveable space is from internal variables we were able to isolate. no idea how the code arrives at it, though.
     
    • Informative x 6
  15. bjornb

    bjornb Member

    Joined:
    Feb 12, 2015
    Messages:
    181
    Location:
    Norway
    Wow, great work!
    It is apparent that the car sees and classifies much more than is shown on the IC display today.
    If Tesla wanted to, it seems they could show trucks, motorcycles, bikes, etc. with AP2 today (as AP1 does).
     
    • Informative x 1
    • Like x 1
  16. mitchellh3

    mitchellh3 Member

    Joined:
    Nov 18, 2016
    Messages:
    158
    Location:
    Los Angeles, CA
    This is amazing.

    It would be fun (and probably sad/hilarious) to see what AP2 saw before 2018.10, if that were possible (that's when the big NN updates happened). I expect the quality improved dramatically, but this would add some good validation to that. I'm not sure if going back to an old version is possible, or if you'd even want to do that, but it's just an idea.

    Hopefully you're able to work with v9 when it comes out and see what changes they've made there too!
     
  17. rpm001

    rpm001 Member

    Joined:
    Mar 26, 2017
    Messages:
    262
    Location:
    Toronto
    This is very interesting. Thanks so much for sharing!
     
  18. Cirrus MS100D

    Cirrus MS100D Member

    Joined:
    Jul 6, 2017
    Messages:
    355
    Location:
    Pennsylvania, USA
    Amazing! I can’t imagine how much work you all have put into this, but for dolts like me who are super interested in this stuff, you and those like you are a primary source of my Tesla excitement.

    Thank you!!
     
  19. croman

    croman Active Member

    Joined:
    Nov 21, 2016
    Messages:
    3,713
    Location:
    Chicago, IL
    As always, big thanks to @verygreen and @DamianXVI ! You guys need to get royalties or copyright licensing fees from electrek.
     
    • Like x 5
    • Love x 3
  20. J1mbo

    J1mbo Active Member

    Joined:
    Aug 20, 2013
    Messages:
    1,398
    Location:
    UK
    This is so good, thank you! Shadow mode: confirmed :)

    Very impressed with the classification. If this is the RAW feed, the system seems to be better at classifying things than I am! For example, at 2.22 there is a pedestrian-shaped bollard at the side of the road. My first thought was that it was actually a pedestrian, but the system just removes it from the drivable area.

    The yellow boundary seems a bit off at times; it seems to go through parked vehicles even when the vehicles themselves have been correctly boxed.

    Still watching, but if it's not already in the video, I would be interested to see the car arriving at and going around a roundabout – in particular, what the path prediction looks like.
     
    • Love x 2
