What hardware will be needed for autopilot to "see" stopped cars?

Discussion in 'Model S' started by calisnow, May 28, 2016.

  1. calisnow

    calisnow Active Member

    Joined:
    Oct 11, 2014
    Messages:
    1,103
    Location:
    Los Angeles
    In several threads it's been mentioned that the current long-range radar and monocular camera are not sufficient for Autopilot to detect cars in front of you that are completely stopped (unless those cars were moving when the Model S first spotted them) and successfully distinguish them from the background.

    What do we need to "see" stopped cars? Stereo camera? Or is it possible to accomplish this in software with the existing hardware?
     
  2. Saghost

    Saghost Active Member

    Joined:
    Oct 9, 2013
    Messages:
    2,933
    Location:
    Delaware
    The current hardware is in theory capable of seeing stopped cars. I'm not certain if the current computing architecture is.

    The obvious tool for finding cars is the radar. However, to avoid reacting to all of the random clutter - joints in the road, guard rails, etc. - it uses pulse-Doppler processing to disregard stationary objects. This chops the data set down to an easily manageable size - but it also drops any stopped cars out of the set.

    The data you need is in the basic return, but it has to be extracted. In principle what you need the radar to do is look at everything that's near the car's future path and hand back those returns, whether they are moving or not.

    To make that happen, the computer deciding which returns matter has to know the car's future path - which presumably means it needs to integrate the lane lines from the camera feed or the processed steering corridor that came from those lines.

    Right now I believe the processor at the radar is sorting the returns - you'd either need to send a lot more data from there back to another processing location, or you'd need to pass the routing data down to the radar processor to use as a filter.
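
    To make that concrete, here's a minimal sketch of the corridor filter being described - Python, with an invented RadarReturn format (bearing_deg, range_m, closing_mps are illustrative assumptions, not the car's actual radar interface):

        import math

        class RadarReturn:
            """Hypothetical radar return; field names are invented for
            illustration, not the actual interface of the radar unit."""
            def __init__(self, bearing_deg, range_m, closing_mps):
                self.bearing_deg = bearing_deg  # azimuth vs. the car's centerline
                self.range_m = range_m          # distance to the reflector
                self.closing_mps = closing_mps  # Doppler closing speed

        def in_corridor(ret, path_points, half_width_m=2.0):
            # Convert the polar return to Cartesian (x forward, y left).
            x = ret.range_m * math.cos(math.radians(ret.bearing_deg))
            y = ret.range_m * math.sin(math.radians(ret.bearing_deg))
            # Keep the return if it lies near any point of the predicted
            # path - nothing here looks at closing speed, so stopped cars
            # stay in the data set.
            return min(math.hypot(x - px, y - py)
                       for px, py in path_points) < half_width_m

        def relevant_returns(returns, path_points):
            """path_points: (x, y) samples of the car's future path, derived
            from the camera's lane lines and handed down to the radar."""
            return [r for r in returns if in_corridor(r, path_points)]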

    From the AP retrofit thread we know that the lane lines (and everything else camera related) are processed by the EyeQ system right behind the mirror, so you'd need to change that firmware to send more data - either the image data or the expected path.

    There are also camera-based emergency braking routines (like the one on 2013+ Volts) - AP is already doing object recognition against the camera for a variety of purposes, including driving the automatic high beams.

    I don't know if adding an AEB system to the camera side is feasible with the current hardware or not. In theory, you could have the camera processor say "this looks like a car - what is the distance and velocity of the object at 355 degrees azimuth?" and have the radar hand back a range and velocity vector - classic sensor fusion.
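
    A hedged sketch of that handshake, again with invented formats (the real EyeQ and radar interfaces aren't public) - the radar answers the camera's "what's at this bearing?" question:

        def radar_lookup(returns, azimuth_deg, tolerance_deg=2.0):
            """returns: iterable of (bearing_deg, range_m, closing_mps)
            tuples - an invented format. Gives back (range_m, closing_mps)
            for the nearest return within tolerance of the requested
            azimuth, or None if the radar sees nothing there."""
            def ang_diff(a, b):
                # Treat 355 degrees and -5 degrees as the same bearing.
                return abs((a - b + 180.0) % 360.0 - 180.0)

            hits = [r for r in returns
                    if ang_diff(r[0], azimuth_deg) < tolerance_deg]
            if not hits:
                return None
            bearing, rng, closing = min(hits, key=lambda r: r[1])
            return rng, closing

        # Camera side: "this looks like a car at 355 degrees azimuth."
        # radar_lookup(current_returns, 355.0) -> (range, closing speed)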
     
    • Informative x 9
    • Like x 2
  3. brkaus

    brkaus Member

    Joined:
    Jul 8, 2014
    Messages:
    935
    Can't top the answer above, and I agree with it - though I believe it's a simplified analysis, because the path isn't known.

    One might be able to narrow the radar's field of view, but it would still have trouble with a sweeping turn that has a car parked on the side of the road.
     
    • Like x 2
  4. bhzmark

    bhzmark Active Member

    Joined:
    Jul 21, 2013
    Messages:
    1,052
    Exactly right. Only with full AP using Autosteer can it calculate the path within a lane. Plain TACC doesn't know whether the stopped car dead ahead is in its path or not, because the car isn't steering. It shouldn't come to a screeching halt on a curve when the car stopped dead ahead is actually just parked on the shoulder.
     
    • Like x 1
  5. calisnow

    calisnow Active Member

    Joined:
    Oct 11, 2014
    Messages:
    1,103
    Location:
    Los Angeles
    Interesting. This is probably a stupid question on my part, but what is the advantage of fusing radar and camera for this calculation, versus using only the camera (assuming clear weather)? Couldn't the camera determine the distance and velocity of the object at 355 degrees azimuth all on its own, simply by comparing visual information across successive image frames?
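
    For what it's worth, a single camera can get a time-to-contact from frame-to-frame scale change without knowing distance at all, but an absolute range needs an assumed real-world size - a toy sketch, all numbers invented:

        def time_to_contact(width_px_prev, width_px_now, dt_s):
            # tau = s / (ds/dt): time to contact from apparent size growth
            # alone. Needs no knowledge of the object's true size or
            # distance - but it yields a time, not a range.
            growth_rate = (width_px_now - width_px_prev) / dt_s
            if growth_rate <= 0:
                return float("inf")  # not closing on the object
            return width_px_now / growth_rate

        def range_from_assumed_width(width_px, assumed_width_m, focal_px):
            # Pinhole-camera range - only as good as the width assumption.
            return assumed_width_m * focal_px / width_px

        # A car's image growing from 40 px to 44 px over 0.1 s: tau = 1.1 s
        print(time_to_contact(40, 44, 0.1))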
     
  6. kort677

    kort677 Active Member

    Joined:
    Sep 17, 2015
    Messages:
    1,857
    Location:
    florida.
    Do you own a Tesla equipped with TACC and AP? Mine sees stopped cars and slows and stops as necessary.
     
  7. calisnow

    calisnow Active Member

    Joined:
    Oct 11, 2014
    Messages:
    1,103
    Location:
    Los Angeles
    Yep, as my signature says - 2016 70D. According to multiple recent threads on AEB, a couple of highly publicized incidents, and apparently the owner's manual itself, your Tesla does not see all stopped cars if they were not moving in the first place. Read the first reply to my post for Saghost's detailed explanation of the challenges.
     
  8. garygid

    garygid Member

    Joined:
    Aug 11, 2014
    Messages:
    612
    Location:
    Laguna Hills, Orange County, CA
    TACC must be tracking your lane ahead to slow for the vehicles in your lane, even on a curve, rather than just slowing for any object straight ahead.
     
  9. kort677

    kort677 Active Member

    Joined:
    Sep 17, 2015
    Messages:
    1,857
    Location:
    florida.
    Repeat after me: IT'S ONLY A BETA. Then repeat: YOU NEED TO ALWAYS BE READY TO ASSUME CONTROL.
    The point is that while AP is a great system, it isn't perfect, and it isn't an autonomous system.
    Glitches can and will appear. If this story isn't FUD, it should be passed along to Tesla and made part of the data sets they are collecting about the performance of the system.
     
    • Dislike x 7
  10. calisnow

    calisnow Active Member

    Joined:
    Oct 11, 2014
    Messages:
    1,103
    Location:
    Los Angeles
    #10 calisnow, May 28, 2016
    Last edited: May 28, 2016
    This thread is a discussion of the underlying technology and specifically what is needed for the cars to reliably detect stopped cars which were not initially moving - a capability they do not currently have.

    This thread is not a discussion of whether or not AP is in beta or one needs to be in control. But you're free to stand on the sidelines shouting about beta status if you enjoy doing so.
     
    • Like x 14
  11. mspohr

    mspohr Active Member

    Joined:
    Jul 27, 2014
    Messages:
    1,787
    Location:
    California
    The camera can see if the car is in your lane (the AP uses this info).
    Should be a software fix.
     
  12. kort677

    kort677 Active Member

    Joined:
    Sep 17, 2015
    Messages:
    1,857
    Location:
    florida.
    You are talking about problems caused by exceeding the capabilities currently in the car. Sorry if reminding you that the system is only a beta doesn't fit YOUR definition of the discussion.
     
    • Dislike x 7
  13. Krugerrand

    Krugerrand Active Member

    Joined:
    Jul 13, 2012
    Messages:
    4,253
    Location:
    California
    Just throwing this out there as a point of discussion: what if all cars sent out signals of what they were doing - i.e., braking, turning, changing lanes, coming to a stop, etc. - so that all the cars around them would know and could react accordingly? That's really broad and simplified, but wouldn't that help?
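
    For discussion's sake, a toy version of such a broadcast - the fields are invented, loosely inspired by the DSRC "basic safety message" concept, not any real standard:

        from dataclasses import dataclass
        import json
        import time

        @dataclass
        class V2VStatus:
            """Toy vehicle-to-vehicle status broadcast; every field here
            is illustrative only."""
            vehicle_id: str
            lat: float
            lon: float
            speed_mps: float
            heading_deg: float
            braking: bool
            turn_signal: str  # "left", "right", or "none"

            def encode(self) -> bytes:
                payload = dict(self.__dict__, timestamp=time.time())
                return json.dumps(payload).encode()

        # Each car would broadcast something like this several times a
        # second; surrounding cars could react to a hard-braking or fully
        # stopped car before their own sensors resolve it.
        msg = V2VStatus("abc123", 34.05, -118.24, 0.0, 90.0, True, "none").encode()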
     
  14. chillaban

    chillaban Member

    Joined:
    May 5, 2016
    Messages:
    577
    Location:
    San Jose, CA

    Maybe someday in the future, but right now I don't know of any camera system, even stereoscopic ones, that can offer the same high-bandwidth distance and speed sensing that radar provides (hundreds of updates a second, accurate down to virtually the millimeter and 0.1 mph, as long as it can tell what's a car and what's not).

    I think for the foreseeable future, fusing cameras with LIDAR or radar will be necessary. And that's actually another tangential point... LIDAR can potentially form better 3D images of stopped objects to determine their size and shape, and hence whether or not they are a threat to the car. Except LIDAR has its downsides too - there are costly moving parts in the sophisticated turrets that projects like the Google self-driving car use, and the cheaper scaled-down versions have more trouble coping with sunlight, nonreflective objects, other interfering LIDAR sources, etc.


    Note that I've seen the Model S's forward camera detect a stopped Honda Odyssey from 10+ car lengths away, where it would've easily stopped in time. I think in this case we are probably referring to that Switzerland TACC accident, where the stopped vehicle was an oddly painted, boxy Ford Transit Connect-like van... I can't help but suspect that if the vehicle had been more normally shaped, the accident might have turned out differently. Which is another disadvantage of camera-based distance sensing: resolving arbitrary weird objects to their bounding box is still a fundamentally difficult computer-vision problem.
     
  15. Bruin1996

    Bruin1996 Member

    Joined:
    Mar 3, 2016
    Messages:
    130
    Location:
    Southern California
    I was driving my two-month-old S 90D on a freeway interchange with stopped traffic while on Autopilot. I had to manually brake to avoid hitting the truck at the end of the stopped line of vehicles. The truck turned red on my Autopilot display, so I assumed that if I let Autopilot keep control, it might hit the truck. With how long it takes to get repairs on Teslas, I was not going to take that chance.
     
  16. ElectricTundra

    Joined:
    Feb 5, 2015
    Messages:
    531
    Location:
    Tundra
    Even on perfectly straight roads, with me staying in one lane, there appear to be two current issues: acquisition and reaction.

    Mine often will not indicate that it sees a car stopped in front of it until after I would have begun to slow down. Then it seems to assume that the car in front is traveling at some speed greater than zero, and so it decides not to slow until it is close to whatever following distance I have set.

    I assume the radar itself has high enough resolution that it is providing data that would allow the software to determine there is a car farther out, so is the problem that the software is not yet making that determination? And second, that it is not calculating the closing rate fast enough to react sooner?

    This seems somewhat similar to autofocus lenses that let you switch between full range and a more limited range. The limited range significantly narrows down the possibilities and greatly speeds up the calculations needed to attain focus; full range causes 'hunting' as the lens works through a much greater set of possibilities.
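
    To put rough numbers on the reaction issue: a quick kinematics sketch (the deceleration and margin values are assumed, not Tesla's actual control logic) of how much earlier braking must start for a truly stopped car than for one assumed to be merely slower:

        def braking_start_distance(ego_mps, lead_mps,
                                   decel_mps2=5.0, margin_m=10.0):
            # Distance at which braking must begin to match the lead car's
            # speed at constant deceleration: v_closing^2 / (2a) + margin.
            closing = ego_mps - lead_mps
            if closing <= 0:
                return 0.0
            return closing ** 2 / (2.0 * decel_mps2) + margin_m

        ego = 29.0                                         # ~65 mph
        print(braking_start_distance(ego, lead_mps=0.0))   # stopped: ~94 m
        print(braking_start_distance(ego, lead_mps=20.0))  # slower:  ~18 m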
     
  17. Saghost

    Saghost Active Member

    Joined:
    Oct 9, 2013
    Messages:
    2,933
    Location:
    Delaware
    How far away are we talking about? I believe most automotive radars reach out to either 100 m/330 ft or 200 m/660 ft - so it won't see anything beyond that.

    Part of the problem is the radar doesn't understand "car." All it knows is that something on a particular bearing and moving at a particular speed is reflecting radar energy at some intensity.

    That's why the easy solution is to drop any return that appears to be moving within a couple percent of the car's current speed - filtering out all of the road signs and bridges and guard rails and rocks - and also stopped cars.

    To reliably identify stopped cars, you need more information - either something else to say "there's a car on this bearing, tell me about it" or "our car will be driving along this curved corridor, tell me about anything within this combination of bearings and ranges" (which will still need some help when it comes to bridges and overhead signs, since AFAIK all automotive radar is two-dimensional).

    That's where the sensor fusion comes in - either using image recognition to identify the cars, or at least feeding the corridor back to the radar.

    AFAIK, closing rate (and bearing and range) for all returns is calculated ~20 times per second by the radar unit - that's how it decides what's relevant and what isn't.
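
    A toy version of that easy-but-lossy rejection filter, showing exactly how a stopped car falls out of the data set (the tuple format and tolerance are invented):

        def moving_targets_only(returns, ego_speed_mps, tolerance=0.03):
            """Classic clutter rejection. returns: iterable of
            (bearing_deg, range_m, closing_mps) tuples."""
            kept = []
            for bearing, rng, closing in returns:
                # A stationary object closes at exactly our own ground
                # speed, so this test throws out signs, bridges, guard
                # rails, rocks... and stopped cars.
                if abs(closing - ego_speed_mps) < tolerance * ego_speed_mps:
                    continue
                kept.append((bearing, rng, closing))
            return kept

        # At 30 m/s, a stopped car ahead closes at ~30 m/s and gets
        # filtered out right alongside every overhead sign.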
     
  18. mspohr

    mspohr Active Member

    Joined:
    Jul 27, 2014
    Messages:
    1,787
    Location:
    California
    I think this is a good description of what is happening.
    I would think that the camera could provide additional information to help identify a car in your lane.
     
  19. MarkS22

    MarkS22 Member

    Joined:
    Apr 6, 2015
    Messages:
    309
    Location:
    Morris County, NJ
    The simple answer is that one camera is all that's needed, with appropriate software and processing power. Radar and other sensors would simply add redundancy or simplify the software/processing.

    Now, can Autopilot 1.0's relatively low-resolution camera with EyeQ3 do it reliably? Probably not. I don't think it has the resolution (or FOV) to classify all stopped cars as such. This is why a three-camera cluster with better cameras and processing (multiple EyeQ3s, or an EyeQ4 with or without DrivePX), plus radar fusion, is expected up front on AP 2.0.

    The fact is, a human with one eye can tell there's a stopped car, using a variety of cues - perspective, size, shadows, and more. Software and hardware can do that as well.

    Can a human with two eyes and other sensors (like the inner ear and hearing) do it more easily and reliably in all situations? Probably. But a single computer with enough processing power, knowing its forward and lateral speed, can recreate a 3D scene quite accurately. In fact, it can be done even with unknown motion in 3D space in many situations, as is done with 3D camera-matching software in visual effects.
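
    A bare-bones illustration of that idea - depth from one forward-moving camera, everything idealized (pinhole model, straight-line motion, perfectly measured pixel positions):

        def depth_from_forward_motion(x1_px, x2_px, baseline_m):
            """Depth of a static point seen by one camera at two positions,
            where the camera moved straight ahead by baseline_m between
            frames (known from wheel speed). For a point off the optical
            axis: x1 = f*X/Z, x2 = f*X/(Z - b) => Z = b*x2 / (x2 - x1)."""
            if x2_px == x1_px:
                raise ValueError("no parallax: point is on the optical axis")
            return baseline_m * x2_px / (x2_px - x1_px)

        # A feature at 40.0 px from image center drifting to 40.8 px while
        # the car covers 1 m implies a depth of ~51 m. Same geometry as
        # stereo, with time standing in for the second eye - which is why
        # the stopped car is the hard case: it all hinges on sub-pixel
        # accuracy at highway ranges.
        print(depth_from_forward_motion(40.0, 40.8, 1.0))  # ~51.0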
     
    • Like x 1
  20. Saghost

    Saghost Active Member

    Joined:
    Oct 9, 2013
    Messages:
    2,933
    Location:
    Delaware
    The other constraints you didn't mention are adequate resolution, update rate, and sensitivity/dynamic range on the camera (though you kind of hinted at the resolution limit in your follow-up). In theory it's possible with one camera; in practice, I think it will be quite a while before a practical one-camera solution exists.
     
