What hardware will be needed for autopilot to "see" stopped cars?

The other constraints you didn't mention are adequate resolution, update rate, and sensitivity/dynamic range on the camera (though you kinda hinted at the resolution limit in your follow-up). In theory it's possible with one camera; in practice I think it'll be quite a while before a practical one-camera solution exists.

Well, yes, that was my "simple" answer. Resolution, frame rate, and dynamic range are all important. I believe the EyeQ series maxes out at 36 fps, but I don't know where the resolution tops out at that frame rate. (I don't know its maximum bitrate.) In terms of dynamic range, that's getting cheaper and better every year. It's one of the reasons I prefer they wait longer for AP 2.0 hardware: if the software/regulatory side isn't ready, let the cameras get better in the meantime. For example, 12-14 stops of dynamic range at 4K running at 60 fps should be very reasonably priced in two years. I'm talking GoPro-level sensors at that point, not high-end Alexa or RED sensors. (Of course, two lower-cost camera sensors side by side, one with a neutral density filter, might be a better interim solution for dynamic range.)

So, I don't disagree, but the simple answer is: a single (good enough) camera with the right processing and software can see a stopped car.
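
To make the "right processing" part concrete, here's a rough sketch of the classic trick a single camera can use: if the lead car's image grows from frame to frame, the ratio of its size to its growth rate gives a time-to-contact estimate. Everything here (the pixel widths, the threshold, the detector that would produce the widths) is invented for illustration:

```python
# Rough sketch: estimate time-to-contact from a single camera by tracking how
# fast an object's apparent size grows between frames. The detector that
# produces the pixel widths is assumed, and all numbers are invented.

def time_to_contact(width_prev_px: float, width_curr_px: float, dt: float) -> float:
    """Classic tau estimate: apparent size divided by its rate of growth.

    If the lead car is stopped and we are closing on it, its image width
    grows every frame and tau shrinks toward zero.
    """
    growth_rate = (width_curr_px - width_prev_px) / dt  # px per second
    if growth_rate <= 0:
        return float("inf")  # not closing (or moving away)
    return width_curr_px / growth_rate  # seconds until contact


# Example: a car imaged 80 px wide last frame, 84 px this frame, at 36 fps
tau = time_to_contact(80.0, 84.0, dt=1.0 / 36.0)
if tau < 2.0:  # threshold chosen arbitrarily for the sketch
    print(f"Warn/brake: ~{tau:.2f} s to contact")
else:
    print(f"No action: ~{tau:.2f} s to contact")
```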
 
Could something be attached to a vehicle to make it more clearly "visible" to the sensors? If I were in the broken-down vehicle and had a Tesla bearing down on me with the driver asleep / texting / doing make-up / sitting in the back seat (I've seen all the videos!!), I'd be very happy to buy and install something stuck on the car, or run an "active" device.

I remember when the Recco system first came out (a passive radio-frequency reflector used to detect skiers buried in an avalanche), it was sold built into skier clothing; not everyone wanted to pay $$ to upgrade their kit, so stick-ons were available for ski boots. Could something similar happen to augment ACC / Autopilot detection of the vehicle in front, even if it's stationary?
 
I work on systems used in autonomous vehicles which are more advanced than Tesla's current-generation AP. For those who are saying one camera is sufficient, let me ask you a question: how difficult do you think it is for a computer with 1 camera and nothing else to tell whether the car is heading towards a small boulder, a floating shopping bag, or a newspaper? Do you really want the car to perform emergency braking when a plastic shopping bag floats onto the road? The problem is complex enough with multiple sensors (multiple cameras, radar, lidar) and seems to be best solved with deep learning type systems (those can learn from things such as "hey, the other car hit that object and nothing bad happened" or even, just like humans, can tell based on prior experience whether an object appears to be a light object floating in the wind or a heavy basketball that just rolled onto the road). Who knows, by the way: humans can tell with just 2 cameras (eyes) and 2 microphones (ears), so maybe if deep learning gets advanced enough, that will be sufficient in the future.
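
For what it's worth, here's a toy illustration of the kind of cue a camera-only system could extract before any deep learning gets involved: wind-blown debris tends to jitter and lift between frames, while a heavy object stays ground-anchored. The tracked values and threshold are invented, and a real classifier would be learned, not hand-written:

```python
# Toy heuristic (not a production approach): light wind-blown debris tends to
# show erratic, partly upward motion between frames, while a heavy object on
# the road stays put relative to the ground. A learned classifier would
# subsume this, but it shows the kind of cue a single camera can extract.
from statistics import pstdev

def looks_like_floating_debris(vertical_positions_px, jitter_threshold_px=8.0):
    """vertical_positions_px: tracked bottom-edge y coordinate over recent frames."""
    if len(vertical_positions_px) < 3:
        return False  # not enough history to judge
    return pstdev(vertical_positions_px) > jitter_threshold_px

print(looks_like_floating_debris([310, 298, 321, 305, 330]))  # jittery -> True
print(looks_like_floating_debris([312, 312, 313, 312, 311]))  # stable  -> False
```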
 
Just throwing this out there as a point of discussion: what if all cars sent out signals of what they were doing, i.e. braking, turning, changing lanes, coming to a stop, etc., so that all the cars around them would know and could react accordingly? That's really broad and simplified, but wouldn't that help?

I think the day will come when all vehicles will have transponders broadcasting their location, speed and intended path. Even classic cars will be required to add an inexpensive transponder box.

Even before that day comes, accidents will be avoided by Autopilot and autonomous cars avoiding vehicles with transponders, whether moving or stopped. However, wrecked or completely dead cars, and other large inanimate objects, won't broadcast anything, so another means of detection will still be required.
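
For illustration, a hypothetical transponder payload along those lines (location, speed, intended path) might look something like this; the field names and JSON encoding are invented here, and real V2V standards define their own message formats:

```python
# Hypothetical transponder payload, loosely modeled on what's described above.
# Field names and encoding are invented for illustration only.
import json, time
from dataclasses import dataclass, asdict

@dataclass
class VehicleBeacon:
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
    intent: str          # e.g. "braking", "lane_change_left", "stopped"
    timestamp: float

beacon = VehicleBeacon("TESLA-001", 37.3947, -122.1503, 0.0, 92.5,
                       "stopped", time.time())
payload = json.dumps(asdict(beacon)).encode()   # broadcast this over the radio link
print(payload)
```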

Nonetheless, this is a good idea and undoubtedly will be used in the near future, saving many lives.

GSP
 
Just throwing this out there as a point of discussion: what if all cars sent out signals of what they were doing, i.e. braking, turning, changing lanes, coming to a stop, etc., so that all the cars around them would know and could react accordingly? That's really broad and simplified, but wouldn't that help?

This is a key enabler for Autopilot swarms - where you get a group of mostly autonomous cars driving together at closer-than-human distances, drafting off one another and avoiding the reaction-time-driven waves of stop-and-go traffic.

I'm sure it'll happen eventually, and Tesla has left us an interesting twist - if an adequate protocol can be designed to work with one of the S's current radios (Bluetooth, WiFi, cellular, or the key fob), then a simple firmware update could deliver ~100k swarm-capable cars literally overnight. Interesting times.
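
As a purely illustrative sketch of how little plumbing the broadcast side would need over an existing radio, here's a beacon pushed out as a WiFi UDP broadcast; the port, payload, and framing are all made up, and a real protocol would need authentication and hard latency guarantees:

```python
# Minimal sketch: push a small status beacon out as a UDP broadcast over WiFi.
# Port number, payload contents, and framing are invented for illustration.
import json, socket, time

payload = json.dumps({"vehicle_id": "TESLA-001", "intent": "braking",
                      "speed_mps": 12.0, "timestamp": time.time()}).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(payload, ("255.255.255.255", 47001))   # arbitrary broadcast port
sock.close()
```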

Having said that, I don't see it as an answer to this sort of problem, because even if the stopped car was equipped with such a system, it wouldn't necessarily be operational, and you wouldn't want your safety to depend on it (and, as another poster said, sometimes it isn't a car - having a fallen tree or boulder in the road is just as dangerous, and none of them have transponders.)
 
Having said that, I don't see it as an answer to this sort of problem, because even if the stopped car was equipped with such a system, it wouldn't necessarily be operational, and you wouldn't want your safety to depend on it (and, as another poster said, sometimes it isn't a car - having a fallen tree or boulder in the road is just as dangerous, and none of them have transponders.)

Obviously I'm talking in terms of all cars (eventually) being equipped with a signal system, just as all cars are equipped with headlights, turn signals, etc. And of course it would be in addition to current AP systems, so that other objects, people, and animals could still be detected.
 
The AP does a good job right now of detecting cars, trucks, and motorcycles in my lane and adjacent lanes and displaying this on the dash. I don't see why this information couldn't be used to detect when a car is stopped in front of me.
I can imagine the AP system thinking... "there's something in front of me but it doesn't seem to be moving... radar tells me it's 100 ft away... is it a rock or a shopping bag?... the camera says it's much bigger than that at 100 ft... OMG, the camera shows it's getting bigger and it's still right in front of me... perhaps I'd better issue a warning."
I think this could be a software fix.
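
Transcribing that "thinking" into a toy fusion rule looks roughly like this; every threshold is invented and real AEB logic is far more involved, but the inputs (radar range and closing speed plus camera object size) are things AP already has:

```python
# Toy fusion rule transcribing the "thinking" above. All thresholds are
# invented; real AEB decision logic is far more involved.
def stopped_car_warning(own_speed_mps, radar_range_m, radar_closing_speed_mps,
                        camera_width_px, camera_growth_px_per_s):
    # Radar: the return closes at roughly our own speed -> the target is stationary.
    target_stationary = abs(radar_closing_speed_mps - own_speed_mps) < 1.0
    # Camera: the object is car-sized and its image keeps growing -> it's in our path.
    car_sized_and_growing = camera_width_px > 40 and camera_growth_px_per_s > 0
    # Simple time-headway check.
    time_to_target_s = radar_range_m / max(own_speed_mps, 0.1)
    return target_stationary and car_sized_and_growing and time_to_target_s < 3.0

# Example: doing 30 m/s (~67 mph) with a stopped car 30 m (~100 ft) ahead
print(stopped_car_warning(30.0, 30.0, 30.0, 120, 90.0))  # True -> issue warning
```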
 
Does anyone know where/what processor in the car runs Tesla's AP algorithms? Something must take in radar input, ultrasonic input, and EyeQ3 input and then output steering and speed commands.

I suspect part of the problem with AP 1.0 (beta) is lack of processing speed/resources on whatever compute platform it is running on.
 
humans can tell with just 2 cameras (eyes) and 2 microphones (ears), so maybe if deep learning gets advanced enough, that will be sufficient in the future.

To be fair, humans aren't perfect at it. Amid the hullabaloo about one or two Autopiloted cars crashing, thousands of human-piloted ones crashed too.

***

Wasn't the issue Tesla's disclaimer that Autopilot can't see a stopped car in front of the one being followed? That would be tough for humans as well. But if you can't stop in time to avoid that car, you were following way too closely. Same goes for an autopilot.

Thank you kindly.
 
Does anyone know where/what processor in the car runs Tesla's AP algorithms? Something must take in radar input, ultrasonic input, and EyeQ3 input and then output steering and speed commands.

I suspect part of the problem with AP 1.0 (beta) is lack of processing speed/resources on whatever compute platform it is running on.

If I understood correctly, I think Ingineer said in his retrofit thread that the EyeQ3 behind the mirror is doing that - receiving prefiltered data and issuing commands over CANBus.
 
That's what I suspected. The EyeQ3 allows other programs to run on it. So, again, I suspect that Tesla is running into resource limitations of the EyeQ3.

That's a reasonable thought. Certainly the CANBus is a limitation between the radar and the processor - it means they have to filter the radar return in the radar processor first rather than bringing the raw take up to fuse with the camera (doesn't make fusion impossible - just requires more creativity and changing the programming in the radar too.)
 
Just throwing this out there as a point of discussion: what if all cars sent out signals of what they were doing, i.e. braking, turning, changing lanes, coming to a stop, etc., so that all the cars around them would know and could react accordingly? That's really broad and simplified, but wouldn't that help?

Good idea. Take a look at what happens in aviation and use some of those concepts for context. A transponder not only makes the aircraft easier to see on radar; almost all transponders these days also broadcast altitude. Substitute speed for altitude and you are providing the other moving objects on a highway with information they can use.

The next step forward in onboard collision avoidance is called TCAS (traffic collision avoidance system). Small planes typically do not have this equipment yet, but airliners are required to have it. The TCAS units look at location and altitude information from other planes and watch for possible collisions. If two TCAS-equipped aircraft see a conflict, they talk and decide on the best solution almost instantaneously: one TCAS instrument tells its pilot to climb (or stop descending) and the other tells its pilot to descend (or stop climbing). If the other aircraft is not equipped with TCAS, then your TCAS makes a quick decision on how to change your altitude to avoid a collision and gives you the instructions. A similar type of instrument installed in cars could be very helpful for sorting out merges at highway on-ramps: instead of altitude info you have speed info, and the two computers decide which car falls in before the other, then tell one to speed up a bit (or maintain speed) and the other to slow down a bit.
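
Mapping that TCAS-style resolution advisory onto a highway merge might look roughly like this; the logic and thresholds are entirely hypothetical:

```python
# Sketch of a TCAS-style "resolution advisory" for a highway merge: both cars
# compute when they reach the merge point, and a deterministic rule picks who
# keeps going and who backs off. Thresholds and advisories are invented.
def resolve_merge(car_a, car_b):
    """Each car is a dict with 'id', 'dist_to_merge_m', 'speed_mps'."""
    eta_a = car_a["dist_to_merge_m"] / car_a["speed_mps"]
    eta_b = car_b["dist_to_merge_m"] / car_b["speed_mps"]
    if abs(eta_a - eta_b) > 2.0:          # already comfortably separated
        return {car_a["id"]: "maintain", car_b["id"]: "maintain"}
    first, second = (car_a, car_b) if eta_a < eta_b else (car_b, car_a)
    return {first["id"]: "maintain_or_speed_up", second["id"]: "ease_off"}

print(resolve_merge({"id": "A", "dist_to_merge_m": 200, "speed_mps": 30},
                    {"id": "B", "dist_to_merge_m": 210, "speed_mps": 31}))
# -> {'A': 'maintain_or_speed_up', 'B': 'ease_off'}
```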
 
Yeah, the CANBus is a real bottleneck - I just looked up the specs. 1 Mbps? That's pretty slow! To do proper sensor fusion, you'd want to have raw input from the radar and cameras to come onto one chip/module.
There are actually quite a few CAN buses in the Tesla. According to obrien28, who has posted a very useful Instructable (with more to come):

CAN 2 - 10 Modules
  • Radio Head Unit
  • Door Control
  • Sunroof
CAN 3 - Powertrain - 9 Modules
  • Thermal Controller
  • DC-DC Converter
  • Charger 1 and 2
  • HV BMS
  • Charge Port
CAN 4 - Body Fault Tolerant
  • RCCM (Remote Climate Control Module)
  • PTC (Positive Temperature Coefficient) Air Heater
  • Memory Seat Module
CAN 6 - Chassis - 14 Modules (depending on options)
  • Power Steering
  • Stability Control and Braking
  • Air Suspension
  • Instrument Cluster and LIN Bus
  • Blind Spot and Parking Aid
  • TPMS
  • EPB (electronic parking brake) ECU
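
For anyone curious what listening to one of these buses looks like in practice, here's a minimal sketch using the python-can library on a Linux SocketCAN interface; the channel name is a placeholder, and the car's buses aren't exposed like this without physically tapping the harness:

```python
# Minimal CAN listener using the python-can library on a Linux SocketCAN
# interface. Channel name is a placeholder for whatever adapter you attach.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")
try:
    for _ in range(10):                      # grab a handful of frames
        msg = bus.recv(timeout=1.0)          # returns None on timeout
        if msg is not None:
            print(f"id=0x{msg.arbitration_id:03X} dlc={msg.dlc} data={msg.data.hex()}")
finally:
    bus.shutdown()
```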

 
Yeah, the CANBus is a real bottleneck - I just looked up the specs. 1 Mbps? That's pretty slow! To do proper sensor fusion, you'd want to have raw input from the radar and cameras to come onto one chip/module.

It's worse than that - that bus is also where the commands get passed, so it's safety-critical and you can't risk clogging it. I don't know how much data overall gets passed on it, but I'd be surprised if you can safely put 500 kbps of radar data across. With 20 updates per second, that's ~25 kbits of information in each update, maximum.
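
A quick back-of-the-envelope check of those numbers (assumed figures, not actual specs):

```python
# Back-of-the-envelope check of the numbers above (assumed figures, not specs).
bus_rate_bps = 1_000_000          # classic CAN tops out at 1 Mbit/s
radar_share_bps = 500_000         # a generous half of the bus for radar data
updates_per_second = 20

bits_per_update = radar_share_bps / updates_per_second
print(bits_per_update)            # 25000.0 bits = ~25 kbit = ~3 KB per update

# Framing overhead makes it worse: a classic CAN frame carries only 8 data
# bytes, so ~3 KB per update means on the order of 400 frames every 50 ms.
frames_per_update = (bits_per_update / 8) / 8
print(round(frames_per_update))   # ~391 frames
```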
 
I work on systems used in autonomous vehicles which are more advanced than Tesla's current-generation AP. For those who are saying one camera is sufficient, let me ask you a question: how difficult do you think it is for a computer with 1 camera and nothing else to tell whether the car is heading towards a small boulder, a floating shopping bag, or a newspaper? Do you really want the car to perform emergency braking when a plastic shopping bag floats onto the road? The problem is complex enough with multiple sensors (multiple cameras, radar, lidar) and seems to be best solved with deep learning type systems (those can learn from things such as "hey, the other car hit that object and nothing bad happened" or even, just like humans, can tell based on prior experience whether an object appears to be a light object floating in the wind or a heavy basketball that just rolled onto the road). Who knows, by the way: humans can tell with just 2 cameras (eyes) and 2 microphones (ears), so maybe if deep learning gets advanced enough, that will be sufficient in the future.

Nobody said it wouldn't be difficult. But a single good enough camera, with the right software processing, can build 3D scene information (including the trajectories of bags and balls). The deep learning you mention is needed to categorize items. Even the best radar, ultrasonic, and LIDAR can't (for example) infer mass without deep learning. How do I know it's a real car and not just a cardboard model with photos printed on it? How does it know a tumbleweed is safer to hit than a boulder? That's sensor-independent. It requires a massive database of every object that could possibly be in front of the car.

But going back to the original post: A good single camera with enough resolution and refresh rate can detect a stopped car. Mobileye does AEB with camera only.

Full autonomy with no/limited mistakes will benefit from redundancy (including a variety of cameras/sensors) and deep learning. For example, I'd love to see FLIR for animal, pedestrian, and motor heat detection. Surface temperature could be a great extra variable for deep learning.
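
As a toy illustration of why a thermal channel is attractive: warm bodies separate from a cool background with almost trivial processing. The array below stands in for a calibrated FLIR frame in degrees C, with the values and threshold invented; real pedestrian and animal detection is of course much harder:

```python
# The array stands in for a calibrated FLIR frame in degrees C; a warm body
# separates from a cool road surface with a trivial threshold. The values and
# threshold are invented for illustration only.
import numpy as np

thermal_frame_c = np.array([[12.0, 13.1, 12.5],
                            [12.8, 35.9, 34.7],   # warm blob: person or animal?
                            [12.2, 33.5, 12.9]])

warm_mask = thermal_frame_c > 25.0                # arbitrary threshold
print(warm_mask.sum(), "warm pixels out of", warm_mask.size)
```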