Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

New Tesla Model S Has 2nd Triple Cam For Autopilot & Pedestrian Noise Unit

That's interesting, and cool since my order was confirmed yesterday.

It looks like the cameras are three different types. One looks like a normal-angle camera; the second says narrow vision, which I would expect to be aimed straight ahead to focus on cars in front and maintain distance. The third says it's a fisheye lens, an ultra-wide-angle lens that picks up roughly 180 degrees. A fisheye wouldn't be great at picking up detail, but it could pick up objects moving toward the car from anywhere in the forward region.

I was reading something about the blind-spot detection on the current AP, and the writer was criticizing the short range of the system. It can only detect cars within 5 m and doesn't detect to the side very well. It sounds to me like mounting some detection equipment at the top of the hatch window would help detection to the rear. I thought the car already had ultrasonic sensors to detect cars in the next lane.

Looking again at the schematic, nothing indicates where any of those cameras are pointing; it's quite possible the fisheye camera is pointing out the back for better blind-spot detection.

You need a pair of cameras for real 3D imaging, but with one camera set to a normal field of view and the others much narrower or wider, it would be very difficult to combine them into a manageable 3D image.
 
Interesting. My assumption, which I think a lot of us shared, was that Tesla would keep the current front camera and add front/side looking cameras to it.

The schematic says they are instead replacing that one camera with three - a long range monochrome one, the 'main' camera, and a fisheye.

Since all are going to the AP ECU, clearly the fisheye is being used for self driving, presumably in the role of the side looking cameras - checking for cross traffic.

It's also interesting that there's no mention of what most comments here have said is the biggest limitation of the current system: no way to see behind the car beyond ultrasound range (needed to enable fully autonomous lane changes).
Walter
 
Why this makes sense at this time (doesn't mean it's true):
  • Elon talked about very robust SDC capability in 2018.
  • The Model 3 is expected to have the hardware to support such capability, and it is likely Tesla will want to deliver this in their flagship cars (S/X) before the Model 3.
  • MobilEye CTO mentioned the triple front camera in several technical talks, with 2017 as the launch year.
  • Tesla Autopilot has probably gone as far as it can with the current sensor suite. To advance, they need many more sensors. And Tesla is not known for going slow or being conservative; meaning if Autopilot v1 is done, it's time to expect signs of v2.
  • Tesla introduced autopilot v1 hardware into the car in October 2014 and only enabled the function in October 2015 - a full year later. So if Tesla wants Autopilot v2 running in the S/X in mid 2017 (a few months before people get it in the Model 3), it is very possible it will start putting the hardware in the cars mid 2016.
 

3D imaging can be done with one camera in a driving situation because it's a continuous feed and the camera is moving. The car knows the distance traveled between frames, so it can use successive frames much like a stereoscopic image pair.
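A rough sketch of that idea (all numbers are illustrative, not from the schematic): the distance traveled between two frames acts like a stereo baseline, so the standard triangulation formula Z = f * B / d applies to the pixel shift of a feature between frames. (Strictly, forward motion gives radial rather than lateral disparity, so this is an approximation of what real structure-from-motion does.)

```python
# Hypothetical sketch: treating two frames from one moving camera like a
# stereo pair. The distance driven between frames is the baseline B, and
# a tracked feature's pixel shift between frames is the disparity d.

def depth_from_motion(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: Z = f * B / d.

    focal_px     -- camera focal length in pixels (assumed known)
    baseline_m   -- distance the car traveled between the two frames
    disparity_px -- pixel shift of the same feature between frames
    """
    if disparity_px <= 0:
        raise ValueError("feature must shift between frames")
    return focal_px * baseline_m / disparity_px

# At 30 m/s (~67 mph) and 30 fps the car moves about 1 m per frame.
# A feature that shifts 20 px with an 800 px focal length sits at:
print(depth_from_motion(800, 1.0, 20))  # 40.0 m
```

The catch, raised later in the thread, is that this only works while the car is moving.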
 

Still, wouldn't forward 3D imaging become simpler if you add two cameras at the corners of the car?

My thinking is: cameras are cheap, algorithms are hard. So if a couple of additional cameras can help, they're worth adding for the extra certainty and capability in 3D reconstruction.
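One way to see the "additional certainty" from corner cameras (a hypothetical back-of-envelope calculation, not anything from the schematic): stereo depth error grows roughly as Z^2 / (f * B) per pixel of disparity error, so a corner-to-corner baseline on the order of the car's width beats the small effective baseline you get between consecutive frames at low speed.

```python
# Hypothetical sketch of why a wider stereo baseline adds certainty.
# Standard first-order result: dZ ~ Z^2 * e / (f * B), where e is the
# disparity measurement error in pixels.

def depth_error(depth_m, focal_px, baseline_m, disparity_err_px=1.0):
    """Approximate depth uncertainty for a stereo pair."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Compare a few-cm effective baseline (slow crawl between frames)
# against a ~1.5 m corner-to-corner baseline, for an object at 50 m:
for baseline_m in (0.1, 1.5):
    print(baseline_m, depth_error(50.0, 800, baseline_m))
```

At 50 m the narrow baseline gives tens of meters of uncertainty, the wide one only a couple of meters, which is the kind of gap that matters when the car is barely moving.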
 

I don't think the algorithms would be easier with two cameras than with one -- geometry detection is pretty straightforward and well-known. It's being done in the MobilEye chip, so any efficiency difference or preference would probably come down to MobilEye's implementation. My guess is that while two cameras may not be a cost issue for Tesla, it was a factor for MobilEye in designing their system to be as affordable, or to have as small a footprint and power requirement, as possible for its intended purpose at the time it was designed.

A disadvantage of a one-camera system, though, is that the car needs to be moving to get a true stereoscopic effect (it may be able to do some clever deduction if other objects in the scene are moving, but whether that's possible and how well it would work is beyond my knowledge). This is probably fine for highway autopilot but would likely be limiting in situations that need something like pedestrian placement, particularly when stopped at a light.

It's hard for me to say exactly what the new cameras will and won't help with, especially without knowing whether they are accompanied by a new chipset (isn't the current MobilEye chip limited to two camera inputs?).
 
Have there been any studies proving, with any level of validity, that quieter EVs are more dangerous? This seems like a solution to a theoretical problem that likely doesn't really exist. I'm also not sure how much quieter an EV is; there are numerous ICE cars whose engines you can't hear running.

The primary instance I can think of is someone who is blind, but the blind people I know are quite careful to always use safe crossings where drivers are highly likely to be aware of them. One I talked to said that in the city she can't go by sound at all, because there are so many sounds that any sound from a car simply blends in.
 
I just wonder re. MobilEye's design.

A priori, I would have guessed that distance detection for objects ahead on the road (and from that, speed estimation and collision avoidance, etc.) should be a lot easier stereoscopically, especially with an inter-ocular distance of 1.5 m. With a single camera you can only measure the change in relative size; with cameras at an angle you also get motion across the field of view as you approach.

But maybe I'm wrong.
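For what the "change in relative size" cue can actually give you, here's a hypothetical sketch (illustrative numbers): for a static object, apparent size scales as 1/distance, so two size measurements plus the distance driven between them pin down the range.

```python
# Hypothetical sketch of monocular ranging from relative-size change.
# For a static object: s2/s1 = d1/d2 and d1 = d2 + travel,
# so d2 = travel / (s2/s1 - 1).

def range_from_size_change(s1_px, s2_px, travel_m):
    """Distance to a static object at the second measurement.

    s1_px, s2_px -- apparent width in pixels (s2 > s1 while approaching)
    travel_m     -- distance driven between the two measurements
    """
    ratio = s2_px / s1_px
    if ratio <= 1:
        raise ValueError("object must appear larger in the second frame")
    return travel_m / (ratio - 1)

# Driving 2 m while a car ahead grows from 100 px to 105 px wide
# puts it about 40 m away:
print(range_from_size_change(100, 105, 2.0))
```

Note how small the size change is here (5 px over 2 m of travel), which hints at why a 1.5 m stereo baseline, with its instantaneous disparity, could indeed be the easier measurement.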
 

Depending on how much preprocessing is done on the radar data, sensor fusion may be an easier solution for a lot of the 3d imaging.

Right now, we know the system is throwing away static returns on the radar, but I don't know if that's at the radar module or at the AP level.

If it is at the AP level, then the AP module can compare the radar data with the camera bearing and get a 3d map for objects directly, without stereo camera math (assuming the radar bore and camera bore are calibrated together, and that the radar gets a detectable return off of the objects.)
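A minimal sketch of that fusion idea (hypothetical geometry, assuming the radar and camera boresights are calibrated to a common axis): the radar gives range, the camera pixel gives bearing, and projecting the range along the camera ray places the object in 3D with no stereo math at all.

```python
# Hypothetical sketch of radar/camera fusion: project a radar range
# along the camera ray through the pixel where the object was detected.
# Assumes a simple pinhole camera and co-aligned radar/camera boresights.
import math

def locate(range_m, pixel_x, pixel_y, focal_px):
    """Return (x, y, z) in camera coordinates, z along the boresight.

    pixel_x, pixel_y -- pixel offsets from the image center
    focal_px         -- focal length in pixels
    """
    # Unit ray through the pixel in camera coordinates.
    norm = math.sqrt(pixel_x ** 2 + pixel_y ** 2 + focal_px ** 2)
    ray = (pixel_x / norm, pixel_y / norm, focal_px / norm)
    return tuple(range_m * c for c in ray)

# A radar return at 40 m, seen 80 px right of center (f = 800 px),
# lands roughly 4 m to the right and 39.8 m ahead:
x, y, z = locate(40.0, 80, 0, 800)
print(round(x, 2), round(y, 2), round(z, 2))
```

The hard parts this sketch skips are exactly the ones the post flags: associating the right radar return with the right camera object, and keeping the two sensors calibrated to each other.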
 
Wow, interesting news! So are we assuming this is included in all facelift S cars, or is it being rolled out slowly? The article says "the new Model S is set to have..." So do they all have it, or is it still coming soon?

The article implies it is in all facelift cars. King of Prussia had a facelift S on display as of last week, but I didn't really look at the sensors on it.
 
[Attached image: upload_2016-5-4_9-38-2.png]