Autonomy Investor Day - April 22 at 2pm ET

One thing I haven't seen mentioned was Elon's answer to how the car will deal with the blind spots or lack of coverage at certain angles. He talked about the car turning a little to bring the cameras into view, and if there's a car barrelling down at them it'll back up? Ha

That alone is a scary thought if that's the best we can do with sensors (I know it isn't).
 

If you look at the diagram for AP2 sensors, there are no blind spots that I can tell:

[Image: tesla-second-gen-autopilot-sensors-suite.png (Tesla's second-generation Autopilot sensor coverage diagram)]
 
I would consider the sections directly to each side as blind spots. Ultrasonics only.

But where are the B-Pillar cameras on the chart?

Nope, those are actually the B-pillar cameras. They are labeled as "forward looking side cameras"; there is a little line going from that label to the side arcs. The ultrasonics are the tiny yellow arcs all around the car. They are tiny because they only reach about 8 m out.
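
For reference, here are the coverage figures from Tesla's published HW2 diagram collected in one place (a quick sketch; the ranges are Tesla's advertised nominal numbers as I read them off the chart, not measured performance):

```python
# Published HW2 ("Autopilot 2.0") sensor suite, per Tesla's diagram.
# Ranges are the advertised maximum detection distances; treat them
# as nominal figures, not guaranteed performance.
HW2_SENSORS = {
    "narrow forward camera":         {"count": 1,  "max_range_m": 250},
    "main forward camera":           {"count": 1,  "max_range_m": 150},
    "wide forward camera":           {"count": 1,  "max_range_m": 60},
    "forward-looking side cameras":  {"count": 2,  "max_range_m": 80},   # the B-pillar cameras
    "rearward-looking side cameras": {"count": 2,  "max_range_m": 100},  # the fender repeaters
    "rear view camera":              {"count": 1,  "max_range_m": 50},
    "forward radar":                 {"count": 1,  "max_range_m": 160},
    "ultrasonic sensors":            {"count": 12, "max_range_m": 8},
}

for name, spec in HW2_SENSORS.items():
    print(f"{spec['count']:>2}x {name:<30} out to {spec['max_range_m']} m")
```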
 
If you look at the diagram for AP2 sensors, there are no blind spots that I can tell:

There are blind spots at bumper level in front, so a curb or a small child could be missed by the cameras, but you are right that there are no blind spots for spotting vehicles. However, a stationary B-pillar camera is easily blocked and there is no redundancy: all it takes is an unfortunate pole, tree or building hiding an approaching vehicle from its view. There are no side radars to cover this, no secondary long-range sensors of any kind, and the camera cannot move without moving the car.

Honestly, when you really look at the diagram, the HW2 coverage is well thought out. Tesla obviously put a lot of thought into it. The coverage is more than adequate for L4, I would say.

Well, it is an imitation of Mobileye's 2016 EyeQ4 8+4 camera setup; Tesla just left out three of the parking cameras (hence the blind spot) and the roof camera. Then again, Mobileye recommends pairing that 12-camera setup with redundant 360-degree radar and 360-degree lidar, and Tesla has very little in terms of that...

Tesla's plan to reach Level 5 with this small a sensor suite has always been quite daring, but soon we'll see!
 
There are blind spots at bumper level in front, so a curb or a small child could be missed by the cameras, but you are right that there are no blind spots for spotting vehicles. However, a stationary B-pillar camera is easily blocked and there is no redundancy: all it takes is an unfortunate pole, tree or building hiding an approaching vehicle from its view. There are no side radars to cover this, no secondary long-range sensors of any kind, and the camera cannot move without moving the car.

The 360-degree ultrasonics would detect the curb or a child, though. I know my ultrasonics go off all the time when I get too close to a curb, a pole, or those concrete bumpers they have in parking lots.

Well, it is an imitation of Mobileye's 2016 EyeQ4 8+4 camera setup; Tesla just left out three of the parking cameras (hence the blind spot) and the roof camera. Then again, Mobileye recommends pairing that 12-camera setup with redundant 360-degree radar and 360-degree lidar, and Tesla has very little in terms of that...

Tesla's plan to reach Level 5 with this small a sensor suite has always been quite daring, but soon we'll see!

It is bold for sure. Obviously, Tesla went for the smallest amount of hardware that could still pass muster. So they kept the essentials, like the front radar, ultrasonics and 12 cameras, but ditched lidar and 360-degree radar to keep costs down.
 
The 360-degree ultrasonics would detect the curb or a child, though. I know my ultrasonics go off all the time when I get too close to a curb, a pole, or those concrete bumpers they have in parking lots.

They might detect a child, but they will not detect a curb. Ultrasonics are also very prone to missing narrow poles, bicycles lying on the ground and objects like that, due to the nature of the technology. Ultrasonics will see walls and other cars, of course, as long as the speed is low.

It is bold for sure. Obviously, Tesla went for the smallest amount of hardware that could still pass muster. So they kept the essentials, like the front radar, ultrasonics and 12 cameras, but ditched lidar and 360-degree radar to keep costs down.

I'm pretty sure Tesla might even have left out the forward radar had they not needed to emulate AP1 on AP2 for a year or three first. So yes, I agree they have at all times gone for the minimum, the path of least resistance, the easiest way, on this and many other things like mapping, lidar, what have you.
 
I'm pretty skeptical that the side cameras will be able to really handle things like oncoming cross traffic. They have no method of depth or speed perception even theoretically.

Actually, I think they do. During Autonomy Investor Day, Tesla explained how they can use all 8 cameras to create a 3D map of the surroundings that includes depth and speed perception.
 
Not sure if it was ever discussed. I know for a fact that the radar mounted in front can detect speed, etc. What about cars approaching from behind? Not sure if the rear or side cameras can measure the speed of cars coming up at the tail end, which is what would allow a safe lane change. Ultrasonics are for peripheral detection, I guess, and only reach about 10 m.
 
I'm pretty skeptical that the side cameras will be able to really handle things like oncoming cross traffic. They have no method of depth or speed perception even theoretically.
There is some research on estimating the motion and velocity of objects from monocular video (basically, you exploit how an object's position and size change from frame to frame). However, AFAIK the accuracy is not very good. Also, the relatively low resolution of the cameras (compared to the human eye) may limit the range.

Assuming the max range in the diagram above is correct and accounts for inaccuracies in the motion estimation (which is doubtful, given that they hadn't actually implemented it back then), the 80 m for the side cameras may be insufficient. Say you want to turn onto an expressway or rural freeway from a stop sign. If the cross traffic is doing 65 mph, the vehicles are moving at about 29 m/s, so a vehicle just out of the camera's range would reach your car in less than 3 seconds, which is probably not enough time to safely turn or cross the lane.
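
To make the arithmetic explicit (a back-of-the-envelope sketch; the 80 m range and 65 mph speed are the assumptions from above):

```python
# Rough time budget for an unprotected turn:
# how long until a car just beyond the side camera's range reaches you?
RANGE_M = 80.0     # forward-looking side camera range, per the diagram
SPEED_MPH = 65.0   # assumed cross-traffic speed

speed_ms = SPEED_MPH * 0.44704          # mph -> m/s (~29 m/s)
time_to_arrival = RANGE_M / speed_ms    # seconds until the unseen car arrives

print(f"{speed_ms:.1f} m/s -> {time_to_arrival:.1f} s to clear the lane")
# ~2.8 s: a very thin margin to commit to and complete the turn.
```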
 
Actually, I think they do. During Autonomy Investor Day, Tesla explained how they can use all 8 cameras to create a 3D map of the surroundings that includes depth and speed perception.

How well does that work when you're sitting at a stop sign waiting for a break in the cross traffic so that you can pull out? Now how well does that work when a big ass SUV pulls up next to you as they try to make a left turn? Hell, even in my little slice of suburbia there are trees and shrubs and parked cars on every corner that make it a bitch to see oncoming traffic.

It's hard to buy into Elon's idea of "just pull into traffic and if anything looks bad just slam it into reverse". Maybe this is one of the fender bender scenarios that he feels are acceptable?
 
Actually, I think they do. During Autonomy Investor Day, Tesla explained how they can use all 8 cameras to create a 3D map of the surroundings that includes depth and speed perception.
That capability could also be used to map curbs and other stationary, low-lying objects that the front cameras cannot see up close, because the car will have approached from farther away, where it could see them. Curbs don't spring up out of the ground, after all. Even if the car were parked in front of a curb and turned off, it could remember the previously mapped topography around it and use that to navigate around obstacles when powered back on.

People cannot see the ground right in front of (or around most of) their cars anyhow. People driving cars hit curbs and other stationary, as well as moving, objects/debris all the time, and somehow we accept that and move on. The standard does not have to be perfection (at least not yet). Note, I am not suggesting that it would be OK for an autonomous vehicle to hit curbs, etc. all the time.
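
To illustrate the idea (purely hypothetical; nothing here reflects Tesla's actual software, and every name and number is made up), remembering a mapped curb across a power cycle could be as simple as:

```python
import json
from pathlib import Path

# Hypothetical sketch: remember low obstacles (curbs, wheel stops) seen
# on approach so they can be avoided after the cameras lose sight of them.
CACHE = Path("static_obstacles.json")

def save_obstacles(obstacles):
    """Persist mapped obstacle positions (meters, in a fixed local frame)."""
    CACHE.write_text(json.dumps(obstacles))

def load_obstacles():
    """Restore the map on power-up; empty if the car never parked here."""
    return json.loads(CACHE.read_text()) if CACHE.exists() else []

# While driving up: the forward cameras see the curb from a distance and map it.
save_obstacles([{"kind": "curb", "x": 2.5, "y": 0.0, "height_m": 0.15}])

# After parking and powering back on: the curb now sits inside the camera
# blind zone, but the cached map still knows it is 2.5 m ahead of the nose.
for obs in load_obstacles():
    print(f"cached {obs['kind']} {obs['x']} m ahead, steer around it")
```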

The proof of the pudding is in the eating, as they say, so we will (soon?) see, as Tesla releases new FSD features, what works well and what doesn't. Undoubtedly, there will be tricky edge cases that aren't handled well at first. Which those are, and whether they are solvable with the current hardware, remains to be seen.

I do have a concern with one relatively common scenario: quickly making a right-angle turn to merge into oncoming, orthogonally flowing traffic (i.e., making a right/left turn onto a busy, multi-lane thoroughfare where traffic is going, say, 45+ mph). This will make for some pucker-inducing scenarios where it may be very hard to tell if the car will do the right thing, and I don't know if people will be able to take over in any reasonable way if it doesn't. They will really need to get this right.

I'm kind of hoping for something obvious on the display indicating what the car is going to do: highlighting what space in the traffic flow the car will try to fit into (plot a path line or show a shadow-car avatar in the new slot*), maybe a countdown (in seconds) of when it will start to go or a sound indicating "here we go" (Wilhelm scream?), and possibly an indication of how fast it might accelerate (by the color intensity of the car's intended path line).

*I may be wrong, but if you slow down the latest FSD demo video, it appears to show a shadow car in the new lane, whenever it's about to change lanes. Check at about 0:17 or 0:48, for example. On further review, it might just be a video compression artifact.
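
For what it's worth, the "which slot will it fit into" logic could be sketched as a simple gap-acceptance check (entirely illustrative; the 4-second clearing time and everything else here are my assumptions, not anything Tesla has shown):

```python
# Illustrative gap-acceptance check for an unprotected turn onto a busy road.
CLEARING_TIME_S = 4.0   # assumed time the ego car needs to complete the turn

def first_acceptable_gap(arrival_times_s):
    """Given sorted arrival times of oncoming cars (seconds away), return
    the index of the first inter-car gap long enough to turn into, or
    None if no visible gap works and the car should keep waiting."""
    # Gap 0 is "before the first car": usable if it arrives after we clear.
    if not arrival_times_s or arrival_times_s[0] > CLEARING_TIME_S:
        return 0
    for i in range(len(arrival_times_s) - 1):
        if arrival_times_s[i + 1] - arrival_times_s[i] > CLEARING_TIME_S:
            return i + 1
    return None  # nothing safe in view; wait for traffic to thin out

print(first_acceptable_gap([2.0, 3.5, 9.0]))  # -> 2: go after the second car
```

Highlighting that returned slot on the display (shadow car, path line, countdown) would go a long way toward making the car's intent legible.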
 
Actually, I think they do. During Autonomy Investor Day, Tesla explained how they can use all 8 cameras to create a 3D map of the surroundings that includes depth and speed perception.

Karpathy said in the presentation that a NN can be trained to get depth info from even a single image; even one image contains a lot of depth cues. You can look out the window with one eye covered, without moving your head, and still get an idea of the distance of nearby and faraway trees from perspective and other experience, for example. He even showed a vision-based depth image that is better (resolution-wise) than what you can get from lidar.
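
Anyone can try the single-image depth idea with a public model like MiDaS (this is not Tesla's network, just an off-the-shelf demonstration of the same principle; "road.jpg" is any dashcam-style frame you supply):

```python
# Monocular depth from a single frame using the public MiDaS model
# (github.com/intel-isl/MiDaS): one camera image carries enough cues
# (perspective, texture, known object sizes) to recover dense depth.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("road.jpg"), cv2.COLOR_BGR2RGB)  # your own frame
with torch.no_grad():
    depth = midas(transform(img))  # per-pixel relative (inverse) depth map

print(depth.shape)  # dense depth at near-camera resolution, no lidar involved
```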

How well does that work when you're sitting at a stop sign waiting for a break in the cross traffic so that you can pull out? Now how well does that work when a big ass SUV pulls up next to you as they try to make a left turn? Hell, even in my little slice of suburbia there are trees and shrubs and parked cars on every corner that make it a bitch to see oncoming traffic.

It's hard to buy into Elon's idea of "just pull into traffic and if anything looks bad just slam it into reverse". Maybe this is one of the fender bender scenarios that he feels are acceptable?

If you want to ask the tough questions and get real answers, this could give you some clues.

Deep Visual-Semantic Alignments for Generating Image Descriptions
 
How well does that work when you're sitting at a stop sign waiting for a break in the cross traffic so that you can pull out? Now how well does that work when a big ass SUV pulls up next to you as they try to make a left turn? Hell, even in my little slice of suburbia there are trees and shrubs and parked cars on every corner that make it a bitch to see oncoming traffic.

It's hard to buy into Elon's idea of "just pull into traffic and if anything looks bad just slam it into reverse". Maybe this is one of the fender bender scenarios that he feels are acceptable?

Yes. This exact scenario is probably the worst I can think of too, especially because it can involve objects travelling at speed.

Another is the rear cross-traffic scenario when backing out of a parking spot. The side marker cameras are mounted very low, so cars parked next to you block their view, and while the rear fisheye is wide, it is not all-seeing, nor is its image quality the best anyway.

And of course, parking maneuvers without a nose camera covering bumper level are an interesting predicament, though at much lower speeds than the first scenario.