
Tesla Autopilot HW3

LiDAR is very reliable at detecting stationary obstacles in the planned path at all speeds, and as an active sensor it works even against a low, blinding sun, which was reported as a possible factor in Walter Huang's death.

Thus it very usefully covers a part of the operational spectrum in which Tesla's Doppler radar and vision system are currently both liable to fail catastrophically, hence greatly increasing safety in those instances where it is most sorely needed.

Can HW3 and clever programming overcome this sensor gap in a relevant time-frame before competitors render the question moot, say around 2023?

I personally doubt it, but maybe HW4 will do the trick if they start designing it in 2020.




I also sent an email three days ago requesting to be hooked up to EAP; no response as yet.

Interesting paper on determining distance to objects from a single camera source of unknown type.
https://arxiv.org/pdf/1904.04998.pdf
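
For anyone skimming: the hard part the paper tackles is that the camera's intrinsics are unknown. As a toy illustration (not the paper's method, and with the function name and numbers made up), the basic pinhole relation that most monocular range estimates lean on only works once you know the focal length and the object's real size:

```python
# Toy pinhole-camera range estimate: distance falls out of apparent size
# only when focal length (in pixels) and the object's real size are known.
def distance_from_height(focal_px: float, real_height_m: float, pixel_height: float) -> float:
    return focal_px * real_height_m / pixel_height

# Hypothetical example: a 1.5 m tall car rear spanning 60 px, seen through a
# camera with an assumed 1000 px focal length, works out to roughly 25 m away.
print(distance_from_height(focal_px=1000.0, real_height_m=1.5, pixel_height=60.0))  # 25.0
```

With a camera of unknown type, that focal length is exactly the term you don't have, which is what makes the approach in the paper interesting.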
 
Interesting paper on determining distance to objects from a single camera source of unknown type.
https://arxiv.org/pdf/1904.04998.pdf

Sure, but the paper likely doesn't explain how Tesla should handle situations where the camera (and, in L2, the driver) is blinded by a low sun ahead.

From my own dashcam I have seen this produce a pretty uniform orange glow painted widely over the road and sky simultaneously. I must dig for a better photo without intervening trees, but here's the effect I mean:
[dashcam screenshot attachment]


When this happens at highway speed in:
L2, the driver should of course intervene.
L3, the required response would be to slow down until radar or ultrasonics can read the situation, while calling on the human to assist; otherwise the car could effectively be flying completely blind.
L4, there is no driver upon whom to fall back, so the AV can be reduced to a dangerous sudden crawl in the middle of the motorway, precisely at a point where any human driver following behind is also likely to be blinded.
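
To put the same three cases in pseudo-code form (purely illustrative, not anything Tesla ships; the confidence threshold and action names are invented):

```python
from enum import Enum

class Autonomy(Enum):
    L2 = 2
    L3 = 3
    L4 = 4

def blinded_camera_response(level: Autonomy, camera_confidence: float) -> list[str]:
    """Illustrative fallback policy when forward vision confidence collapses."""
    if camera_confidence > 0.5:  # arbitrary placeholder threshold
        return ["continue"]
    if level is Autonomy.L2:
        return ["warn driver", "driver intervenes"]
    if level is Autonomy.L3:
        return ["slow down", "lean on radar/ultrasonics", "request driver takeover"]
    # L4: no human fallback, so the car must degrade gracefully on its own
    return ["slow down", "hazard lights", "plan a minimal-risk stop off the carriageway"]

print(blinded_camera_response(Autonomy.L3, camera_confidence=0.1))
```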
 
Sure, but the paper likely doesn't explain how Tesla should handle situations where the camera (and, in L2, the driver) is blinded by a low sun ahead.
Don't compare your run-of-the-mill 8-bit camera that then overcompresses everything into some relatively low-bitrate H.264 stream with the 12-14-bit raw data Tesla uses. Tesla cams are not blinded by the sun and can easily see a traffic signal with the sun as background where I cannot.
 
Don't compare your run-of-the-mill 8-bit camera that then overcompresses everything into some relatively low-bitrate H.264 stream with the 12-14-bit raw data Tesla uses. Tesla cams are not blinded by the sun and can easily see a traffic signal with the sun as background where I cannot.

Sorry if unclear, but that snapshot is from the TeslaCam on my car, as stored by the dashcam feature, so hopefully not a run-of-the-mill 8-bit camera.

Good to hear that lights etc. can be filtered out of a background sun. Still, it would be interesting to get the raw Tesla feed of these situations where human eyesight fails, to see how it compares to what the dashcam stores.

Do you think the dashcam encoding may improve with HW3 having plenty of spare computational capacity (within the bandwidth limit of the USB 2.0 port it writes out to)?
 
Sorry if unclear, but that snapshot is from the TeslaCam on my car, as stored by the dashcam feature, so hopefully not a run-of-the-mill 8-bit camera.

@verygreen still has a point: the dashcam file is compressed and dynamic range is the first to go.

That said I am shocked — shocked I tell you — that @verygreen sold his soul to Tesla’s paid bounty program instead of spilling us the latest beans. Elon Musk was right: everything can be incentivized. :)
 
Sorry if unclear, but that snapshot is from the TeslaCam on my car, as stored by the dashcam feature, so hopefully not a run-of-the-mill 8-bit camera.

Good to hear that lights etc. can be filtered out of a background sun. Still, it would be interesting to get the raw Tesla feed of these situations where human eyesight fails, to see how it compares to what the dashcam stores.

Well, I'm sure it isn't compressed with a video codec, so the difference would be huge. Assuming they used the same camera as HW2.0 did, but with different color filters, the chips will do up to 14-bit native, or up to 20-bit HDR output at either 45 fps at full resolution or 60 fps at 720p (the difference being the limitation of bus bandwidth, I think).

The dashcam content, by contrast, is encoded with H.264 baseline profile, which has only 8-bit depth.


If they're using the non-HDR mode, the difference between 8-bit and 14-bit means that each value in the compressed video represents one of 64 (2^6) possible values in the input signal. That's a huge difference — similar to the difference between shooting JPEG on a DSLR and shooting RAW. The HDR adds that much extra headroom/footroom again on top of that, so each value in that compressed video represents potentially one of 4096 (2^12) possible values in the input signal. That's just mind-blowing.

Mind you, I have no idea how many of those extra bits actually contain useful data and how many are pure noise. :)
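
The arithmetic above in a couple of lines, for anyone who wants to sanity-check it:

```python
# How many distinct sensor levels collapse into one 8-bit value after encoding.
def levels_per_code(sensor_bits: int, encoded_bits: int = 8) -> int:
    return 2 ** (sensor_bits - encoded_bits)

print(levels_per_code(14))  # 64   (14-bit native vs 8-bit H.264 baseline)
print(levels_per_code(20))  # 4096 (20-bit HDR vs 8-bit H.264 baseline)
```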


Do you think the dashcam encoding may improve with HW3 having plenty of spare computational capacity (within the bandwidth limit of the USB 2.0 port it writes out to)?

I think they're downscaling now, so moving to full res would make a big difference in terms of AutoPilot's ability to detect things at a distance, and I'd expect the dashcam resolution to increase as well, if they do so. But I doubt they'll use a higher-quality encoding. They're probably doing the encoding using a dedicated hardware encoder on the SoC (either on the MCU or the AP computer), and those sorts of parts tend to be designed for limited-quality output, because it usually doesn't matter.
 
@verygreen still has a point: the dashcam file is compressed and dynamic range is the first to go.

That said I am shocked — shocked I tell you — that @verygreen sold his soul to Tesla’s paid bounty program instead of spilling us the latest beans. Elon Musk was right: everything can be incentivized. :)

Linking to this post for continuity, but responding to the group as a whole.

1. I linked to the paper because it seemed relevant to distance sensing (a common theme here), not because it solved all camera concerns.

2. Since we are on the topic, I'd like to publicly say thank you to @verygreen for not letting the cat(s) out of the bag before the April 22nd event.
Thank you!
 
If you want that developer to get the report, the next time you're in a situation like this hit the voice recognition, say "bug report" and then describe the problem verbally in a sentence or two.

The car will take screenshots and pass them, your voice memo, and I think possibly some log information up to the mothership and out to the software team automatically.

Bug report logs a timestamp but nothing happens to this automatically. There is no red flashing light that goes off in the engineering bay and suddenly all the developers drop what they're doing and investigate your issue. In fact it goes into a black hole unless you also call Tesla and convince somebody to take a look -- which I know from experience has gotten to be almost impossible now because their support lines are completely slammed. They don't want your bug reports, and they don't want your trouble tickets. If by some miracle you do get somebody to pay attention (probably by going through your service center rather than telephone support) then hitting that bug report button and making a note of the approximate time will allow them to find it in the logs.

But -- this is the key thing -- they are not going to care to do this unless the service center believes there is a particular hardware problem with your car. This is not how they fix software bugs for the most part. Believe me, they have more than enough examples of their software doing stupid stuff; they don't need your examples anymore.
 
@verygreen still has a point: the dashcam file is compressed and dynamic range is the first to go.

I agree completely, but can that flat orange glow painted all over the road/sky in TeslaCam really be just a compression artefact? That's why I want to see the raw feed for comparison.


I think they're downscaling now, so moving to full res would make a big difference in terms of AutoPilot's ability to detect things at a distance, and I'd expect the dashcam resolution to increase as well, if they do so. But I doubt they'll use a higher-quality encoding. They're probably doing the encoding using a dedicated hardware encoder on the SoC (either on the MCU or the AP computer), and those sorts of parts tend to be designed for limited-quality output, because it usually doesn't matter.

I'm hoping there's a dedicated H.265 encoder in the new APE3, which should permit maximum dashcam quality on the same USB 2.0 port.
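
A rough back-of-the-envelope, with every figure assumed rather than measured, suggests the USB 2.0 link leaves plenty of headroom even for a fatter H.265 stream:

```python
# All figures here are assumptions for illustration, not measured values.
usb2_practical_mbps = 280      # usable throughput, out of a nominal 480 Mbit/s
recorded_streams = 3           # assumed number of camera streams written to the stick
bitrate_per_stream_mbps = 10   # assumed generous per-stream H.265 bitrate

total_mbps = recorded_streams * bitrate_per_stream_mbps
print(f"~{total_mbps} Mbit/s needed vs ~{usb2_practical_mbps} Mbit/s available")
```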
 
I agree completely, but can that flat orange glow painted all over the road/sky in TeslaCam really be just a compression artefact? That's why I want to see the raw feed for comparison.

Probably not. Indeed, its color seems to match the yellowish hue from AP2.5's color filters, but that doesn't mean detail within that glow isn't lost during compression...
 
Sorry if unclear, but that snapshot is from the TeslaCam on my car, as stored by the dashcam feature, so hopefully not a run-of-the-mill 8-bit camera.
Oh yes, the Tesla dashcam waaay overcompresses stuff.

Good to hear that lights etc. can be filtered out of a background sun. Still, it would be interesting to get the raw Tesla feed of these situations where human eyesight fails, to see how it compares to what the dashcam stores.
It's in the HW2.0 and HW2.5 cameras thread.

Here's the 32-bit TIFF of the situation: Box
 
Sure, but the paper likely doesn't explain how Tesla should handle situations where the camera (and, in L2, the driver) is blinded by a low sun ahead.

From my own dashcam I have seen this produce a pretty uniform orange glow painted widely over the road and sky simultaneously. I must dig for a better photo without intervening trees, but here's the effect I mean:
[dashcam screenshot attachment]

When this happens at highway speed in:
L2, the driver should of course intervene.
L3, the required response would be to slow down until radar or ultrasonics can read the situation, while calling on the human to assist; otherwise the car could effectively be flying completely blind.
L4, there is no driver upon whom to fall back, so the AV can be reduced to a dangerous sudden crawl in the middle of the motorway, precisely at a point where any human driver following behind is also likely to be blinded.

I think you would be surprised how well neural networks perform on badly lit images.


As for gathering training data on these situations, they don't even need to label them; they can just have the car drive forward, use the rear-looking camera to gather ground truth, and then have more data to add to the dataset...
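
A toy sketch of how that might look (my reading of the idea, not a known Tesla pipeline; the names are invented, and the frame-matching naively assumes straight-line travel at constant speed):

```python
# Back-fill labels for washed-out forward frames using later rear-camera detections.
from dataclasses import dataclass

@dataclass
class RearDetection:
    object_id: str
    distance_behind_m: float  # range reported by the rear camera at time t

def backfill_labels(forward_frames: dict[float, str],
                    rear_detections: dict[float, RearDetection],
                    speed_mps: float,
                    lookahead_s: float = 2.0) -> list[tuple[str, RearDetection]]:
    """Pair each rear detection with the earlier forward frame that was looking
    at the same stretch of road, assuming straight travel at constant speed."""
    labels = []
    for t, det in rear_detections.items():
        t_passed = t - det.distance_behind_m / speed_mps  # when the car passed the object
        t_saw = round(t_passed - lookahead_s, 1)          # forward camera saw it a bit earlier
        if t_saw in forward_frames:
            labels.append((forward_frames[t_saw], det))
    return labels
```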