
Rumor: HW4 can support up to 13 cameras

I’m an FSD owner so it doesn’t personally affect me, but I believe Tesla owes EAP owners on HW2.5 updates that bring autosteer, NoA, Smart Summon, and Autopark to a completed non-beta state.

EAP is a Level 2 driver-assist feature, so use of those features will always come with disclaimers. Given how Tesla uses (or misuses) the Beta tag, I don't expect it to ever be removed from L2 products like autosteer.

FSD/HW3 will also always have a beta tag.

When FSD is capable of L3/L4, the beta tag has to be removed, as it would be pretty silly to put a beta tag on a robot. It's basically telling people it's a killer robot.

I do agree with you that EAP owners are owed HW3, for different reasons. But Tesla is never going to admit that.
 
It already does.

It's vastly better than the old ultrasonic-only version we had for years.


Old autopark I used basically zero times, other than once to see how it worked right after I got the car, and a couple of times when passengers specifically asked to see it park itself, because it was so painfully slow compared to manual parking, required cars on both sides, and didn't even notice the spot half the time.

New autopark I've already used more in a couple months than I did old autopark in several years- it finds spots far more often, even with no cars next to you, and does so at speeds approaching an actual (albeit cautious) human.


It's not faster than I am, so I don't use it all the time, but it's good enough to BE useful right now, and doubly so as a precursor to reverse summon.

When I said it will likely be a lot better, I meant in an all-encompassing way.

There are almost always tradeoffs. One tradeoff of removing the radar in Tesla Vision is that even adaptive cruise control will say "slowing down due to rain" where the old one would continue to work just fine.

Vision autopark is one example of something that definitely is better with HW3. It's still a bit slow to use in both of our cases, but hopefully within a year or so it will be something we use regularly.
 
Sandy says Tesla needs FLIR for self driving because it's useful on bombs.

He then confuses FLIR with night vision, which are entirely different technologies, though he treats them as the same.

Sandy probably needs to stick to stuff he actually knows.


Munro Live demoed Teledyne's FLIR last year, and the tech is pretty cool. You'd really only need to replace one of the forward-looking cameras with a FLIR imager to get good results, although that sounds pricey.

Sandy is not really a software guy.
 
Sandy says Tesla needs FLIR for self driving because it's useful on bombs.

He then confuses FLIR with night vision, which are entirely different technologies, though he treats them as the same.

Sandy probably needs to stick to stuff he actually knows.
Thanks for highlighting that there is a difference between FLIR and night vision. I never had a need to think about that but after your post I found a vendor's site that clearly explains the huge difference.
 
Pricey... plus you're back to needing to do sensor fusion like you were with radar... and with data that's arguably even harder to interpret... and it's unclear the benefit would justify it.

On top of that you need a fairly large sensor to get good range for things like animal-sized objects, which would be one of the more obvious places you'd benefit at all (things like noticing approaching animals to the side on rural roads where there's no lights other than straight ahead from your car)-- that's without getting into the complexity of accurate speed and distance measurement using only a single thermal sensor.
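For what it's worth, the range problem falls out of the basic pinhole camera model. A rough Python sketch, with every spec and animal size invented purely for illustration:

```python
# Toy illustration of monocular range estimation (pinhole model).
# All sensor specs and object sizes here are made-up numbers.
import math

def range_from_pixels(focal_px: float, real_height_m: float,
                      pixel_height: float) -> float:
    """distance = focal_length_in_pixels * real_height / height_in_pixels"""
    return focal_px * real_height_m / pixel_height

# Hypothetical 640-px-wide thermal imager with a ~50 deg horizontal FOV:
focal_px = 640.0 / (2.0 * math.tan(math.radians(50.0) / 2.0))  # ~686 px

# A ~1.0 m tall deer spanning 20 px vertically:
print(range_from_pixels(focal_px, 1.0, 20.0))  # ~34 m

# The same 20 px could also be a 0.5 m coyote half as far away -- one
# frame from one thermal sensor can't separate size, range, and speed.
print(range_from_pixels(focal_px, 0.5, 20.0))  # ~17 m
```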
 
They can face directions other than directly forward?

Also, they (IR specifically, not NV) can see through fog/snow/rain in ways that cameras, no matter the lighting, simply cannot.
Haha, I was just responding and you stole all the points I was making, but in a more simplified manner.

Here is the link I was going to add at the end.

 
Munro Live demoed Teledyne's FLIR last year, and the tech is pretty cool. You'd really only need to replace one of the forward-looking cameras with a FLIR imager to get good results, although that sounds pricey.
I've linked this before - with a normal camera & NN you can get great low-light vision. Now that they are doing photon counts, Tesla is already on the right path.


[Image: 140415-ASUS-PixelMaster.jpg]
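To make the low-light point concrete, here's a toy sketch of how much is recoverable from a dark frame with nothing more than classical gamma correction; a trained NN effectively learns a far better version of this mapping. Everything here is simulated, nothing Tesla-specific:

```python
# Not Tesla's network -- just a classical illustration of how much signal
# hides in a dark frame before any learning is applied. The Poisson draw
# stands in for photon-count noise on a badly underexposed sensor.
import numpy as np

def brighten(frame: np.ndarray, gamma: float = 0.4) -> np.ndarray:
    """Gamma-correct an 8-bit frame to lift shadows into a viewable range."""
    normalized = frame.astype(np.float32) / 255.0
    return (np.power(normalized, gamma) * 255.0).clip(0, 255).astype(np.uint8)

# Simulated underexposed frame with mean pixel value around 10/255:
dark = np.random.poisson(lam=10.0, size=(480, 640, 3)).clip(0, 255).astype(np.uint8)
print(dark.mean())            # ~10: nearly black to the eye
print(brighten(dark).mean())  # ~70: shadows lifted, noise and all
```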
 
Pricey... plus you're back to needing to do sensor fusion like you were with radar... and with data that's arguably even harder to interpret... and it's unclear the benefit would justify it.

On top of that you need a fairly large sensor to get good range for things like animal-sized objects, which would be one of the more obvious places you'd benefit at all (things like noticing approaching animals to the side on rural roads where there's no lights other than straight ahead from your car)-- that's without getting into the complexity of accurate speed and distance measurement using only a single thermal sensor.

"Sensor fusion" is not a problem. The issue with radar is that it's low-res compared to a camera, with false readings.

FLIR would provide an extra dimension of data, letting the computer distinguish between, say, a human and a cardboard cutout. The real problem is the price, since high-quality FLIR cameras cost thousands.
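To illustrate the extra-dimension point with a toy check (the threshold and regions are invented, not anything from a real stack):

```python
# Toy sketch of the "extra dimension" idea: the same silhouette with no
# heat signature is probably not a person. Values are invented.
import numpy as np

def looks_warm_blooded(thermal_roi_c: np.ndarray, ambient_c: float = 15.0) -> bool:
    """Flag a detection whose median temperature sits well above ambient."""
    return float(np.median(thermal_roi_c)) > ambient_c + 10.0  # skin ~30 C+

pedestrian_roi = np.full((64, 32), 31.0)  # warm, person-shaped region
cutout_roi     = np.full((64, 32), 15.5)  # cardboard at ambient temperature

print(looks_warm_blooded(pedestrian_roi))  # True
print(looks_warm_blooded(cutout_roi))      # False: same shape, no heat
```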

Maybe FLIR would be a good option for the Tesla Semi, where a higher level of safety is warranted, but that wouldn't have the benefit of millions of vehicles on the road for NN training.
 
"Sensor fusion" is not a problem. The issue with radar is that it's low-res compared to a camera, with false readings.
Given that the wavelengths they handle are completely different, even if you have a higher-res thermal camera I don't see how it wouldn't require sensor fusion. What you see as a return on a thermal camera will in many cases have no relation to what the visible/IR camera can see, which is the same problem as with radar vs. camera returns.
FLIR would provide an extra dimension of data, letting the computer distinguish between, say, a human and a cardboard cutout. The real problem is the price, since high-quality FLIR cameras cost thousands.
Isn't that the same issue as with radar vs. conventional cameras, where due to the cost you're going to end up stuck using a lower-res thermal cam instead of a high-res one?
Maybe FLIR would be a good option for the Tesla Semi, where a higher level of safety is warranted, but that wouldn't have the benefit of millions of vehicles on the road for NN training.
As pointed out above, the biggest use would be against animals on a pitch-black road, but in terms of detecting people, I think a semi-truck application makes it less useful, as the bulk of the driving would be on highways, where pedestrians are largely non-existent.
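For anyone curious what the alignment step above actually involves, here's a minimal geometric sketch: comparing a thermal return to a camera pixel means going through the thermal intrinsics, the rigid transform between the sensors, and back through the camera intrinsics. Every calibration value below is a placeholder, not real data:

```python
# Minimal sketch of cross-sensor alignment ("sensor fusion" geometry).
# All matrices are made-up placeholders, not real calibration.
import numpy as np

K_thermal = np.array([[686.0,   0.0, 320.0],   # hypothetical thermal intrinsics
                      [  0.0, 686.0, 256.0],
                      [  0.0,   0.0,   1.0]])
K_rgb = np.array([[1200.0,    0.0, 640.0],     # hypothetical RGB intrinsics
                  [   0.0, 1200.0, 480.0],
                  [   0.0,    0.0,   1.0]])
R = np.eye(3)                                  # assume parallel mounting
t = np.array([0.10, 0.0, 0.0])                 # 10 cm baseline between sensors

def thermal_to_rgb(u: float, v: float, depth_m: float) -> np.ndarray:
    """Map a thermal pixel (u, v) at an assumed depth into RGB pixel coords."""
    ray = np.linalg.inv(K_thermal) @ np.array([u, v, 1.0])
    point = ray * depth_m                      # 3D point in the thermal frame
    point_rgb = R @ point + t                  # into the RGB camera frame
    proj = K_rgb @ point_rgb
    return proj[:2] / proj[2]

# Note the catch: the mapping needs a depth. Without one, the two images
# only line up approximately -- the same ambiguity that made radar/camera
# fusion messy in the first place.
print(thermal_to_rgb(320.0, 256.0, 30.0))      # ~[644, 480]
```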
 
I've linked this before - with a normal camera & NN you can get great low-light vision. Now that they are doing photon counts, Tesla is already on the right path.


[Image: 140415-ASUS-PixelMaster.jpg]
Odd that they used that as an example, but none of the actual examples came out anywhere close to this image.
 
Maybe the extra cameras are for a bird's-eye view. What will help with Chuck's situation is the increased resolution of the new cameras.

Resolution may help, but more critical IMO is cameras in the front bumper facing left and right to provide more visibility, along with quicker decision-making and commitment to the turn. Most of the time it fails in Chuck's video, it's because it identifies a gap and starts creeping but doesn't move quickly enough.

High-resolution cameras will improve things, but I think replacing the existing B-pillar cameras with high-resolution ones, without improving the software and camera placement, would yield negligible improvement.
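Some rough arithmetic on what resolution alone buys at an unprotected left. The camera specs are guesses for illustration, not actual HW3/HW4 parts:

```python
# Back-of-envelope: how many pixels a car subtends at cross-traffic range.
# Resolutions and FOV are illustrative guesses, not real camera specs.
import math

def pixels_on_car(h_res: int, hfov_deg: float,
                  car_width_m: float, dist_m: float) -> float:
    """Horizontal pixels a car of given width subtends at a given distance."""
    focal_px = h_res / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))
    return focal_px * car_width_m / dist_m

# Cross traffic ~100 m away, car ~1.9 m wide, ~90 deg FOV side camera:
print(pixels_on_car(1280, 90.0, 1.9, 100.0))  # ~12 px on a ~1.2 MP-class sensor
print(pixels_on_car(2896, 90.0, 1.9, 100.0))  # ~28 px on a ~5 MP-class sensor
```

More pixels on target helps detection at range, but it does nothing about a B-pillar camera's sightline or the planner's hesitation, which is the point above.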