2021.12.25.7

Dynamic range on even the best cameras is very poor. I resort to HDR and manually stacked images on my older SLR when I need to capture a wide range.

ND filters help with that. Do we have them? I don't see any filters on my car that can switch in and out. These are webcams, basically, not even close to SLR quality.

They DO need much higher resolution if they are to interpret fine details about our world. And yes, it's going to take that level of 'seeing' to be able to drive *safely*.

Example: I was recently on a residential street and there was a truck on the right side with its front pointing toward me and its loading end (rear) away from me. I could see a pair of feet below the truck as I drove further along, so I got cautious and slowed down, figuring MAYBE the guy at the truck would enter my path or something. I used more caution than if there had simply been a truck there. It took greater vision than our webcams have, I'm pretty sure, and it took seeing something in the corner of my eye, wanting to 'zoom in' on it, and making *sense* of it, in *context*.

It needs processing power, but it also needs really good vision. I can't speak to the processing power inside a Tesla, but I can speak to the insufficient vision that the Model 3 (what I drive) has.

Having the car inch up when it's stopped and needs to see around more - that's a hint right there that movable cameras are NEEDED. And even more than that is needed, truth be told.

PTZ (pan-tilt-zoom) cameras that are redundant, self-cleaning, and have ways to deal with high dynamic range - that's really what it's going to take for this to be as safe as an expert human driver on city streets.
 
Dynamic range on even the best cameras is very poor.

You know color-blind people drive, right?


Why do you need HDR to drive?


They DO need much higher resolution if they are to interpret fine details about our world. And yes, it's going to take that level of 'seeing' to be able to drive *safely*.

Why?

"that is a dog" is needed.

'That is a long-hair half pommeranian, half bichon frise" is not needed.


Example: I was recently on a residential street and there was a truck on the right side with its front pointing toward me and its loading end (rear) away from me. I could see a pair of feet below the truck as I drove further along, so I got cautious and slowed down, figuring MAYBE the guy at the truck would enter my path or something. I used more caution than if there had simply been a truck there.

Why would the guy loading the truck - with the driver's side already away from the street - randomly run out into the street into your path?

And why do you assume the car's cameras wouldn't be able to see that?



It took greater vision than our webcams have, I'm pretty sure


You seem pretty sure of a remarkable number of things that aren't actually true.


Having the car inch up when it's stopped and needs to see around more - that's a hint right there that movable cameras are NEEDED

Again you're not making any physical sense.

"tilting" the camera wouldn't help here. The view is physically blocked to ALL the cameras that have 360 visibility anyway. How does tilting one of them help there?


There are only two fixes for that - neither involves the camera being able to angle.


1) Creep forward until you can see around the obstacle.

2) Add a side/forward-angled camera mounted further forward on the vehicle (on the front fender, for example)


I've been suggesting the second one is a good idea for quite a while now - but Tesla is sticking with #1 for now.

In neither case would making the fixed-mount camera able to "aim" help in any way, though.
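To put rough numbers on it, here's a minimal 2-D line-of-sight sketch in Python. All the distances are invented for illustration (a truck corner 2 m to the right of the camera's path and 1 m short of the stop line, cross traffic 30 m away, guessed camera setbacks) - not Tesla specs.

```python
# Rough 2-D line-of-sight sketch for the creep-forward problem.
# All distances are invented illustrative values, not Tesla specs.
# Coordinates: the car drives along x = 0 toward a stop line at y = 0;
# positive y is into the intersection.

def required_camera_y(corner=(2.0, -1.0), target=(30.0, 2.0)):
    """Minimum camera position y_c at which a camera at (0, y_c) can see a
    cross-traffic target past the near corner of an occluding parked truck.
    The sight line just clears the truck when its height at the corner's x
    equals the corner's y:  y_c + (ty - y_c) * (cx / tx) = cy, solved for y_c."""
    cx, cy = corner
    tx, ty = target
    return (cy - ty * cx / tx) / (1 - cx / tx)

y_needed = required_camera_y()
for name, setback in [("B-pillar camera, ~2.0 m behind bumper", 2.0),
                      ("fender camera, ~0.5 m behind bumper", 0.5)]:
    bumper = y_needed + setback  # where the front bumper sits at first visibility
    print(f"{name}: bumper at {bumper:+.2f} m relative to the stop line")
```

With these made-up numbers, the B-pillar-mounted camera forces the bumper about 0.8 m past the stop line before the cross car becomes visible, while the fender camera sees it with the bumper still ~0.7 m behind the line. Occlusion depends only on where the camera sits, which is why aiming a fixed-position camera changes nothing.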
 
The argument that it's not safe for a human to drive if they can't see enough is fine for arguing that cameras are all we need, and thus that a car shouldn't be driving with cameras if a human couldn't drive in the same scenario. Except there are scenarios where something comes on all of a sudden and a human (or cameras) is going to have a hard (and unsafe) time trying to stop, pull over, etc., where radar and/or LiDAR could have provided that next level of safety.
Even SAE L4 and L5 have conditions where they will safely pull over and stop. If a human can't see the road well enough to drive it, it's absurd to expect any autonomous vehicle to SAFELY drive those conditions with anything close to what's available in 2021.
 
The phantom braking that happens on the software version I have held back at is minimal. It's one reason I don't want to dump my radar-based software. It works well enough, and I don't want to go through more bugs just to get back to where I am now. My software version is as bug-free as I've seen in a Tesla, which is why I'm NOT dumping it. If Tesla forces me to upgrade, I'll have no say, but until then, this version is the best one I've seen so far. And yes, it's 2020-dated - well before the xmas version, in fact.
I don't know if it is still using radar, but it still phantom-brakes frequently.
 
You know color-blind people drive, right?
Why do you need HDR to drive?
HDR has nothing to do with color. The vision of color-blind people still has a dynamic range several orders of magnitude wider than most (SDR) cameras. HDR is useful when the important details one wants to resolve in the image are either very bright or very dark compared to the rest of the image - for example, a bright red light at night. The auto-exposure algorithm on many cameras will saturate the red light into white, because even the G and B filters let through some red light, so all three color components end up boosted to saturation. One solution is to adjust the AE algorithm to not allow saturation, but then the dark parts of the image lose too much detail. Hence you need HDR to improve the low-intensity (dark) detail, by a factor of 16 (12-bit) or 64 (14-bit).
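A back-of-the-envelope sketch of that trade-off in Python (the 40:1 lamp-to-ambient ratio is an invented example): back off exposure until the red lamp just avoids clipping, then count how many quantization codes are left for the shadow detail.

```python
# Toy numbers (invented) for the exposure trade-off described above:
# back off auto-exposure until a bright lamp no longer clips, then see
# how much quantization resolution is left for the dark parts of the scene.

red_light = 40.0  # relative radiance of the lamp's brightest channel
ambient   = 1.0   # relative radiance of the shadow detail we care about

for bits in (8, 12, 14):
    full_scale = 2**bits - 1
    exposure = full_scale / red_light      # lamp maps exactly to full scale
    shadow_codes = ambient * exposure      # codes available at ambient level
    print(f"{bits:2d}-bit sensor: ~{shadow_codes:.0f} of {full_scale} codes "
          f"left for shadow detail")
```

With these numbers the shadows get about 6 codes at 8 bits, ~102 at 12 bits, and ~410 at 14 bits - exactly the ×16 and ×64 improvements mentioned above.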
 
The feet were in the shadows (as opposed to the highlights). My eye/brain concentrated on that part (zoomed in, if you will), and I was able to 'lighten the scene' in my head enough to make sense of what those objects were and what change in behavior *might* make sense in light of that.

That's a lot of thought process, and I don't think we are anywhere near that level of understanding in machines. How is it going to know where to focus or zoom in, out of all the space in front of you?

The old chestnut that 'people just use two eyes and they can drive' simplifies the problem far too much. When I hear that said, I mostly discount the rest of what that person is saying.

NNs aren't going to get 'understanding' any time soon, by the way. They don't understand a thing; they are just math/stats engines.
 
Even SAE L4 and L5 have conditions where they will safely pull over and stop. If a human can't see the road well enough to drive it, it's absurd to expect any autonomous vehicle to SAFELY drive those conditions with anything close to what's available in 2021.
I believe that Tesla is making a huge mistake in eliminating radar and relying on camera vision alone. If you are in heavy fog but have enough visibility to see the lane lines, cameras will keep the car within the lane. But cameras cannot see hundreds of feet ahead in fog as radar can, so I believe a camera-only Tesla will be less safe than Teslas with radar.
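There is real physics behind the fog claim: scattering off a fog droplet depends on the size parameter x = 2πr/λ. A quick order-of-magnitude check in Python (the ~5 µm droplet radius is a typical textbook figure, not a measured value):

```python
import math

# Compare the Mie size parameter x = 2*pi*r / wavelength for a fog droplet
# against each sensor's wavelength. x >~ 1 means strong scattering (camera
# light is blocked); x << 1 means weak Rayleigh scattering (radar passes).

r = 5e-6  # fog droplet radius, ~5 micrometres (typical order of magnitude)
sensors = {
    "visible-light camera (550 nm)": 550e-9,
    "77 GHz automotive radar": 3e8 / 77e9,  # wavelength = c / f, ~3.9 mm
}
for name, wavelength in sensors.items():
    x = 2 * math.pi * r / wavelength
    print(f"{name}: size parameter x = {x:.1e}")
```

Camera light scatters strongly (x ≈ 57), while 77 GHz radar barely notices the droplets (x ≈ 0.008) - which is why radar range is largely unaffected by fog.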
 
Dynamic range on even the best cameras is very poor. I resort to HDR and manually stacked images on my older SLR when I need to capture a wide range.

ND filters help with that. Do we have them? I don't see any filters on my car that can switch in and out. These are webcams, basically, not even close to SLR quality.

They DO need much higher resolution if they are to interpret fine details about our world. And yes, it's going to take that level of 'seeing' to be able to drive *safely*.

Example: I was recently on a residential street and there was a truck on the right side with its front pointing toward me and its loading end (rear) away from me. I could see a pair of feet below the truck as I drove further along, so I got cautious and slowed down, figuring MAYBE the guy at the truck would enter my path or something. I used more caution than if there had simply been a truck there. It took greater vision than our webcams have, I'm pretty sure, and it took seeing something in the corner of my eye, wanting to 'zoom in' on it, and making *sense* of it, in *context*.

It needs processing power, but it also needs really good vision. I can't speak to the processing power inside a Tesla, but I can speak to the insufficient vision that the Model 3 (what I drive) has.

Having the car inch up when it's stopped and needs to see around more - that's a hint right there that movable cameras are NEEDED. And even more than that is needed, truth be told.

PTZ (pan-tilt-zoom) cameras that are redundant, self-cleaning, and have ways to deal with high dynamic range - that's really what it's going to take for this to be as safe as an expert human driver on city streets.
I get your point and agree that for the higher automation levels, the ability to discern fine detail will be increasingly important. However, for the level we're at (level 2), what we have at the moment should be sufficient. Level 2 assumes driver supervision, so with your example above involving the truck, the human would still be expected to take the necessary precautions.
 
But Musk says if you can't see, then not only should you not be driving, neither should the car.
:rolleyes:


I mean - he's right, isn't he?

If the fog is so bad you can't see something ahead of you in the road soon enough to stop for it (and keep in mind YOUR reaction time would be slower than the computer's if IT sees something to stop for), you probably shouldn't be driving.
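For scale, here's a rough stopping-distance comparison in Python. The numbers are round assumptions (~65 mph, 0.7 g braking, generic reaction times), not measured values for any particular car or driver:

```python
# Stopping distance = reaction distance + braking distance: d = v*t + v^2/(2a).
# Round, assumed numbers - not measured values for any particular car.

v = 29.0          # speed in m/s (~65 mph)
a = 0.7 * 9.81    # braking deceleration, ~0.7 g on dry pavement

for who, t_react in [("attentive human", 1.5), ("computer", 0.3)]:
    d = v * t_react + v**2 / (2 * a)
    print(f"{who}: {d:.0f} m total ({v * t_react:.0f} m of it is reaction time)")
```

Even with identical brakes, the reaction-time term alone shortens the computer's stopping distance by roughly 35 m at highway speed under these assumptions.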
 
I mean - he's right, isn't he?

If the fog is so bad you can't see something ahead of you in the road soon enough to stop for it (and keep in mind YOUR reaction time would be slower than the computer's if IT sees something to stop for), you probably shouldn't be driving.
My counter-argument in another thread is that in situations where, for example, fog rolls in on a narrow mountain pass - where it's dangerous to stop (and get hit), to try to pull over (and get hit or go over the side), or to drive on to a safe spot - it makes more sense to have a better-than-human system. Not so you can choose to drive into a hazardous situation, but to help you get out of one safely.

But I'll take what I can get - especially if it's a public release comparable to the current one, sooner rather than later.
 
I believe that Tesla is making a huge mistake in eliminating radar and relying on camera vision alone. If you are in heavy fog but have enough visibility to see the lane lines, cameras will keep the car within the lane. But cameras cannot see hundreds of feet ahead in fog as radar can, so I believe a camera-only Tesla will be less safe than Teslas with radar.
And your opinion should be respected. Not necessarily agreed with - as I believe Tesla will make any improvements to vision (e.g., better cameras) if needed - but I'd point out that Level 5 implies a fallback to a safe condition where the car cannot safely proceed. In other words, if a reasonably prudent human could not drive it, even L5 is not spec'd to do so.

With the problems reconciling radar with cameras, I understand Tesla's decision to emulate what humans "see." And we know that radar returns phantom obstructions and is blind to stationary obstructions, so (gagging to say this) if augmentation to vision is ultimately needed, it would probably be LIDAR (cough, cough).
 
And your opinion should be respected. Not necessarily agreed with - as I believe Tesla will make any improvements to vision (e.g., better cameras) if needed - but I'd point out that Level 5 implies a fallback to a safe condition where the car cannot safely proceed. In other words, if a reasonably prudent human could not drive it, even L5 is not spec'd to do so.

With the problems reconciling radar with cameras, I understand Tesla's decision to emulate what humans "see." And we know that radar returns phantom obstructions and is blind to stationary obstructions, so (gagging to say this) if augmentation to vision is ultimately needed, it would probably be LIDAR (cough, cough).
I agree with @jeremymc7 (post 51) where he says you may be on a mountain pass where it is too dangerous to stop and possibly get rear-ended. There are times you are driving and fog or a bad rainstorm appears. Normally it does not last many miles, so driving out of it is the best solution. I have driven in New York in the wintertime, and you may experience a whiteout where you cannot see more than a few feet in front of you. If you stop, you get hit; if you keep going, you run the risk of running into another car. Radar would see the cars in front of you, but cameras would not.

Tesla chose radar and cameras as their full FSD solution; now that they cannot get radar to work, they are taking the cheapest route and eliminating radar. I feel that they should have used lidar, radar, and cameras from the very beginning. You check all three sources and decide which one is giving the most accurate data to safely drive the car.

We are now four years past 2017, when Elon said that a Tesla would drive from California to New York using FSD. Tesla may get FSD working with cameras, but I believe it will not be as safe as if they had combined it with other technologies. Just my 2 cents' worth...
 
I agree with @jeremymc7 (post 51) where he says you may be on a mountain pass where it is too dangerous to stop and possibly get rear-ended. There are times you are driving and fog or a bad rainstorm appears. Normally it does not last many miles, so driving out of it is the best solution.

Tesla chose radar and cameras as their full FSD solution; now that they cannot get radar to work, they are taking the cheapest route and eliminating radar. I feel that they should have used lidar, radar, and cameras from the very beginning. You check all three sources and decide which one is giving the most accurate data to safely drive the car.
Thank you. On both counts.
 
It's a problem in search of a solution. Partly why they switched to Vision.

But besides throwing a blank check at it, how does Waymo handle it?

They pre-map, in HD, the entire driving space (a tiny suburb in Arizona), then check against that.

When something changes - like a traffic cone getting in the way - they call a human for help.

(I exaggerate - but not THAT much.)
 
So now we have a camp that says the minimal safety state should be trudging on along roads where visual observation is impaired, because stopping risks getting hit by the car behind.

So, you guys are basically saying L5 autonomy cannot work if you can't agree on the safest way to give up in inclement conditions. Geez.
 
So now we have a camp that says the minimal safety state should be trudging on along roads where visual observation is impaired, because stopping risks getting hit by the car behind.

So, you guys are basically saying L5 autonomy cannot work if you can't agree on the safest way to give up in inclement conditions. Geez.

No. In specific cases, in specific locations. To be addressed in future programming.

It's for avoiding, say, the risk of getting hit and knocked off a narrow, blind-cornered, barely-two-lane winding mountain pass with no shoulders and a 1,000+ foot drop into a canyon. There are a lot of them here and in various places around the US.

This is a down-the-road problem, time-wise, that can also be addressed with location-aware safety that takes the less risky option, rather than an all-or-nothing approach in all situations.

Sometimes the safest option is not to stop and do nothing; sometimes the safest option is to move and get out of the way using situational awareness.

How many people get in an accident on the freeway with a still-drivable car and think it's best to just leave it there, instead of pulling onto the shoulder or off the freeway when no shoulder exists?

Default to cameras and vision only - that's fine. But when the situation calls for it, let the car, or the driver, have additional, more advanced tools beyond vision-only to get out of the way when called for.

Have you ever seen the 100-car pileups that can occur out of nowhere when, say, fog rolls down a hill onto a freeway, or a sudden monsoon downpour hits in the desert? I've seen them on both coasts. Having long-distance radar, infrared cameras, or LiDAR on top of vision - be it passive or active, for autonomous or emergency driving - saves lives.
 