
Tesla replacing ultrasonic sensors with Tesla Vision

...Does anyone actually believe Vision-only will be superior...
That's Tesla's belief. No sensor fusion. No radar. No sonars. Just pure vision.
Other systems also heavily use vision but are complemented by other sensors:
Waymo has proven that sensor fusion is best in a geofenced location, as it began its public rides with no human driver/backup staff in the car in 2019.

In 2022, Tesla Vision can still collide if the backup driver isn't actively driving.
Will it just get most of the way with some sacrifices here and there?

Mobileye believes pure vision can be very good, but only up to Level 2 (ADAS), and cannot be improved beyond that. To go beyond Level 2, Mobileye uses sensor fusion with cameras, 4D radars, lidars, and sonars.
 
Most Teslas park backwards (based on all the footage they get from Superchargers, obviously), so this removal allows for cost savings, as the rear cam view is sufficient 🤷‍♂️
@thewishmaster You joke but this is 100% true for me.

Our older Tesla is too old for parking sensors; Tesla hadn't started offering them yet. So between that and the charge port location, I always back in and use the backup cam for judging those last few inches of distance.

Our newer Tesla has parking sensors but I'm so used to backing in with the backup cam, that I never look at the parking sensor readings. The backup cam is more useful anyways because the sensor readings just say "Stop!" when you're not really that close yet. And I sure as hell turned off the annoying parking beeping. I don't know how anyone can stand that.

Occasionally I need to park front in, and then sometimes I'll pay attention to the front distance number. It's pretty infrequent though.

All this said, I agree with every negative comment here and instinctively I want to rag on this removal as much as anyone. 😅
 
I agree it will be a challenge for Tesla to beat multi-sensor systems with pure vision.

But I also understand their perspective: sensor fusion is hard to get right and prone to its own challenges. A single-input system is simpler for the AI to tackle, and Tesla assumes its AI will win the day.
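To make "sensor fusion is hard" concrete, here is a minimal sketch of the simplest possible fusion of a camera distance estimate and an ultrasonic reading, weighted by confidence. It is purely illustrative (made-up numbers, not Tesla's pipeline); the genuinely hard part is the arbitration logic for when the sensors disagree.

```python
# Minimal two-sensor fusion by inverse-variance weighting.
# Purely illustrative; distances and variances are made up.

def fuse(camera_m, camera_var, uss_m, uss_var):
    """Combine two distance estimates (meters), weighting each by 1/variance."""
    w_cam = 1.0 / camera_var
    w_uss = 1.0 / uss_var
    fused = (w_cam * camera_m + w_uss * uss_m) / (w_cam + w_uss)
    return fused, 1.0 / (w_cam + w_uss)

# Agreeing sensors: the easy case.
print(fuse(0.62, 0.05**2, 0.60, 0.02**2))   # ~0.60 m, tighter variance than either alone

# Disagreeing sensors: the blended answer may satisfy neither reading.
# Deciding which sensor to distrust (outlier rejection, cross-checking,
# arbitration) is where the real complexity of fusion lives.
print(fuse(1.50, 0.05**2, 0.30, 0.02**2))   # ~0.47 m
```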

There's another angle here: Tesla's passion for reducing the cost and complexity of their cars. Fewer sensors, less wiring, fewer computer inputs - this is all part of their core principles. A bigger suite of sensors may be quicker to converge for L3/L4, but Tesla knows vision is possible because humans do it.
 
It’s been touched upon, but the timing is what sucks most to me. If Tesla had rolled out this capability in software first, reached parity with USS, and then started ditching the sensors, fine - but they didn’t. And this is history repeating itself:

- AP1 being replaced about a year before any real software and probably 4 years before performance parity
- rain sensors going, I’d say 5 years before parity and some still think it’s worse
- radar going, and various features dropped for a while, and I suspect there was/is a reduction in passive safety performance as Tesla stopped reporting the statistics

I can believe, maybe, one day, it can be as good. But common sense tells you to have the solution before you create the problem.

Is it a parts shortage? Maybe a self-inflicted one; Tesla don’t just ring up and ask for a million sensors for next month, or for bumpers/fenders without holes - these things will be part of supply chain planning months out. Are they just behind again? If release numbers are year.week.xxx, then 2022.28 would be what was meant to ship in July, which makes them about 2 months behind this year and might explain the rumour that they’re skipping 2022.32.
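A quick back-of-the-envelope check on that reading, assuming the first two fields of the version string really are the year and the ISO week the branch was cut (an assumption, not something Tesla documents):

```python
# Rough sanity check of the "year.week.xxx" reading of Tesla firmware versions.
# Assumes the first two fields are the year and the ISO week the branch was cut.
from datetime import date

def branch_week(version: str) -> date:
    year, week = (int(p) for p in version.split(".")[:2])
    return date.fromisocalendar(year, week, 1)   # Monday of that ISO week

print(branch_week("2022.28.1"))   # 2022-07-11: week 28 does land in mid-July
print(branch_week("2022.32.1"))   # 2022-08-08: the reportedly skipped branch
```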
 

I agree with your end-user concerns that Tesla drops hardware before parity in software is achieved.

But I do know why they do it. Once they're convinced they're going to use software to eliminate a piece of hardware, they hate paying for and installing even one more unit that they know will be rendered useless in the near future. It's against their DNA. So they kinda temporarily screw customers... knowing there's a 6-month waiting list, it simply won't matter to sales, and it immediately improves costs while eventually being "ok" for the customer.

It ain't right, but it's how they roll.
 
Actually, I'm questioning my earlier post now. I parallel park all the time, and the front ultrasonics in the newer car should be useful for that when there's a car in front. Am I really not paying attention to their readings, or has it become so unconscious/automatic that I do use them but have no conscious recollection of it?
 
Tesla knows vision is possible because humans do it.
I really dislike this comparison they keep using because 1. cameras will never see as well as humans in our lifetimes, and 2. the software will never be as good as humans on the current gen hardware.

Cameras don’t adjust to different lighting situations the way our eyes can. Their low light performance sucks compared to ours. They also don’t have the dynamic range we do. The particular camera setup on Teslas doesn’t have stereoscopic vision to judge distance the way we can, and they don’t see details as far away as we can. Heck, most of them can’t even see the color green. The constant 360° coverage is nice, but it doesn’t make up for everything else.

The software doesn’t learn on the spot right now, which is the biggest gap between it and a human. Sure, it might drive better than a human… the first time they drive at a location. But humans get better at driving at a location, and under particular conditions, every time they do it. The software can only get better with an update right now.

It doesn’t just need object permanence for an object that gets occluded for five seconds. It needs to have permanent object permanence. It needs to be able to update its own map data. It needs to stop trying to make a left turn into opposing traffic at this one intersection I drive through every single day, and remember what the lanes look like. It needs to remember that the only way to go through this one particular intersection is to stay in the left lane starting half a mile before; otherwise it’ll have to try to get back into that lane right before the intersection, and no one will ever let it in. It needs the ability to store all of this information, know when to store it, and know when to forget it.

It can’t do any of that right now, so it needs other ways to compensate for all sorts of shortcomings. USS was one such compensation.
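As a purely illustrative sketch of the "store it, reinforce it, forget it" idea in the post above (this has nothing to do with Tesla's actual software; the class, thresholds, and coordinates are all invented for the example):

```python
# Toy per-location driving-hint memory: store a hint keyed by place,
# reinforce it when it proves correct, and forget it when it keeps
# being contradicted. Invented for illustration only.
from collections import defaultdict

class LaneMemory:
    def __init__(self, remember_above=2, forget_below=-3):
        self.scores = defaultdict(int)      # (place, hint) -> evidence score
        self.remember_above = remember_above
        self.forget_below = forget_below

    @staticmethod
    def place(lat, lon):
        return (round(lat, 4), round(lon, 4))   # ~10 m buckets

    def observe(self, lat, lon, hint, was_correct):
        key = (self.place(lat, lon), hint)
        self.scores[key] += 1 if was_correct else -1
        if self.scores[key] < self.forget_below:
            del self.scores[key]                # know when to forget

    def hint_for(self, lat, lon):
        here = self.place(lat, lon)
        candidates = {h: s for (p, h), s in self.scores.items()
                      if p == here and s >= self.remember_above}
        return max(candidates, key=candidates.get) if candidates else None

mem = LaneMemory()
for _ in range(3):   # the hint worked three days in a row
    mem.observe(37.3318, -122.0312, "stay in the left lane from 0.5 mi out", True)
print(mem.hint_for(37.3318, -122.0312))
```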
 
You are likely talking about the existing cameras Tesla uses, but much of what you wrote is false when talking about cameras in general.

Cameras don’t adjust to different lighting situations the way our eyes can.

They can and do via aperture adjustment, along with dynamic exposure sensitivity.


Their low light performance sucks compared to ours.

Since about 2017 - pioneered by Google and quickly adopted by Apple and others - night vision on cameras has become way better than human perception. With an exposure time comparable to a daytime shot, most cameras these days can yield a night shot that looks like it was shot during the day. Of course there's a ton of software/AI post-processing going on, but there's a lot of that going on in Teslas as well.
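The core trick behind those night modes is multi-frame stacking: average several short, noisy exposures and the random noise shrinks while the scene doesn't. A toy numpy sketch of just that effect (illustrative only; real night modes also align frames and tone-map heavily):

```python
# Averaging N noisy exposures improves signal-to-noise by roughly sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 10.0)                              # dim "true" scene
frames = [scene + rng.normal(0, 5, scene.shape) for _ in range(16)]

single  = frames[0]                # one noisy exposure
stacked = np.mean(frames, axis=0)  # 16 exposures averaged

print(np.std(single - scene))      # ~5.0  noise of a single frame
print(np.std(stacked - scene))     # ~1.25 noise after stacking (5 / sqrt(16))
```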


They also don’t have the dynamic range we do.

But where they lack in dynamic range, they make up in contrast. If you're ever driving in torrential rain, and you can't even see the lanes anymore, take a video (safely) with your phone. Later, when you watch that video, you'll notice that the camera picked up the lanes and other details you couldn't. One of my first videos about AP was to demonstrate this. Where I was practically blinded, AP saw everything just fine and navigated through the storm.


The particular camera setup on Teslas doesn’t have stereoscopic vision to judge distance the way we can [snip]

The common argument I've seen here is that the two main front-facing cameras are not spaced far enough apart, so the distance beyond which stereo depth can no longer be resolved is relatively close to the car, and you can't distinguish depth past that point. But stereopsis (using the differences between the L and R camera images) is not the only way to estimate depth. Parallax, shadows/textures, linear perspective, etc. all work even if you only have a single eye/camera. Regardless, at parking distances, the two front cameras are spaced apart plenty for easy stereopsis.
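For a rough sense of scale, the standard stereo relation is disparity = focal_length * baseline / depth. The baseline and focal length below are assumed round numbers, not Tesla's actual camera specs:

```python
# Ballpark stereo disparity vs. distance for two parallel cameras.
# Baseline and focal length are assumptions, not Tesla specs.

def disparity_px(depth_m, baseline_m=0.15, focal_px=1000.0):
    """Pixel disparity of a point at depth_m for a given stereo baseline."""
    return focal_px * baseline_m / depth_m

for d in (0.3, 1.0, 3.0, 30.0, 100.0):
    print(f"{d:6.1f} m -> {disparity_px(d):6.1f} px")

# At sub-meter parking distances the disparity is hundreds of pixels,
# so stereo depth is easy; at ~100 m it is only ~1.5 px, which is why
# small-baseline stereo stops helping at long range.
```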



All that said, I think it's perfectly reasonable to be skeptical about replacing USS with vision, given the track record of auto-highbeams and auto-wipers.
 
Coming soon means 2026 :)

I have no clue how those crappy cameras (which are good for FSD and big stuff) will show distances in inches to smaller objects.
 

Attachment: Screen Shot 2022-10-04 at 7.09.34 PM.png
Tesla should have had a 360° bird's-eye view nailed on, via more/relocated cameras, before considering this.

The fact they're launching this and disabling several features for an indeterminate amount of time clearly shows that this is a cart before the horse decision, like the removal of Radar was to begin with, and once again most likely parts/margin driven (like Tesla's margins aren't incredible already).

I can't shake the feeling that whenever these things happen it's always the customer that suffers as a consequence, whereas Tesla always benefits. Passenger lumbar support, front USB data, perhaps even Radar - it all gets removed, the price doesn't change (or goes up), and the customer has to suck it up with degraded or even removed functionality. It seems wrong from a purely "the customer should not suffer" mindset.

People are suggesting that the persistence of vision in Tesla's occupancy stuff ought to mean that the car sees stuff that would end up below the front of the car as you drive in. Even if that were the case (and I have serious doubts about it - what happens when the layout of things changes while the car is asleep, etc.?), is it really an effective use of processing power for the car to keep a memory of literally everything it sees that could be driven into?
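For concreteness, here's a toy sketch of the kind of short-range memory being described: a coarse occupancy grid around the nose of the car where cells marked occupied stay marked as the car creeps forward and they drop out of camera view. Grid size and resolution are made up, and this says nothing about how (or whether) Tesla actually implements it:

```python
# Toy occupancy grid: obstacles seen earlier stay remembered as the car
# creeps forward and they pass out of camera view. Dimensions are made up.
import numpy as np

CELL = 0.1                        # 10 cm cells
grid = np.zeros((80, 40))         # 8 m ahead x 4 m wide in front of the car

def mark_obstacle(x_m, y_m):
    """Record something the camera saw at (x_m ahead, y_m to the side)."""
    grid[int(x_m / CELL), int(y_m / CELL) + grid.shape[1] // 2] = 1.0

def advance(dx_m):
    """Shift the grid as the car moves forward; nothing seen is erased."""
    global grid
    shift = int(dx_m / CELL)
    if shift:
        grid = np.roll(grid, -shift, axis=0)
        grid[-shift:, :] = 0.0    # newly revealed ground ahead is unknown

mark_obstacle(2.0, 0.0)           # low curb spotted 2 m ahead while approaching
advance(1.5)                      # creep forward 1.5 m; curb is now near the nose
print(grid[int(0.5 / CELL), grid.shape[1] // 2])   # 1.0 - still remembered
```

Whether keeping and trusting such a memory across sleep cycles and scene changes is worth it is exactly the open question raised above.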

The worst case scenario from all of this - and the one I'm imagining will happen - is that Tesla Vision minus ultrasonics will try and simulate it with beeps, which people will unconsciously rely on as they do with ultrasonics today, leading to accidents. Either that or we'll get Morse Code sounds when the car has no idea how far away a novel obstacle is.
 
Nobody is considering that this could be partly a supply chain issue. Probably not, but it could be.

Second, mixing input sensor types into the AutoPilot computer is probably extremely complex. We do everything with our eyes. Why shouldn’t the AutoPilot computer do the same? It may have a few dips during the transition. But in the long run I suspect it will all work better.

And eventually, like Radar, they will stop using the sensors on cars equipped with them.

I saw no negative impact going radarless.
 
You are likely talking about the existing cameras Tesla uses, but much of what you wrote is false when talking about cameras in general. [snip]
A lot of what you’re referring to applies to still photography, not video. Dynamic exposure sensitivity introduces noise. Contrast does not make up for dynamic range. I saw a video just yesterday of AP almost slamming into a barrier because the high contrast video made it look like something it wasn’t. The enhanced night vision only works on still photography. I think it’s certainly possible for the front cameras to provide stereoscopic vision, but that doesn’t apply to the other cameras.
 
[snip] We do everything with our eyes. Why shouldn’t the AutoPilot computer do the same? [snip]
“We drive with eyes ipso facto cameras can do everything” is such a myopic perspective.

For starters, it’s a specious conclusion. We wear sunglasses when it’s sunny so we can see; we don’t drive with glasses on that are obscured by dirt. Human eyes have a much, much better spectrum of vision across different light levels.

Cameras might be able to do everything - maybe with more of them and higher resolution - at some point in the future. Is now the time to delete sensors that work and leave customers high and dry, for an anticipated future where this all works? I’d suggest it’s not, especially not when these customers have paid full price for their cars and aren’t expecting functionality to be degraded or to disappear completely for an indeterminate amount of time.

Can a rear view camera (or indeed eyes) see well enough when backing up without the benefit of any additional illumination, like ultrasonics can?

I have no problem with the notion of consolidating systems into a “one true vision” of cameras doing everything, but we’re not there yet and in the meantime it’s customers who suffer.
 
This doesn't make sense. USS are so cheap. Is it the cables that are missing?

I am not buying a car without rear corner radars for cross traffic alert, and 360 view, preferably with free camera view.

For instance, my former Audi e-tron and the BMW iX also had hidden USS in the doors and warned about curbs and stuff on the sides.

The iX had an AR view that blended the USS readings, shown as coloured boxes, with the view from the nose or rear camera.

Both had a virtual front cam that turned with the steering wheel, and a "peek out" fisheye view.

Even the Kia EV6 had a free-float camera view.

The above and ID.4 also have a special "back up to trailer" view.
 

Attachments: Screenshot_20221005_163641.jpg, Screenshot_20221005_163703.jpg, Screenshot_20221005_163757.jpg, Screenshot_20221005_163810.jpg, Screenshot_20221005_163821.jpg