Tesla replacing ultrasonic sensors with Tesla Vision

Yeah, radar, at a minimum, should have remained active for edge cases, i.e. front cameras blocked/blinded.

I've held off on updating; I'll likely be forced into it by some future unrelated recall. Disabling radar should have been an opt-in, like it was when I bought the car and like it was for FSD Beta users, without risk of voiding a warranty.

Tesla needs to have a camera cleaning system. However, the main reason (I feel) Tesla is going with a cameras-only system is to avoid the expense of having multiple sensors, so I am not hopeful they will have one in the future.
 
I’m not sure my Sept 2018 build Model 3’s radar is disabled.
I still have the one-car follow distance available, and the two cars in front of me are still visualized.

I am on HW2.5, and some comments here seem to suggest HW2.5 cannot do vision-only, so Tesla had to retain radar functionality for those cars.

If so, I guess I’m one of the luckiest. I paid $5K for EAP. Don’t need FSD. 4 years after buying my car, I still think I’d have hated paying more for FSD.
Did you notice on your HW2.5 that your auto high beams are now activated on every drive? I thought they only do that for vision cars.
 
No, it is not activated. I recently noticed the high beams were on as I pulled into the garage one night. It turned out to be a misaligned driver's-side headlight, which I corrected by adjusting the headlights from the menu (neat feature, adjusting headlights from the screen).

However, I need to try it out more at night to verify the auto high beams. I'd hate it if they come on automatically.
 
Possibly next year. They certainly left the door open for that, and my gut tells me they will. That said, I also think they are going to be able to approximate (maybe not exactly) the experience created with USS using vision only.

For myself, none of my other vehicles have ultrasonic sensors and I don't worry about parking them in my small garage. I know how to do it, and yes, I have tennis balls, but I only put them up a month ago so we wouldn't hit the e-bikes over the summer, which we never have done. This makes it easier.

YMMV. If this is a deal killer for others, then they should definitely look elsewhere and sell their cars to people who can handle the situation. Bless 'em.
Tesla's sweet spot was making a great EV. That is what I bought: a great EV. Then they started monkeying with things that are not relevant and that, frankly, they are not good at. Other manufacturers have better automation (leaving FSD aside, because the jury is so far out on that), better quality, and better ergonomics. Those other manufacturers are also making strides in the EV space.
I could deal with worse automation and lower quality/service/ergonomics so long as I have a better EV - up to a point. My feeling/hope is that in a couple of years the balance will shift. That would be unfortunate, because I really admire what Tesla did as an EV maker.
 
...Object permanence could be fairly easy to handle...

Yes, it is easy for a 7-month-old baby, but not for current technology.

If that technology were here, a Tesla wouldn't mow down Dan O'Dowd's short mannequin that disappears from the front view, blocked by the hood.

Around Tesla, people seem to talk about tasks today's technology is incapable of and mislead others into thinking they're easy to handle.

Yes, birds can flap their wings to fly, so it's easy to handle with the movie Dune's ornithopter:

[Image: ornithopter concept art by Tim Samedov]


But in the movie Dune that's the year 10191!

In the meantime, our flying machines don't have to flap. They can rotate their blades and have an engine to fly.
 
Curious - how so?
The vision-only adaptive cruise control (so-called AP) hasn't caught up with the radar-and-vision one yet. Vision-only has a longer following distance and a lower max speed.
He is right that different types of sensors have different limitations. However, that is exactly why we use arrays of different sensors rather than standardizing on one.
In general, he has exactly the same lack of understanding/appreciation that Tesla has. Driving is a problem with a long-tail distribution, and we have to solve for _all_ cases, not just _most_ cases. You can see how that "good enough" philosophy manifests in everything Tesla automates: auto wipers, auto high beams, AP, FSD… They simply do not bother with edge (and not-so-edge) cases. For example, a vision-only system will not work when the cameras are blinded or covered in snow. Yes, that may be 5-10% of cases, but that is still too significant to ignore.
That philosophy, combined with the focus on expansion/cost reduction, will lead to quite a few problems in the near future.
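To make the sensor-array point concrete, here is a minimal sketch - a toy example of inverse-variance fusion I made up for illustration, not anything from Tesla - showing how two imperfect sensors can cover for each other when one is blinded:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    """One sensor's distance estimate, with a self-reported validity flag."""
    distance: Optional[float]  # meters; None when the sensor is blinded/covered
    variance: float            # how noisy this sensor typically is

def fuse(camera: Reading, radar: Reading) -> Optional[float]:
    """Inverse-variance weighted fusion that degrades gracefully:
    if one sensor drops out, the other's estimate is still used."""
    valid = [r for r in (camera, radar) if r.distance is not None]
    if not valid:
        return None  # both blinded: the system must hand control back
    weights = [1.0 / r.variance for r in valid]
    return sum(w * r.distance for w, r in zip(weights, valid)) / sum(weights)

# Camera covered in snow, radar still working: fusion falls back to radar.
print(fuse(Reading(None, 0.1), Reading(24.8, 0.5)))  # -> 24.8
```

Standardize on one sensor type and the `if not valid` branch is hit far more often.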
 
If a low-resolution 1.2 MP camera and some machine learning can calculate distances - even in adverse conditions (think parking at night in a dark spot) - down to 12 inches or better... why in the world did Apple put LiDAR into the latest iPhones to enhance distance measurement?

Because top-tier cell phones compete on the length of the feature list, not on value. They'll set the price to whatever, and we'll gladly pay it. They'll eventually include a horse feeder and cost $50,000 each, and we'll be happy because we think maybe we'll get a horse one day, and then this will be useful, too.

And, yes, my last several phones were iPhones, current phone is an iPhone, and the next phone will most likely be an iPhone.
 
He lost credibility with me when he said vision is now vastly superior to radar. No caveats. That's a big statement to make. And then he based many of his follow-up statements on that premise.
These two blokes talk like they are Tesla employees praising their employer, for one.

Secondly, they admitted issues with both cameras and USS, and did not produce convincing arguments as to why exactly cameras are better. The use cases mentioned about how you could potentially park better over a curb with a camera system are pretty laughable.

Also, there is this "USS costs money" argument, without mentioning how much the cameras cost. "They could be damaged" - so could the cameras.

Not convincing.
 
Every single time my camera focuses. There are also other times it gets used without anyone realizing it.

Autofocus was mature before lidar existed, and autofocus works by phase detection or by maximizing contrast, not by measuring distance. You can compute distance after the fact from where the lens ended up, plus a little trigonometry, but to be clear, that is deriving distance from focus, not deriving focus from distance.
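To make "deriving distance from focus" concrete, here is a toy sketch using the thin-lens equation; the function name and numbers are made up for illustration (real phone lenses have far shorter physical focal lengths than the "35 mm equivalent" figure used here):

```python
def object_distance_mm(focal_length_mm: float, lens_to_sensor_mm: float) -> float:
    """Thin-lens equation 1/f = 1/u + 1/v, rearranged for the object
    distance u, given where the lens settled (image distance v)."""
    f, v = focal_length_mm, lens_to_sensor_mm
    return f * v / (v - f)  # blows up as v -> f, i.e. focus at infinity

# A 26 mm lens that settled 26.7 mm from the sensor:
print(object_distance_mm(26.0, 26.7) / 1000.0)  # ~0.99 (meters)
```

The distance falls out only after focus has been found by other means, which is the point above.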
 
Yes, it is easy for a 7-month-old baby, but not for current technology...
Let's look at the problem from a basic level.

Tesla Vision (TV) can see objects. TV can calculate distance to objects. TV has limited visibility due to the location of the cameras. So if you are approaching a parking curb and TV can see the curb (it's several feet in front of you when you start your parking maneuver), then when the curb gets closer to the car and the cameras can't pick it up anymore, the computer still knows exactly where it was when it last saw it, and it knows exactly how the car is moving. It can therefore estimate where the curb will be as you roll towards it.

And yes, that technology is here - it just needs to be coded for. And your cute reference to a 7 year old made me chuckle, so thanks for that. It reminded me of all the funny videos on AFHV of kids running around a playground and slamming into objects at eye level.
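A minimal sketch of that "knows exactly where it was when it last saw it" idea - plain dead reckoning under assumptions I'm making up for illustration (flat ground, perfect wheel odometry; this is not Tesla's code):

```python
import math

class RememberedObstacle:
    """Keep tracking an obstacle after it leaves the cameras' view:
    store its last-seen position in the car's frame, then update that
    position from the car's own motion (wheel odometry)."""

    def __init__(self, x_m: float, y_m: float):
        self.x, self.y = x_m, y_m  # car frame: x forward, y left, in meters

    def update(self, forward_m: float, dyaw_rad: float) -> None:
        """Apply one step of ego-motion to the stored point."""
        x = self.x - forward_m            # driving forward brings it closer
        c, s = math.cos(-dyaw_rad), math.sin(-dyaw_rad)
        self.x = c * x - s * self.y       # yawing rotates it around the car
        self.y = s * x + c * self.y

curb = RememberedObstacle(x_m=1.5, y_m=0.0)  # last seen 1.5 m ahead
for _ in range(10):                          # creep straight ahead, 10 cm steps
    curb.update(forward_m=0.10, dyaw_rad=0.0)
    print(f"estimated curb distance: {curb.x:.2f} m")  # counts down to 0.50
```

The accuracy question is how fast odometry error accumulates, but over a few feet of parking-lot creep it stays small.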
 
These two blokes talk like they are Tesla employees praising their employer... Not convincing.
Agree with you on everything, but his example about the parking curb made me curious. At the present time, I see no evidence that Tesla has object memory like humans do. We find something of interest and then monitor its shape and position to better understand the object's nature and direction of movement, or to predict its location relative to our trajectory. As far as I can tell, Tesla only analyzes what it sees at the present moment and forgets what it saw as soon as the object disappears from the cameras' view. I wonder if they are working on new software that will "remember" what it saw, to calculate an object's position relative to the car after it moves outside the cameras' view?
 
...I see no evidence that Tesla has object memory like humans do...
The occupancy network they discussed at AI Day has this capability (it’s also able to account for other occluded objects).
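As a toy illustration of just the persistence part of that capability (the system shown at AI Day is a learned 3-D network; this made-up 2-D grid only demonstrates the idea of keeping occluded cells marked occupied):

```python
import numpy as np

class OccupancyGrid:
    """Toy 2-D occupancy grid: cells seen as occupied stay marked occupied
    after the object is occluded, decaying slowly instead of vanishing."""

    def __init__(self, size: int = 100, decay: float = 0.99):
        self.grid = np.zeros((size, size))  # 0.0 = free, 1.0 = just observed
        self.decay = decay

    def observe_occupied(self, i: int, j: int) -> None:
        self.grid[i, j] = 1.0               # a camera currently sees something here

    def step(self) -> None:
        self.grid *= self.decay             # unseen cells fade, but only slowly

g = OccupancyGrid()
g.observe_occupied(50, 50)   # curb seen once...
for _ in range(30):
    g.step()                 # ...then occluded for 30 frames
print(g.grid[50, 50])        # ~0.74: still treated as likely occupied
```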
 
Let's look at the problem from a basic level.
The idea is already here, but the technology is not.

That is, the idea of how it could be done exists, but the technology to make that idea work does not yet.
...7 year old...
Not years! A 7-month-old baby will start to notice that a toy covered by a blanket is still there and will remove the blanket, demonstrating object permanence.
 
...I wonder if they are working on new software that will "remember" what it saw...
I think a lot of people discussing this whole topic are missing that Tesla is talking about technology they are rolling out in FSD Beta. I suggest reading about the occupancy networks mentioned in the announcement:

What you see generally right now with most Teslas without FSD Beta is not representative of what they plan to push out.