
Tesla replacing ultrasonic sensors with Tesla Vision

I think you are completely ignoring the point many are making in this thread, which is that Tesla should first fix the software, then remove the sensors. In addition, they need to add another camera on the front, since the occupancy network is not going to detect something left in front of the car that was not there the day before, e.g. a tricycle, if the camera can't see it.
That's not the point being discussed in the thread I was responding to. What is being discussed is whether Tesla has the ability to persist objects it has detected. The current (non FSD Beta) visualizations don't really do that (they even have trouble with objects that straddle two camera views), but the occupancy network does.

So that point is not an issue, and it'll be able to handle the most common use case for the ultrasonic sensors (which is pulling into a space with static objects all around, including curbs). I'm skeptical it can match the 1 inch depth resolution, but seeing the results, I don't doubt that, accuracy aside, it'll generally be able to handle that application.

As for objects that get under the car after its last memory, sure, it won't handle that. I agree a low mounted camera would be better for that (even better than the ultrasonics, which actually can't detect objects that are too low).
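To make "persisting objects" a bit more concrete, here is a rough sketch of the basic idea (purely my own illustration, not anything from Tesla's stack): remember where a static obstacle was in the car's own frame, then shift that memory by the car's ego-motion, so you can keep ranging on a curb even after it drops out of the cameras' view.

```python
import math

class ObstacleMemory:
    def __init__(self):
        self.points = []  # obstacle points (x, y) in the current vehicle frame, meters

    def add_detections(self, points):
        """Add freshly detected static obstacle points (already in the vehicle frame)."""
        self.points.extend(points)

    def apply_ego_motion(self, dx, dy, dyaw):
        """Re-express remembered points in the new vehicle frame after the car
        moves forward dx, sideways dy, and rotates by dyaw (radians)."""
        cos_y, sin_y = math.cos(-dyaw), math.sin(-dyaw)
        moved = []
        for x, y in self.points:
            tx, ty = x - dx, y - dy                      # translate to the new origin
            moved.append((tx * cos_y - ty * sin_y,       # then rotate into the new heading
                          tx * sin_y + ty * cos_y))
        self.points = moved

    def nearest_obstacle_ahead(self, max_lateral=1.0):
        """Distance (m) to the closest remembered obstacle directly ahead of the bumper."""
        ahead = [x for x, y in self.points if x > 0 and abs(y) < max_lateral]
        return min(ahead) if ahead else None

# Usage: see the curb on approach, then keep ranging on it purely from memory.
mem = ObstacleMemory()
mem.add_detections([(3.0, 0.2)])         # curb detected 3 m ahead, slightly to the right
mem.apply_ego_motion(2.5, 0.0, 0.0)      # car creeps forward 2.5 m; curb now below camera view
print(mem.nearest_obstacle_ahead())      # -> 0.5 (meters remaining to the curb)
```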
 
Looks like somebody updated the sensor coverage hero image at the top of the Autopilot page:
View attachment 861205

But… I don't think the image actually represents the camera visibility correctly. E.g., the repeater cameras' view should only face towards the back, though still angled out towards the sides for their 60° coverage.

And for reference, here's how it looked in 2016:
View attachment 861204
Also, do the pillar cams currently see that much behind the car? The bottom image seems more correct with respect to those. On the other hand, though, the bottom image shows repeater cams that see more to the side of the car than they do in reality...
 
Agree with you on everything, but the example about the parking curb made me curious. At present I see no evidence that Tesla has object memory like humans do. We find something of interest and then monitor its shape and position to better understand the object's nature and direction of movement, or to predict the object's location relative to our own trajectory. As far as I can tell, Tesla only analyses what it sees at the present moment and forgets what it saw as soon as the object disappears from the cameras' view. I wonder if they are working on new software that will "remember" what it saw, to calculate an object's position relative to the car after the object moves outside the cameras' view?

The occupancy network feature they are working on is supposed to help there. I went through the video that explains the occupancy networks. It is highly technical even for me (I've been in h/w and s/w design most of my life), but I got the basic concepts.

I’m not going to knock Tesla’s future tech before it is out, but I am skeptical about vision-only systems, with good reason. No redundancy.
 
That's not the point being discussed in the thread I was responding to. What is being discussed is whether Tesla has the ability to persist objects it has detected. The current (non FSD Beta) visualizations don't really do that (they even have trouble with objects that straddle two camera views), but the occupancy network does.
Sorry, went back and carefully reread the thread this time. You're right. My apologies.
So that point is not an issue, and it'll be able to handle the most common use case for the ultrasonic sensors (which is pulling into a space with static objects all around, including curbs). I'm skeptical it can match the 1 inch depth resolution, but seeing the results, I don't doubt that, accuracy aside, it'll generally be able to handle that application.
Agree.
As for objects that get under the car after its last memory, sure, it won't handle that. I agree a low mounted camera would be better for that (even better than the ultrasonics, which actually can't detect objects that are too low).
Agree, and this is the one that I think most people are concerned about. That and the reality that the new occupancy stuff will most likely not be rolled out by the time customers start receiving cars without sensors, or even before they disable sensors in current cars.
 
Lol what did you expect the outcome would be? As if you’re going to pay an attorney hundreds or thousands of dollars over parking sensors
If you want to sue for $250, you don't need hundreds or thousands of dollars in attorney fees. Just file it yourself in small claims court. Tesla won't show, you'll win a judgment by default, send it to Tesla, and they'll pay it. You're just out the court costs.
 
Oh, I get it now. It took me a sec to understand what you were describing. Now that the ultrasonics have been removed and not replaced with any new software as of this morning, your scenario makes more sense.
Exactly, and most likely not with the software updates over the next year either.

If Tesla actually delivers an update that provides equivalent functionality before customers receive their first cars without sensors, I for one would be very pleasantly surprised.
 
Exactly, and most likely not with the software updates over the next year either.

If Tesla actually delivers an update that provides equivalent functionality before customers receive their first cars without sensors, I for one would be very pleasantly surprised.
Let's touch base in a year and see where we are. Based on FSD Beta updates over the last 12 months, this could get interesting.
 
Let's touch base in a year and see where we are. Based on FSD Beta updates over the last 12 months, this could get interesting.
Yeah, it's been more than a year since the first Model S cars without key pieces of hardware (i.e. stalks, center horn, etc.) were delivered. More than a year later there are still issues with the "software updates" that would make it all better.

Some examples:
- Press the indicator to change lanes and then turn; however, it automatically turns off after the lane change, so I need to press it again to re-activate it for the turn. The car knows where I am going, since I have a route.
- The center horn does not work. Apparently my car has the hardware for it, just no software update after many months.
- No gear shift, because the car just knows where I want to go. All going to be fixed with software. Yet my car regularly wants to drive forward straight into a wall, even when the sensors are showing bright red.
- Auto wipers don't work well at night in light rain, and there's no way to set a regular interval due to the lack of a stalk. So I need to press the button all the time.
- Following distance with AP. The car does not increase the distance when it's raining. I used to adjust it easily with the stalk; now it is buried in a menu. They could just increase the distance whenever the wipers are running (a quick sketch of that follows below).

These examples should all be relatively simple software fixes compared to what we are talking about here, yet more than a year later they are still broken.
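Just to show how small that last one really is, here is a toy sketch of the rule being described (my own illustration, obviously not Tesla firmware; the 1 to 7 setting range and the amount of extra margin are just assumptions):

```python
# Toy sketch, not Tesla firmware: pad the Autopilot follow distance whenever
# the wipers report that they are running, instead of burying it in a menu.
def effective_following_distance(base_setting: int, wipers_on: bool,
                                 max_setting: int = 7) -> int:
    """Return the follow-distance setting to actually use.

    base_setting: the driver's chosen setting (assumed 1 to 7 "car lengths")
    wipers_on:    True while the auto wipers are actively wiping
    """
    rain_padding = 2 if wipers_on else 0     # assumed extra margin in the rain
    return min(base_setting + rain_padding, max_setting)

print(effective_following_distance(3, wipers_on=False))  # 3
print(effective_following_distance(3, wipers_on=True))   # 5
```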
 
An excellent edge case. Let's work on getting the car to park in 98% of boring parking scenarios that everyone on the planet deals with every day. Then we'll work on not running over chickens.
They call it “Full Self Driving” not “98% Self Driving”. Understanding the difference is their main challenge.
The system does not reliably identify those edge cases, let alone react safely to them. Case in point: phantom braking.
 
They call it “Full Self Driving” not “98% Self Driving”. Understanding the difference is their main challenge.
The system does not reliably identify those edge cases, let alone react safely to them. Case in point: phantom braking.
One might say that, faced with ambiguity, phantom braking is the safer alternative to colliding.
 
Ah, I see where you're coming from now. Got it.
I have done quite a bit of h/w design verification. 80% coverage of a design is fairly easy to achieve, 90% is harder, and the last 5% is the hardest.

My issue with every ADAS is that the sensors are exposed to dirt, grime, mud, snow, and rain. They aren't reliable unless there are cleaning systems. That market (cleaning systems for sensors) is expected to be worth nearly $1B by 2023, according to some estimates. I hope Tesla implements such a system, especially with their vision-only approach.

Next, real-world conditions present a nearly infinite verification space. It is not possible to test for all conditions. When accidents occur, the blame will fall on Tesla. MBZ recently went bold and is said to accept liability in cases where their system fails. I don't ever see Tesla making similarly bold claims.

I was reading the occupancy network educational material someone helpfully posted here. It starts with determining whether a pixel is occupied or not. Now, let us say there is a balloon lying on the road. The pixels are occupied. It now needs to represent them as the closest learned object it has in memory. Let's say it chooses a basketball. That's an obstacle you can't possibly want to run over. It brakes really hard, and gets rear-ended.
A human would not have done that.

There will be a myriad of such edge cases, which is why FSD will be in beta forever.
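To make that failure mode concrete, here is a deliberately silly sketch of the two stages being described (my own toy illustration, not Tesla's pipeline; the class list, sizes, and reactions are all made up):

```python
# Toy illustration: an occupancy stage marks space as occupied, then a
# classifier snaps the blob to the closest class it knows. A balloon and a
# basketball produce nearly identical blobs, so the "safe" reaction
# (hard braking) fires for a harmless object.
def classify_blob(diameter_m: float, known_classes: dict) -> str:
    """Snap an occupied blob to whichever learned class best matches its size."""
    return min(known_classes, key=lambda c: abs(known_classes[c] - diameter_m))

def plan_reaction(label: str) -> str:
    """Brake hard for anything labelled as a solid obstacle, ignore the rest."""
    hard_obstacles = {"basketball", "tire", "rock"}
    return "brake hard" if label in hard_obstacles else "ignore / coast"

# Made-up typical diameters (m) for a few classes the model might have learned
KNOWN = {"basketball": 0.24, "tire": 0.65, "plastic bag": 0.30}

blob_diameter = 0.25                      # a balloon roughly the size of a basketball
label = classify_blob(blob_diameter, KNOWN)
print(label, "->", plan_reaction(label))  # basketball -> brake hard
```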
 
I was reading the occupancy network educational material someone helpfully posted here. It starts with determining whether a pixel is occupied or not. Now, let us say there is a balloon lying on the road. The pixels are occupied. It now needs to represent them as the closest learned object it has in memory. Let's say it chooses a basketball. That's an obstacle you can't possibly want to run over. It brakes really hard, and gets rear-ended.
A human would not have done that.

There will be a myriad of such edge cases, which is why FSD will be in beta forever.
I have seen humans swerve and brake for plastic bags. So they are not all that great either. I think many humans are also still in Beta :)
 
Now, let us say there is a balloon lying on the road. The pixels are occupied. It now needs to represent them as the closest learned object it has in memory. Let's say it chooses a basketball. That's an obstacle you can't possibly want to run over. It brakes really hard, and gets rear-ended.
A human would not have done that.
Is that the same human that just rear-ended a car?

In a world of engineer-created situations, nothing ever works right.
Basketball Balloon, 18in
 