
[uk] UltraSonic Sensors removal/TV replacement performance

If it were that important perhaps you shouldn't have taken delivery of the car? (for what it's worth I think it's a ridiculous state of affairs, but people who accepted the car regardless are not blameless).

Imagine buying a £50k+ car and then hacking parking sensors into it like it's a car from the late 90s. :(
Absolutely my fault for blindly expecting a company like Tesla, charging £58k for a car, not to remove a feature without first providing a substitute. I didn’t want to back my car into another car or, worse, into somebody, so I’m OK with hacking my £58k (now £53k) car.
 
  • Like
Reactions: CWT3LR and Durzel
200-225 miles I need to pee anyway.
Next question
How about a statement: if you have battery (I mean bladder) degradation, or only charged to 80% and there is no bathroom when you need it, or it’s cold, you might be all wet. I have rarely, if ever, got 225 miles of highway use at 75 mph in my 2018 LR Dual Motor. I think 165 is more of a usable range, since the manufacturer recommends 20-80 or 20-90 (my max charge has now degraded to 270 miles). Cold or loaded is much worse. I could not get more than 70 miles towing my golf cart on a U-Haul trailer.
I have received the "must drive under 65, 55, 45" warning to reach my destination many times. But the acceleration is amazing.
 
The rear-pointing side cameras still have a clear view, as will the cameras in the windscreen. I suspect the way Tesla will implement the vision-based parking system is by using all of the cameras to continuously generate a 3D map of the area the car is driving through (when driving at slow speed). It's a technique called Simultaneous Localisation And Mapping (SLAM), which is widely used in robotics (some robot lawnmowers and vacuums already use it). Hopefully Tesla will display on the screen a 3D view of the information it has gathered, showing where it thinks the car is in relation to the surroundings and obstacles. This method doesn't have to rely on the boot-mounted camera (though it could use data from it if the view were clear), nor does it need a camera in the front bumper.
This approach will only work when there is enough light reflected back from the scene, and when the items in the scene have enough texture and contrast to be seen. In many ways it should be superior to a USS system, but there are bound to be edge cases where it falls over.
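To make that concrete, here's a rough Python sketch (all names, grid sizes and numbers are mine, not Tesla's) of the core idea: obstacle points derived from the cameras get transformed into a fixed world frame using the car's motion estimate and accumulated in a grid, so an obstacle persists in the map even after it drops out of every camera's view.

```python
# Minimal sketch: fusing camera-derived obstacle points into a persistent
# 2D occupancy grid as the car moves. Assumes some upstream process yields
# (x, y) obstacle points in the car's frame plus an ego-pose estimate.
import numpy as np

GRID_SIZE = 200          # 200 x 200 cells
CELL_M = 0.1             # 10 cm per cell -> a 20 m x 20 m map
grid = np.zeros((GRID_SIZE, GRID_SIZE))   # accumulated occupancy evidence

def world_to_cell(x, y):
    """Map world coordinates (metres, origin at map centre) to grid indices."""
    return int(GRID_SIZE / 2 + y / CELL_M), int(GRID_SIZE / 2 + x / CELL_M)

def integrate_observation(points_car_frame, ego_x, ego_y, ego_yaw):
    """Transform obstacle points from the car frame into the world frame
    and bump the occupancy evidence of the cells they land in."""
    c, s = np.cos(ego_yaw), np.sin(ego_yaw)
    for px, py in points_car_frame:
        wx = ego_x + c * px - s * py
        wy = ego_y + s * px + c * py
        i, j = world_to_cell(wx, wy)
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            grid[i, j] += 1.0

# A kerb seen 2 m ahead stays in the map even after the car creeps
# forward and the kerb drops out of every camera's view:
integrate_observation([(2.0, 0.0)], ego_x=0.0, ego_y=0.0, ego_yaw=0.0)
integrate_observation([], ego_x=1.5, ego_y=0.0, ego_yaw=0.0)  # kerb now unseen
print(grid.max() > 0)   # True: the obstacle is still remembered
```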

SLAM just refers to the broader field. You can say autonomous driving is an application of SLAM, so Tesla and other manufacturers are already doing SLAM. Other applications include robot hoovers/mowers, drones, AR, robotics, etc.

SLAM in general is using all kinds of sensors - radar, sonar, lidar, USS, laser rangefinding, cameras... What Tesla is doing with Vision could be said to come under Visual SLAM (VSLAM) that uses only cameras. Note VSLAM is pursued for cheapness, not because it is in any way better or easier than SLAM with other/multiple sensor types. And even within VSLAM people are playing with stereo cameras and RGB-D cameras (cameras that also sense depth based on time-of-flight measurement) where Tesla are not.
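For anyone wondering what time-of-flight buys you: the sensor gets range directly from the echo delay, with no image matching needed. A toy sketch, using textbook constants rather than any specific sensor:

```python
# Toy sketch of why "active" sensing is the easy route to range data:
# a time-of-flight sensor measures distance directly from echo delay.
SPEED_OF_LIGHT = 299_792_458.0    # m/s (light, for lidar/RGB-D)
SPEED_OF_SOUND = 343.0            # m/s (air at ~20 C, for ultrasonics)

def tof_range_m(echo_delay_s, wave_speed):
    return wave_speed * echo_delay_s / 2   # out and back, so halve it

print(tof_range_m(13.3e-9, SPEED_OF_LIGHT))   # ~2.0 m from a light pulse
print(tof_range_m(11.7e-3, SPEED_OF_SOUND))   # ~2.0 m from a USS ping
```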

So again it all boils down to can Tesla really get all this working with only simple cameras? No active sensing that gives range data - like lidar, radar, RGB-D, USS, etc. Just trying to figure it all out from 2D camera frames. I'll believe it when I see it! Just seems to me they are avoiding one problem (sensor fusion) by giving themselves an even bigger problem (autonomous driving with only cameras)!
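To be fair, the fusion maths itself is the easy bit. A toy inverse-variance update (the core of a Kalman filter step, with invented numbers) looks like this; the real difficulty is deciding what to trust when the sensors flatly disagree:

```python
# Toy sketch of one sensor-fusion step: combine two noisy range estimates
# (say radar and vision) by inverse-variance weighting.
def fuse(range_a, var_a, range_b, var_b):
    """Return the fused estimate and its (smaller) variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * range_a + w_b * range_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Radar says 2.0 m (5 cm sigma); vision says 2.4 m (30 cm sigma):
est, var = fuse(2.0, 0.05**2, 2.4, 0.30**2)
print(f"fused: {est:.2f} m, sigma {var**0.5:.3f} m")   # ~2.01 m, ~0.049 m
# The maths is trivial; the hard part is what to do when the two
# sensors disagree outright -- which is what drove the camera-only bet.
```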

No, it doesn't. USS has no memory, so it can't tell if something is moving into the path of the car, and it has a very short range.

Cameras don't have memory either! It's all in the implementation. All this talk of persistence and the occupancy network is not exclusive to Tesla Vision; it can just as easily, and arguably much more precisely, be done with other sensory inputs. So I don't see it as some slam-dunk justification for ditching the USS and going vision-only, nor anything to be super optimistic about. You could make a persistent occupancy network from USS data if you really wanted to; the reality is the basic parking functions never needed it, as they simply put appropriate sensors in appropriate positions for the task!
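To show that's not hand-waving, here's a hypothetical sketch of giving USS "memory": log each echo against the car's odometry and the resulting points outlive the readings themselves. Sensor layout and numbers are invented for illustration.

```python
# Hypothetical sketch of "persistent" USS: each range reading is projected
# into the world frame using the sensor's mounting pose plus the car's
# odometry, then kept in a list that outlives the echo itself.
import math

obstacle_memory = []   # world-frame (x, y) points seen by any USS

def log_uss_reading(range_m, sensor_offset_m, sensor_yaw,
                    ego_x, ego_y, ego_yaw):
    """Project one USS echo into the world frame and remember it."""
    heading = ego_yaw + sensor_yaw
    sx = ego_x + sensor_offset_m * math.cos(ego_yaw)
    sy = ego_y + sensor_offset_m * math.sin(ego_yaw)
    obstacle_memory.append((sx + range_m * math.cos(heading),
                            sy + range_m * math.sin(heading)))

# A front-centre sensor 2 m ahead of the rear axle sees something 0.5 m
# away; the point stays in memory after the car moves and the echo stops.
log_uss_reading(0.5, sensor_offset_m=2.0, sensor_yaw=0.0,
                ego_x=0.0, ego_y=0.0, ego_yaw=0.0)
print(obstacle_memory)   # [(2.5, 0.0)]
```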
 
Not really. They already have a map of the (immediate) environment, part of which is already displayed on our screens. It doesn’t need any additional hardware or inputs that aren’t already in the car.

Don’t know, I’m not a software developer but I can see how this approach may work.
I suggest that is the issue: Tesla’s approach MAY work (future-oriented). USS work NOW. I know, I know, repetitive to this thread. Sorry.
 
So again it all boils down to can Tesla really get all this working with only simple cameras? No active sensing that gives range data - like lidar, radar, RGB-D, USS, etc. Just trying to figure it all out from 2D camera frames. I'll believe it when I see it! Just seems to me they are avoiding one problem (sensor fusion) by giving themselves an even bigger problem (autonomous driving with only cameras)!
I really like your reply; I can see why you are not very confident about the system being able to map a 3D scene from 2D camera views. I am hoping they can use something similar to the technology that Oculus/Meta use for VSLAM on the Quest headset, stitching together four low-res black-and-white camera views into a geometrically and depth-correct 3D view to track the headset and controllers in 3D space and alert users to objects within their play area. I have a M3 that doesn't have USS and am very pissed off about it, but at the time of delivery I had waited 6 months, so I didn't want to turn it down. Come on Tesla, get this issue sorted.
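The stitching idea is less exotic than it sounds: each camera's observations are just transformed into one shared body frame via that camera's known mounting pose (its extrinsics). A toy 2D sketch with invented poses:

```python
# Toy sketch of multi-camera "stitching": points seen in each camera's own
# frame are mapped into a single body frame using that camera's mounting
# pose. The extrinsics below are made up; real ones come from calibration.
import numpy as np

def make_extrinsic(yaw_rad, tx, ty):
    """2D rigid transform (rotation about z plus translation), 3x3 matrix."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# Two hypothetical cameras: one facing forward, one facing 90 degrees right.
cams = {
    "front": make_extrinsic(0.0,        1.9,  0.0),
    "right": make_extrinsic(-np.pi / 2, 0.9, -0.8),
}

def to_body_frame(cam_name, x, y):
    """Point (x, y) in a camera's frame -> point in the shared body frame."""
    return (cams[cam_name] @ np.array([x, y, 1.0]))[:2]

# A point 1 m ahead of the front camera lands at (2.9, 0.0) in the body
# frame; one 1.2 m ahead of the right-facing camera lands at (0.9, -2.0).
print(to_body_frame("front", 1.0, 0.0))
print(to_body_frame("right", 1.2, 0.0))
```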
 
No, it doesn't. USS has no memory, so it can't tell if something is moving into the path of the car, and it has a very short range.
Cameras don't have memory either; the computers the cameras are connected to do. There's no reason USS couldn't have a memory if the software were written for it. No one's done it, though, as it's not really needed.

I do hope Tesla can solve this, and I think their direction is sound: remove as many little components, chips and so on as they can, where it's possible to replace them with software. It's a great long-term plan for lowering the cost of the cars or increasing their margin. It does, however, suck to live through this process when they don't make it seamless at all.
 
  • Like
Reactions: pow216 and My2cents
I admire the ambition, but very strongly doubt the execution will be good enough - based on experience of auto headlights (only just recently made acceptable) and auto wipers (highly variable performance based on conditions).

Tesla just don't have the track record when it comes to removing proven tech and replacing it with their own implementations; it's as simple as that.

Whilst I'm aware that some Stanford academics have been working on enabling simple cameras to produce depth maps (i.e. visual processing in 3D), this research is still in its infancy and, from what I've read at least, based on lab conditions, i.e. perfect light and cameras that aren't obscured by water, dirt, etc. People have posted photos of their rear camera view and it's almost a complete blur, yet people think somehow the car will be able to discern objects and their distances well enough to replicate USS...
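For what it's worth, camera depth also degrades quickly with distance even before dirt and blur enter the picture. A toy depth-from-motion sketch (assumed focal length and baseline; real numbers will differ) showing what a single pixel of feature-matching error does:

```python
# Toy sketch: "depth from motion" treats two frames taken a known distance
# apart (from odometry) like a stereo pair: depth = focal * baseline /
# disparity. A 1-pixel matching error barely matters up close but blows
# up with range, because depth error grows quadratically with distance.
FOCAL_PX = 800      # assumed focal length in pixels
BASELINE_M = 0.30   # car moved 30 cm between the two frames

def depth_from_motion(disparity_px):
    return FOCAL_PX * BASELINE_M / disparity_px

for true_depth in (2.0, 10.0, 30.0):
    ideal_disparity = FOCAL_PX * BASELINE_M / true_depth
    est = depth_from_motion(ideal_disparity - 1.0)   # 1 px matching error
    print(f"true {true_depth:4.1f} m -> estimated {est:5.2f} m")
# true  2.0 m -> estimated  2.02 m
# true 10.0 m -> estimated 10.43 m
# true 30.0 m -> estimated 34.29 m
```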
 
I think their direction is sound: remove as many little components, chips and so on as they can, where it's possible to replace them with software.

As an engineering exercise I totally agree. As a game being played on active vehicle orders and newly delivered cars without normal, established functionality being maintained, it's not acceptable imo.

With regards to the current direction, let's suppose HW4 actually does have new camera locations and heaters for b-pillar cam windows. This would seem to acknowledge deficiencies in current designs that need to be addressed to facilitate a heavier reliance on vision.

Can it be reasonable to create 'orphan-to-be' vehicles today on the basis that some new future-spec vehicle will no longer need certain sensors like USS?
 
I had some time to kill today, so I wandered into Tesla Gatwick and asked them when the functionality previously associated with USSs would be restored to recently delivered sensorless cars.

They 'expect' an OTA update to be available in about '4 weeks time'.

OK.....cue laughter, derision, incredulity.....carry on!
They know nothing
 
  • Like
Reactions: Ratch and Whyone
As an engineering exercise I totally agree. As a game being played on active vehicle orders and newly delivered cars without normal, established functionality being maintained, it's not acceptable imo.
Fully agree. The approach of cutting parts out makes sense, but I don't agree with doing it before a replacement is ready. Though if it was forced by a parts shortage, and slowing production is expensive, it might have been the best of a bunch of bad options.

I'm not expecting this to work as well as USS when it arrives. I'm not even convinced it'll come soon, as I'm guessing they might not have been working on it upfront. It's possible they had USS shipment issues, then decided they might be able to do it with cameras, so went that route.