Welcome to Tesla Motors Club

Emperor's new clothes

Supervision was a bad choice by DF but Mobileye certainly believe (or believed) vision only is ok

From their website:

From the outset, Mobileye’s philosophy has been that if a human can drive a car based on vision alone – so can a computer.

Theory is different from practice. In practice, Mobileye has produced two versions:

a Level 4 robotaxi version with cameras, radar, and lidar (left side of the picture below), and a driver-assist version with cameras only, no radar or lidar (right side):

[Image: Mobileye sensor suites, robotaxi (left) vs. driver assist (right)]
 
....Given there are only 3 or 4 mega players in this space (Tesla, Mobileye, Waymo being the names we all know), to say "almost every" is a little disingenuous as the exception could be argued as being the company insisting on LIDAR and other technology...

There are many autonomous-vehicle companies:


Tesla is not on that list, because Tesla has admitted to the California Department of Motor Vehicles that, contrary to what the public believes, its FSD system is Level 2 and not autonomous, even in its final software release:

 
...The fire truck accidents happened WITH forward-facing radar; the vision-only approach came much later, but the article suggests that it's vision only that is the cause....

Tesla's refusal to work on sensor fusion including Lidar is the problem.

@flyhighboi20 was on pure-vision FSD beta in the dark and hit a plastic garbage can in the middle of the road. This could have been avoided if Tesla had sensor-fusion capability with lidar:

[GIF: FSD beta hitting a garbage can at night]
 
Other than the present version of cruise control, unless and until I can hand over 100% to an autopilot and watch a video or read a book, I'm not interested.
Either I'm 100% in control, or the computer is. I see no value in anything less.

Tesla would do better making everything they offer work better than every other car manufacturer's, everywhere, rather than promising the impossible. At the moment many of the peripheral functions, like YouTube and TuneIn, don't function unless there is a good, strong 5G mobile-phone signal. Out here in South Wiltshire there are areas of 3G where anything web-based is pointless.
 
unless it was raining of course.....

LIDAR can work in the rain, but its range is shortened, which means the car has to slow down or stop as needed.


That's why sensor fusion is a must. If one doesn't work then another different one can compensate for it.
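The fallback idea in the two paragraphs above can be sketched in a few lines. This is a purely illustrative toy, not any real autonomous-driving stack: the sensor names, ranges, and weather factors are made-up numbers. Each sensor reports how far it can currently be trusted, fusion takes the best surviving range, and the planner caps speed so the car can stop within that range.

```python
import math

# Made-up clear-weather trusted ranges per sensor, in metres.
SENSOR_CLEAR_RANGE_M = {"camera": 150.0, "radar": 160.0, "lidar": 200.0}

# Made-up degradation factors per weather condition.
WEATHER_FACTOR = {
    "clear": {"camera": 1.0, "radar": 1.0, "lidar": 1.0},
    "rain":  {"camera": 0.4, "radar": 0.9, "lidar": 0.5},  # lidar range shortened in rain
}

def effective_range(active_sensors, weather):
    """Fusion fallback: trust the best range among the sensors still working."""
    ranges = [SENSOR_CLEAR_RANGE_M[s] * WEATHER_FACTOR[weather][s]
              for s in active_sensors]
    return max(ranges) if ranges else 0.0

def max_safe_speed_ms(sensing_range_m, decel_ms2=5.0, reaction_s=0.5):
    """Largest v (m/s) with v*reaction + v**2 / (2*decel) <= sensing range."""
    a, t = decel_ms2, reaction_s
    # Solve v**2 + 2*a*t*v - 2*a*range = 0 for the non-negative root.
    v = -a * t + math.sqrt((a * t) ** 2 + 2 * a * sensing_range_m)
    return max(v, 0.0)
```

With these made-up numbers, a car that loses its camera in rain still has the radar's 144 m to work with, whereas a lidar-only car in rain drops to 100 m and must slow accordingly; that is the compensation argument in miniature.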


But first things first: Tesla's system needs the capability to avoid obstacles in good weather. Tesla Autopilot fatalities have been happening in broad daylight and good weather.
 
At the moment many of the peripheral functions, like YouTube and TuneIn don't function unless there is a good, strong 5G mobile phone signal. Out here in South Wiltshire there are areas of 3G where anything Web based is pointless.
That's not the case. The Tesla doesn't support 5G and I have watched plenty of YouTube on the screen! I can only think you meant to say 4G ... but in my experience it doesn't need an above-average LTE (4G) signal, though obviously 3G is pretty much a waste of time on any device these days.
 
I didn’t have much in the way of expectations concerning AP/FSD when I bought my car back in 2019. That was just as well since it became pretty obvious in about the first 10 mins of using it that apart from basic lane assist on motorways and similar roads, it was pretty useless. Engaging it on most other UK roads certainly provided a novelty factor and entertainment, but obviously no one would put any trust in it.

Despite Elon’s bragging, and carefully constructed media events (including the big FSD conference where he rolled out his chief hardware and software engineers), I came to the opinion that vision-only autonomy was always going to be a tall order. Even if the complexities of real world driving scenarios could ever be processed by the hardware and software of the “FSD computer” (in particular getting sufficient coverage of the edge cases in the NN), the challenge of doing it based on a handful of cheap webcams, exposed to the elements, seemed a basic flaw in the whole approach.

Nothing since has changed my view. Current AP has got worse over the 45 updates my car has had - based simply on the frequency of phantom braking events - and whilst all the videos of the rewrite look promising, every one I’ve seen has had failures/dropouts at some point, and they’ve all been done in decent weather conditions. I’m also pretty sure there’s a lot of US-specific road and driving behaviour assumptions in there that will need a whole load of work to adapt it for any other country it’s rolled out to.

I’d like to see Tesla succeed, but I’m completely convinced that it’ll take another generation of hardware and probably another 5 years before I can buy something that gets even close to me letting the car get on with the driving.
 
As a recent owner of a Model 3, I have to say that it's a good job Tesla priced the FSD package so expensively. I might have been tempted, then really disappointed by it. As it is, I have the basic lane keeping system and I'm happy with it (apart from the nagging about applying torque to the wheel every 20 seconds).

About the sensors, or lack thereof: I cannot understand Elon's reasoning behind his stated goal of "vision only" self drive capabilities. There is only one system capable of this that I know of, and that's the human brain (although the brain has access to the other four senses when driving). Does he think he will be able to emulate that in any meaningful timespan?

Put another way: We are buying a thing to serve a purpose. Why is he limiting the effectiveness of that thing by excluding sensors that improve the functionality? (I'm thinking auto wipers here, but any other feature that is degraded by the "vision only" approach applies, including FSD.)
 
What I found out yesterday is that it seems beta will try to drive you into the side of a moving train. I disengaged and came to a stop at a rural rail crossing (the car seems unable to see far enough ahead and will run the stop sign: it's unmapped, and if you are doing the speed limit the car doesn't see it and react until you are almost at it). Then, after I stopped (no cars or crossing gate in front of me), I decided to engage beta again to see what it would do. It crept up to the stop line, showing a couple of box trucks going past on the screen in place of the train, but after it crept up it stopped showing the trucks and then decided to go. I had my foot over the brake waiting to see what it would do, so I hit the brake. Obviously I didn't let it get super close to the train, but it showed no sign that it recognized anything being there, or that it was going to do anything other than accelerate normally.

I think I remember Elon mentioning something about needing to make the system understand that something is there even if it doesn't know exactly what it is. I just feel like there are many disconnects in the logic of how one thing talks to another in the car. Like the videos where the car tries to drive directly into a curb or a wall, and the person's excuse is that it's in a parking lot and parking lots aren't supported yet ... well, the car showed the curb on the screen, so it knew it was there. Parking lot or not, don't drive directly into a curb or a wall. As I have said in other posts, I fully understand this is a testing thing and the problems they are trying to solve are insanely difficult, and I have no idea how they are going to solve all this stuff in any kind of near-ish timeframe.

I think the current camera setup is not enough, between not being able to see and react far enough in advance for stuff not on the map, and having to do the weird creeping and turning behavior, which I assume is an attempt to see far enough down the road. In my specific area beta is terrible. I run it for the first couple of days after an update to send some data back; after that, if I use it at all, I disengage it at most intersections if other cars are present, at all the rail crossings because it won't stop in time, and going through town because it weaves between the lane and the parking spots and acts drunk. I guess one upside is that I don't feel anxious about new updates anymore: I used to be excited and keep checking my car; now I am still on 10.5 and don't care, lol. But I do get this feeling when I read release notes saying they improved this or that by 1.5%: at least in my area, in my experience, we are so far away from worrying about a couple of percent improvement. They seem to feel they are in the fine-tuning phase, and I feel like around here we are at the let's-make-it-stop-at-stop-signs phase.
 
Thanks for this. So the big rewrite isn’t a silver bullet either? That’s pretty much what I had concluded from the odd video that finds its way over here. Long way to go yet, I think…