Welcome to Tesla Motors Club

Poll: 81% of Prospective Model 3 Owners Say They Won’t Pay Upfront For Full Self-Driving

It seems most prospective Model 3 owners aren’t willing to shell out cash upfront for a $3,000 “full self-driving capability” option that is likely years away from becoming available to engage.

In a poll posted by jsraw, 81.3% (347) of respondents said they will not pay for the feature at purchase; adding the option later will cost an additional $1,000. The remaining 18.7% of respondents said they will pay for FSD upfront.

According to Tesla’s website, FSD “doubles the number of active cameras from four to eight, enabling full self-driving in almost all circumstances, at what we believe will be a probability of safety at least twice as good as the average human driver. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat. For Superchargers that have automatic charge connection enabled, you will not even need to plug in your vehicle.”

Elon Musk has said that Level 5 autonomous driving, meaning the car is fully autonomous in any and all conditions, is possible with second-generation Autopilot hardware and the FSD option. During his TED Talk in April, Musk said the company plans to conduct, by the end of 2017, a coast-to-coast demo drive from California to New York without the driver touching the wheel.

Of course, there will be regulatory hurdles ahead, and Musk has said it will likely be two years before owners will be able to engage FSD capability.

See a few comments on the poll below, or go to the thread here.

[Screenshots of poll comments from Swift, EinSV, jason1466, Waiting4M3, and Enginerd]

 
Pedestrians/potholes/debris are typically short-lived, so this only matters if you have a lot of Teslas in the vicinity that can take advantage of the information right at that moment. (If the car does remember such events, on the other hand, that's going to be a separate can of worms, in the form of "why is my car always slow at this spot?")

I disagree. If the car detects a pedestrian and the driver hits said pedestrian while the software would have avoided it, that's a very valid data point. If the driver missed the pedestrian and the software would have hit them, that's also a very valid data point. It doesn't need to remember the exact spot; the goal is to avoid hitting pedestrians, other cars, etc.

I suspect lots of people don't properly disengage AP before making some maneuver that disables it anyway (I certainly often don't), so sifting through this data in search of gems would be hard. If you don't even require EAP/TACC to be activated, it's going to be an even bigger crapshoot: "oh no, there's a guy in a Mustang overtaking me, can't allow that, full speed ahead!", "when you approach a yellow light, accelerate", ...
Disengaging or engaging AP is irrelevant if the system is running in shadow mode at all times in all vehicles (whether you purchased EAP, FSD, or nothing at all).
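The shadow-mode idea being discussed could be sketched roughly like this; every name, field, and threshold below is illustrative, not Tesla's actual implementation:

```python
# Sketch of "shadow mode" (hypothetical names throughout): the planner
# computes a control decision on every frame regardless of whether AP
# is engaged, but never actuates it; only disagreements are logged.

def shadow_mode_step(sensor_frame, human_inputs, planner, log, tolerance_deg=5.0):
    """Compare the human's actual steering with what the software would
    have done, without ever taking control of the car."""
    proposed = planner(sensor_frame)  # what the computer would have done
    divergence = abs(human_inputs["steering_deg"] - proposed["steering_deg"])
    if divergence > tolerance_deg:    # log only meaningful disagreement
        log.append({"human": human_inputs,
                    "computer": proposed,
                    "divergence_deg": divergence})
    return human_inputs               # the human's inputs always win

# Toy planner that always suggests driving straight ahead
planner = lambda frame: {"steering_deg": 0.0}
log = []
shadow_mode_step({}, {"steering_deg": 12.0}, planner, log)  # logged
shadow_mode_step({}, {"steering_deg": 1.0}, planner, log)   # normal drift, ignored
```

The key property is the last line of the function: the human's inputs always drive the car, and the shadow system only ever records divergence.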
 
The funny thing is FSD is harder to accomplish because the computer has to learn to anticipate human behavior. If all the cars on the road were FSD and communicated with each other, then FSD would be much easier to accomplish.
To some degree there's also physics. If you have a two-ton vehicle coming down a cross road at high speed, there's no need to try to guess the human's behavior; it's probably safer to wait until they stop or are clear. If the system assumed the human would stop when there are no signs of slowing down, that would be a mistake.

In some of the Waymo demos they've shown the vehicle waiting at a green light because it detected a car it "felt" was going to run the light, and in fact it did. There was another example with three cars going the wrong way down a one-way street.

This has been posted several times, but I still love this talk:
How a driverless car sees the road
(And this was early 2015)

Here's another from 2011 to compare the progress.
Google's driverless car
 
To some degree there's also physics. If you have a two-ton vehicle coming down a cross road at high speed, there's no need to try to guess the human's behavior; it's probably safer to wait until they stop or are clear. If the system assumed the human would stop when there are no signs of slowing down, that would be a mistake.

In some of the Waymo demos they've shown the vehicle waiting at a green light because it detected a car it "felt" was going to run the light, and in fact it did. There was another example with three cars going the wrong way down a one-way street.

This has been posted several times, but I still love this talk:
How a driverless car sees the road
(And this was early 2015)
What I really hope happens once FSD cars are on the street is that vehicles will send information to each other. Imagine your car getting data not only from its own cameras and sensors but also from the car in front of it, and the car in front of that one, and knowing exactly what all the cars in the vicinity will be doing. That kind of information sharing should make driving much safer.
 
I'm positive Google is using the information they are collecting from Waze and Google Maps in their testing. Mobileye joined up with them recently to do testing in Arizona. I'm not sure how confident Google is in their technology but they have been testing for a very long time. Since they are not a car company they have no reason to rush it to market. I think eventually they will just license their technology to other car companies who don't want to spend on R&D for self driving technology.
There's a bit of a misunderstanding here: I am not talking about Google's cars.
There's this Waze app where you can report accidents and such, frequently at personal risk, since you're supposed to be paying attention while driving.
If the vision technology were so great (and cellphones nowadays are quite the GPU powerhouses), why isn't there a mode for Waze where you just drop the phone into a dashcam-like holder and it analyzes and submits the data to Google automatically?
Throw in some rewards (see Google Local Guides and such for examples) and people would do this in large numbers, quickly overshadowing whatever small number of somewhat smart cars people own.
Yet we see nothing of the sort.

I disagree. If the car detects a pedestrian and the driver hits said pedestrian while the software would have avoided it, that's a very valid data point. If the driver missed the pedestrian and the software would have hit them, that's also a very valid data point. It doesn't need to remember the exact spot; the goal is to avoid hitting pedestrians, other cars, etc.

Disengaging or engaging AP is irrelevant if the system is running in shadow mode at all times in all vehicles (whether you purchased EAP, FSD, or nothing at all).
So, basically, post-mortem analysis of accidents to see how to make the in-car software better?
I guess this could be done now, with accident reports being sent to the mothership including everything from the preceding 10 seconds or so. The number of accidents is relatively small, so you can employ real people to look at every one of them, make corrections, and add the most important scenarios to the regression test suite.

But consider this data point: back at the end of May, every time FCW was triggered, a report was generated and sent to the mothership (for analysis, I gather). Then they stopped doing that, and now FCW no longer generates such a report. I suspect they quickly realized there's very little useful data to be extracted relative to the effort spent.
And that's FCW, a very clear-cut condition. Imagine how many more false positives they would send for somebody to analyze if the car reported every time it saw something it did not really recognize (e.g., yesterday I approached a truck being transported: it was facing me with its rear axle lifted, so while static it looked just like a truck driving the wrong way on the interstate).
And this is the bottleneck: everything would need to be vetted/analyzed by a human. Computers don't do too well with unknowns, particularly on the visual front.
 
So, basically, post-mortem analysis of accidents to see how to make the in-car software better?
Yes, and that's exactly how it was described last October during the AP 2.0 hardware conference call, whether for a real accident or a virtual one.

So it’s really a question of what the public thinks is appropriate, what your regulators think is appropriate, and gathering enough data, because the system will always be operating in shadow mode, so we can gather a large volume of statistical data to show the false positives and false negatives: when the computer would have acted and thereby prevented an accident, or when the computer would have acted and that would have produced an accident.

We think that operating in shadow mode, so we can see when it would have incorrectly acted or not acted and compare that to what should have been done in the real world, gets us to the point where there is a statistically significant result that shows material improvement in accident rates over manually driven cars.
-Elon, Oct 2016

for somebody to analyze
It doesn't need to be analyzed by a human the majority of the time. It's either a human-caused incident or an incident that would have been caused by the onboard software running in shadow mode.

everything would need to be vetted/analyzed by a human.
Even a video game can detect if there's been an incident.
 
There's a bit of a misunderstanding here: I am not talking about Google's cars.
There's this Waze app where you can report accidents and such, frequently at personal risk, since you're supposed to be paying attention while driving.
If the vision technology were so great (and cellphones nowadays are quite the GPU powerhouses), why isn't there a mode for Waze where you just drop the phone into a dashcam-like holder and it analyzes and submits the data to Google automatically?
Throw in some rewards (see Google Local Guides and such for examples) and people would do this in large numbers, quickly overshadowing whatever small number of somewhat smart cars people own.
Yet we see nothing of the sort.
That's not the purpose of Waze. Google purchased Waze to get traffic data, which they use in the Google Maps app to give you better routes and ETAs. I think they also use the information for marketing: they can determine where you go based on your location data and sell that information to marketers. But that's a completely different topic.
 
It doesn't need to be analyzed by a human the majority of the time. It's either a human-caused incident or an incident which would have been caused by the onboard software.
Well, it does need human analysis, I think.
A human would need to compare what's in the real world (via the camera and radar traces) with the internal Autopilot state (feeding the traces into the same version of the software to reconstruct it), note the discrepancies, file internal bug reports as needed, and add test cases; somebody else would then implement fixes and rerun the same test case against the new code to see what changes.
This is only for cases where Autopilot would also have gotten into a crash, i.e. the majority of the time, since in cases where Autopilot would see the accident coming, it would employ AEB and other countermeasures (like steering away) automatically.

Even a video game can detect if there's been an incident.
This is beside the point, since a video game controls EVERYTHING in its (local) world, and when there's a "glitch in the matrix" you just drive through the walls or whatnot. It does not work like that if your AP did not detect a wall.
 
That's not the purpose of Waze. Google purchased Waze to get traffic data, which they use in the Google Maps app to give you better routes and ETAs. I think they also use the information for marketing: they can determine where you go based on your location data and sell that information to marketers. But that's a completely different topic.
Google Maps already collects your routes and traffic data (i.e., how fast you are going where); no Waze needed.
Waze tells them other details, like "accident at X", "traffic cop at Y", ... Possibly more, but what you listed was already done by Google Maps long before the Waze purchase.
 
Well, it does need human analysis, I think.
A human would need to compare what's in the real world (via the camera and radar traces) with the internal Autopilot state (feeding the traces into the same version of the software to reconstruct it), note the discrepancies, file internal bug reports as needed, and add test cases; somebody else would then implement fixes and rerun the same test case against the new code to see what changes.
This is only for cases where Autopilot would also have gotten into a crash, i.e. the majority of the time, since in cases where Autopilot would see the accident coming, it would employ AEB and other countermeasures (like steering away) automatically.

No, a computer can tell instantly if the steering input from a human is different from the steering input the machine would have made, or the position of the accelerator, etc. A full reconstruction is not always needed. I have inputs and I have outputs. Are my outputs different from what's in the driver logs, yes or no? What was the outcome?
Keep in mind the logs contain brake pedal position, accelerator pedal position, steering angle from the driver, current speed, etc. These are merely numbers, which can easily be compared by a computer.
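Since the logged channels are plain numbers, the comparison described here could be as simple as the following sketch (field names and per-channel thresholds are invented for illustration):

```python
# Comparing a human's logged inputs against the shadow planner's
# proposal is a handful of numeric comparisons per frame.
# (All field names and thresholds below are made up.)

THRESHOLDS = {
    "steering_deg": 10.0,   # ignore normal in-lane drift
    "brake_pct": 20.0,
    "accel_pct": 20.0,
}

def inputs_differ(human_frame, computer_frame):
    """Return the channels where human and computer disagree by more
    than that channel's threshold."""
    return [
        channel
        for channel, limit in THRESHOLDS.items()
        if abs(human_frame[channel] - computer_frame[channel]) > limit
    ]

human = {"steering_deg": 3.0, "brake_pct": 0.0, "accel_pct": 40.0}
computer = {"steering_deg": 2.0, "brake_pct": 80.0, "accel_pct": 0.0}
print(inputs_differ(human, computer))  # → ['brake_pct', 'accel_pct']
```

In this frame the computer would have braked hard while the human stayed on the accelerator, exactly the kind of numeric disagreement a machine can flag without human review.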

This is beside the point, since a video game controls EVERYTHING in its (local) world, and when there's a "glitch in the matrix" you just drive through the walls or whatnot. It does not work like that if your AP did not detect a wall.

Try any software that simulates real-world physics... if you have a car traveling at 60 mph and there's an obstacle 20 yards in front of it, will it hit the obstacle or not? How does the driver's input compare with what the computer would have decided to do? You don't need to control everything else to judge the outcome of those actions.

Say a black cat comes along twice and there's a glitch in the matrix: if the machine detects a wall but the driver drives through it without damage to the vehicle, that's a great example of a false positive, which should be recorded and sent home.
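The 60 mph / 20 yards scenario above is settled by basic kinematics. A rough check, assuming ideal hard braking at about 0.9 g and zero reaction time (assumptions that already flatter the driver):

```python
# Rough kinematic check: a car at 60 mph with an obstacle 20 yards
# ahead. Assumes ~0.9 g of constant deceleration and no reaction delay.

MPH_TO_MS = 0.44704   # miles per hour -> metres per second
YD_TO_M = 0.9144      # yards -> metres
G = 9.81              # gravitational acceleration, m/s^2

def braking_distance_m(speed_mph, decel_g=0.9):
    """Distance needed to stop from speed_mph: v^2 / (2a)."""
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * decel_g * G)

def would_hit(speed_mph, obstacle_yd):
    """True if the stopping distance exceeds the gap to the obstacle."""
    return braking_distance_m(speed_mph) > obstacle_yd * YD_TO_M

print(round(braking_distance_m(60), 1))  # → 40.7 (metres needed to stop)
print(would_hit(60, 20))                 # → True (20 yd is only ~18.3 m)
```

So in that particular example the collision is physically unavoidable for car and human alike; the interesting comparisons are the cases where the outcome actually depends on who is steering.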
 
Google Maps already collects your routes and traffic data (i.e., how fast you are going where); no Waze needed.
Waze tells them other details, like "accident at X", "traffic cop at Y", ... Possibly more, but what you listed was already done by Google Maps long before the Waze purchase.
A lot of the information between the two apps is similar. The reason Google bought Waze is that they wanted the user base; people who use Waze are not using Google Maps. The information was important enough for them to pay for it.
 
No, a computer can tell instantly if the steering input from a human is different from the steering input the machine would have made, or the position of the accelerator, etc. A full reconstruction is not always needed. I have inputs and I have outputs. Are my outputs different from what's in the driver logs, yes or no? What was the outcome?
Unless there's some fudge factor, the computer's steering input is going to be different 99% of the time; people drift within a lane, speed tends to vary a bit, and so on.
Additionally, while control inputs are definite, visual/radar inputs are not.

Also, you're missing a step at the end: so the driver's inputs differ from my suggested outputs, and the real-world outcome was a crash; how do you know what would have happened with the suggested outputs? You can plug all the known factors into the model and run it, but since your vision code is not perfect, you still need to run it by a person to make sure nothing is missing, or the system might reinforce some wrong learning, which you want to avoid as well. While the number of events (accidents) is small, this is OK; as you start to scale out to other events, you quickly run out of human power.

Try any software that simulates real-world physics... if you have a car traveling at 60 mph and there's an obstacle 20 yards in front of it, will it hit the obstacle or not? How does the driver's input compare with what the computer would have decided to do? You don't need to control everything else to judge the outcome of those actions.
The difference is that the simulator knows the wall is a wall. AP assumes so but might be wrong, OR it might not see a wall when there is one.

The case of "there is a wall and the driver is on a head-on collision course" is NOT interesting, because in that case AEB should trigger and no accident should result (I know Tesla does not yet advertise its AEB like that).
Once you remove all the preventable accidents like that, the only accidents remaining are the ones where AP did not expect the accident, and then somebody needs to go through the data to see why, and how to fix it.
 
A lot of the information between the two apps is similar. The reason Google bought Waze is that they wanted the user base; people who use Waze are not using Google Maps. The information was important enough for them to pay for it.
Yes, the user base was important, of course, but Waze gives them more data as well. Since the purchase, you now get icons on Google Maps for "road work", "accident", ... that were reported through Waze. Somebody enters those manually in their Waze app, and it's pretty important, important enough that the data made it into other Google products.
Now imagine you just drop a phone into a holder and it scans the environment and reports everything to Google for analysis, giving finer granularity of the same data (and more safely); yet nobody does this. (And on a phone you get the picture plus the accelerometer, so you know the steering input too, along with position, heading, and speed.)
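As a rough illustration of how much a phone alone can recover, two consecutive GPS fixes already give speed and heading. The sketch below uses an equirectangular approximation, which is fine over one-second intervals; the coordinates are made up:

```python
# Deriving speed and heading from two GPS fixes, something any phone
# can do without extra hardware. (Equirectangular approximation.)
import math

EARTH_R = 6371000.0  # mean Earth radius, metres

def speed_and_heading(lat1, lon1, lat2, lon2, dt_s):
    """Approximate speed (m/s) and compass heading (degrees clockwise
    from north) between two GPS fixes taken dt_s seconds apart."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    north = dlat * EARTH_R   # metres moved northward
    east = dlon * EARTH_R    # metres moved eastward
    speed = math.hypot(north, east) / dt_s
    heading = math.degrees(math.atan2(east, north)) % 360
    return speed, heading

# Two fixes one second apart, moving roughly due north at highway speed
speed, heading = speed_and_heading(37.0, -122.0, 37.00024, -122.0, 1.0)
```

Here the fixes work out to roughly 27 m/s (about 60 mph) heading north; add the accelerometer and camera and the phone starts to look like a crude sensor pod.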
 
Unless there's some fudge factor, the computer's steering input is going to be different 99% of the time; people drift within a lane, speed tends to vary a bit, and so on.
Additionally, while control inputs are definite, visual/radar inputs are not.

Also, you're missing a step at the end: so the driver's inputs differ from my suggested outputs, and the real-world outcome was a crash; how do you know what would have happened with the suggested outputs? You can plug all the known factors into the model and run it, but since your vision code is not perfect, you still need to run it by a person to make sure nothing is missing, or the system might reinforce some wrong learning, which you want to avoid as well. While the number of events (accidents) is small, this is OK; as you start to scale out to other events, you quickly run out of human power.


The difference is that the simulator knows the wall is a wall. AP assumes so but might be wrong, OR it might not see a wall when there is one.

The case of "there is a wall and the driver is on a head-on collision course" is NOT interesting, because in that case AEB should trigger and no accident should result (I know Tesla does not yet advertise its AEB like that).
Once you remove all the preventable accidents like that, the only accidents remaining are the ones where AP did not expect the accident, and then somebody needs to go through the data to see why, and how to fix it.
The answer to all of those questions is a matter of statistics, as steering angles are logged in increments as small as tenths of a degree, and they are vastly different in the case of a turn or a fast crash-avoidance maneuver.
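One way to make "vastly different" precise is a robust statistical cut on the per-frame divergence, so ordinary in-lane drift is ignored while a crash-avoidance maneuver stands out. The sketch below uses a median-based cut rather than mean/sigma so that a single huge maneuver cannot skew its own baseline (the cutoff factor is illustrative):

```python
# Flag frames whose human-vs-computer steering divergence sits far
# outside the normal drift, using the median absolute deviation (MAD)
# as a robust spread estimate. (The factor k is illustrative.)
import statistics

def outlier_frames(divergences_deg, k=10.0):
    """Indices of frames whose divergence exceeds the median by more
    than k times the MAD of the whole series."""
    med = statistics.median(divergences_deg)
    mad = statistics.median(abs(d - med) for d in divergences_deg)
    cutoff = med + k * mad
    return [i for i, d in enumerate(divergences_deg) if d > cutoff]

# Mostly sub-degree in-lane drift, plus one 40-degree avoidance maneuver
frames = [0.2, 0.5, 0.1, 0.4, 0.3, 40.0, 0.2, 0.6]
print(outlier_frames(frames))  # → [5]
```

Only the maneuver at index 5 clears the cutoff; the routine drift below a degree never would, which is the point of thresholding statistically rather than on raw equality.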

You are sending back NN outputs, not necessarily the full radar/visual inputs, so these can't be compared by a human anyway unless software reconstructs the scene from the detected inputs. Does the purple box crash, yes or no? Do you need a human to tell?
[Animation: googlecrash.gif]

Does it matter if the purple box is a car, a truck, an 800-lb pedestrian, etc.?
It's a purple box and it hit you. There's no need for it to be more complicated.
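A minimal sketch of the purple-box point: crash detection needs only box geometry, not object identity. A 2-D axis-aligned overlap test is enough (all coordinates below are made up):

```python
# Crash detection on tracked bounding boxes: two rectangles either
# overlap or they don't, regardless of what the box "is".

def boxes_collide(a, b):
    """Each box is (x_min, y_min, x_max, y_max) in metres, in the ego
    vehicle's frame. True if the two rectangles overlap at all."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

ego = (-1.0, -2.5, 1.0, 2.5)          # roughly a car's footprint
purple_far = (5.0, 10.0, 7.0, 14.0)   # tracked box well ahead
purple_hit = (0.5, 2.0, 2.5, 6.0)     # box overlapping the front corner

print(boxes_collide(ego, purple_far))  # → False: no contact
print(boxes_collide(ego, purple_hit))  # → True: it hit you, whatever it is
```

Whether the purple box is a car, truck, or pedestrian only matters for prediction and avoidance; the hit/no-hit verdict itself is pure geometry.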
 
Yes, the user base was important, of course, but Waze gives them more data as well. Since the purchase, you now get icons on Google Maps for "road work", "accident", ... that were reported through Waze. Somebody enters those manually in their Waze app, and it's pretty important, important enough that the data made it into other Google products.
Now imagine you just drop a phone into a holder and it scans the environment and reports everything to Google for analysis, giving finer granularity of the same data (and more safely); yet nobody does this. (And on a phone you get the picture plus the accelerometer, so you know the steering input too, along with position, heading, and speed.)
It's definitely something that could be done. I think the reason Google hasn't implemented it yet is that they don't really need the information right now. Remember, they aren't a car company; in many ways they are a marketing company, and the vast majority of their revenue comes from ad sales.
 
It's definitely something that could be done. I think the reason Google hasn't implemented it yet is that they don't really need the information right now. Remember, they aren't a car company; in many ways they are a marketing company, and the vast majority of their revenue comes from ad sales.
Just think about how useful it would be to also know which cars are where on the road and correlate that with their owners.
Who parks in front of what store... Lots of very juicy information (that you might not get otherwise, because those people decided not to own an Android device or not to use Google services).
 
The answer to all of those questions is a matter of statistics, as steering angles are logged in increments as small as tenths of a degree, and they are vastly different in the case of a turn or a fast crash-avoidance maneuver.

You are sending back NN outputs, not necessarily the full radar/visual inputs, so these can't be compared by a human anyway unless software reconstructs the scene from the detected inputs. Does the purple box crash, yes or no? Do you need a human to tell?

Does it matter if the purple box is a car, a truck, an 800-lb pedestrian, etc.?
It's a purple box and it hit you. There's no need for it to be more complicated.
You need the actual visual/radar inputs for the case when there is NO purple box, though.

Steering angles are also not as cut-and-dried. What if the car is on a curve, or the driver decided to overtake somebody or change lanes, ...?

Basically, everything the car already knows, it already knows; it's the unknowns, which the car or other automated systems could not infer themselves, where human help is needed. And that is where the bottleneck is. I'm sure they didn't stop FCW reporting because it was proving greatly useful.