EAP and FSD pricing give hints to likely release readiness

AP1 was, too. But that was half the price.

Pretty sure companies are in the business of charging what the market will bear. But I could be wrong.

Edit: Yep, I did some research. It does appear that companies will indeed charge what a product or service is worth until there is some competition to help bring down the price. Got to love that Wikipedia.
 
  • What do you do when you hear an ambulance siren?
    • Look around for the ambulance. If I had cameras looking in every direction at all times to identify an ambulance, I wouldn't need to hear it; besides, I can't hear it if my music is turned up.
  • See a clearly distracted 6-year-old playing on the sidewalk.
    • A computer can identify children and also whether they are moving toward the road. A computer can also react much faster than I could, if needed (a rough stopping-distance comparison follows below).
  • See the lumber truck up ahead dropping bouncing stones onto the highway.
    • Not relevant for lvl 5 driving.
  • Get stuck behind a snowplow that's kicking up all sorts of crap, making it nearly impossible to see until you can get up beside it.
    • Radar sees through snow, but if the path is not visible the car would not and should not try to pass. It should simply back off and wait. Computers aren't impatient. :)
Basically, there are simple solutions to most of life's problems. Humans tend to make things more complicated.
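For what it's worth, here's a rough stopping-distance comparison at neighborhood speeds, using my own illustrative numbers (a ~1.5 s human reaction time versus a hypothetical ~0.25 s machine latency, same braking deceleration for both):

```python
# Back-of-the-envelope stopping-distance comparison (illustrative numbers only).
# Assumes the same braking deceleration for human and machine; only the
# reaction time differs. Not a claim about any specific vehicle or system.

def stopping_distance_m(speed_mph: float, reaction_s: float, decel_mps2: float = 7.0) -> float:
    """Reaction distance plus braking distance, in meters."""
    v = speed_mph * 0.44704             # mph -> m/s
    reaction_distance = v * reaction_s  # distance covered before braking even starts
    braking_distance = v * v / (2 * decel_mps2)
    return reaction_distance + braking_distance

for speed in (25, 40):
    human = stopping_distance_m(speed, reaction_s=1.5)     # typical human reaction time
    machine = stopping_distance_m(speed, reaction_s=0.25)  # hypothetical machine latency
    print(f"{speed} mph: human ~{human:.1f} m, machine ~{machine:.1f} m")
```

Most of the gap comes from the distance covered before the brakes are even applied.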

Oh so simple for the person who doesn't have to write the code. Nothing is ever as simple as it seems. And what's simple for you is decidedly not always simple for a computer. All of these situations demand a little contextual understanding and forward planning.

Ambulance: If you hear the ambulance, you try to identify the direction and, if appropriate, squeeze aside with all of the other traffic. You may not even see the ambulance until it's near you. Even if you don't hear it, you will see the behavior of the other traffic and quickly realize what is happening.

6 Year Old: We've all been in this situation. A kid may not be heading for the road, but the context tells you that the kid is not being safe and could jump into the road. So you slow down enough that you can react if the kid does something dumb. You might lightly honk.

Lumber Truck: It's relevant. You need to slow down, switch lanes, or quickly pass to avoid getting a half-pound stone in your windshield.

Snowplow: If you weren't willing to make the run through the snow spray, you might be stuck behind that plow for a LONG time. You have to know that it will only last a few seconds and then you're into clear air.

I can come up with endless scenarios where contextual awareness is key. You can fire as much data as you want at the machine; it doesn't have the capacity to learn how to deal with all of these situations. I believe you can get to the point where 99.5% of the driving can be fully automated, and maybe more. But taking the driver out of the equation entirely is a different beast.
 
There seems to be some confusion on whether you need to write code to detect an ambulance... you don't. A camera will pick it up, and the moment the frame passes through the neural networks you've already recognized the ambulance, which camera picked it up, and where it is.
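To make that concrete, here's a minimal sketch of the kind of off-the-shelf detection I mean, using PyTorch/torchvision (my choice for illustration; nobody outside Tesla knows their actual stack). The standard COCO-trained models have no "ambulance" class, so a real system would be fine-tuned on emergency-vehicle data; "truck" and "person" stand in here.

```python
# Minimal sketch (my own illustration, not Tesla's pipeline): run one camera frame
# through a pretrained object detector and report what was found and where.
# COCO has no "ambulance" class, so "truck"/"person" stand in for this example.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

COCO_IDS_OF_INTEREST = {1: "person", 3: "car", 6: "bus", 8: "truck"}  # subset of COCO labels

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(frame_path: str, score_threshold: float = 0.6):
    """Return (label, score, box) tuples for detections above the threshold."""
    img = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]  # dict with 'boxes', 'labels', 'scores'
    hits = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        name = COCO_IDS_OF_INTEREST.get(int(label))
        if name and score >= score_threshold:
            hits.append((name, float(score), [round(v, 1) for v in box.tolist()]))
    return hits

# e.g. detect("front_camera_frame.jpg") -> [("truck", 0.91, [x1, y1, x2, y2]), ...]
```

Which camera saw it and roughly where it is falls out of the bounding box and the camera's known mounting position; the hard part, as others point out, is deciding what to do about it.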

In the 6-year-old scenario the car will brake. It will probably be going 20-25 mph in a neighborhood anyway, unlike many humans.

Neither an automated car nor a human can avoid all road hazards. It just has to be better than a human.

As for your snowplow scenario... how did you know it was clear? You could have just rear-ended a vehicle. You're taking an unnecessary risk if you can't see at all. Chances are that there was some cue, or you could partially see through it. As I mentioned, radar sees through it better than even a human can. Passing a snowplow is widely regarded as dangerous. But hey, if there's no driver in the car, why does it need to pass in the first place?

When you bring up these scenarios you are coming up with things you think a human can do better. What's missing is that, since the vast majority of traffic fatalities are due to human error, what you should be asking yourself is how a machine can reduce the incidence of traffic fatalities, whether or not it ever reaches absolutely complete lvl 5 FSD.
 
Why was it that AP1, which was actually available, was only worth half as much as vaporware? I guess they were being benevolent and foregoing profits.

I guess it was just a coincidence that they decided the market value of AP doubled at the same time the new hardware came out, and someone had recently been killed while using Autopilot.
 
EAP promised more than AP1, specifically freeway transitioning and exiting.
 
Well, you see, Tesla can't see the future and has no way to know what the market will bear. My guess is that they sat down and said something like, "Hey, we have 25,000 orders and can only build 10,000. What if we raise the price and earn way more money to fund our mission?" You learn what the market will bear by testing different pricing. When demand outstrips supply by a 10:1 or greater ratio, as it does with the Model 3, you can charge whatever you want for AP2, FSD, and anything else. Tesla is always changing pricing, what is offered, and what is included. Some of this is to simplify production. EAP is supposed to be better than AP1 as well, so they must charge more.
 
The same arguments apply to AP1. I don't find EAP's supposed additional features particularly compelling. From what I've read, most people are interested in the highway driving features. And as I said before, FSD is much more elaborate and compelling, yet Tesla doesn't think it commands much of a premium over EAP.

I agree that they found that their hardware costs had increased dramatically, so they increased the price to see if the market would bear it. It seems that it did. But I say it was the increased hardware costs that precipitated the price increase.

I guess I should point out that I dispute the OP. I'm saying that just because FSD is only $3,000 over EAP, that doesn't mean it's an orphaned child. I'm saying that they may actually be charging twice as much as the EAP software. Those of you who are arguing otherwise, do you think FSD is dead?
 
You're incorrect and/or trivializing on all counts.

Ambulance: 9 times out of 10, I hear the ambulance long before I see it. And then I'll catch the top corner of it 500 meters ahead or behind over the rest of the traffic. I've done some direct development work with neural networks, and they won't catch stuff like that easily.

Further, when you do catch it, you might be sitting 7 cars back at a multilane intersection and have to do a bit of a dance with the other lanes to open up a path for the ambulance. It can be a bit of a mess even with people at the wheel. Determining the path via computer won't be simple.

Snowplow: You answered the question yourself. Based on the weather, the behavior of the other cars, and road conditions (aka... the context of the situation) you make an intelligent and safe judgement call. Sitting behind a plow for half an hour in the spray and slush at 40 km/h is neither safe nor good for the car. And the idea with L5 is not that the car is empty. It could be fully occupied, but there is no steering wheel.

Yes, humans are absolutely fallible. I'm not arguing against progress toward L5, nor am I arguing that L2, L3, and L4 don't have safety benefits when they are usable. I'm pretty sure that they do. I'm arguing that L5 is going to take a lot longer than the hype would have us believe.

And finally, I expect that regulatory bodies will allow fully automated cars to operate on the roads only when they employ the same level of redundancy that is employed industrially wherever there is risk to human life. Simplified, that means that any one information channel or control channel can completely fail without risk to life. There will be questions as to how that's interpreted. I expect it means that you will need two different technologies that ensure the car doesn't do something stupid.

Take the example of a red light. It won't be enough for the car to see the light on a camera. There will need to be another means (perhaps short-range radio, or a redundant sat-link) that tells the car that an upcoming light is red. For object avoidance we could use a mix of car-to-car communications and lidar. The current Tesla radar is not precise enough to catch small/soft objects.
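To illustrate the kind of redundancy I mean with a toy example (my own sketch, not any regulator's actual rule or any carmaker's architecture): treat the camera and a second independent channel, say that short-range radio link, as separate votes and fail safe on any disagreement. If each channel independently misses a red light, say, once in 10^4 encounters, demanding agreement before proceeding pushes the combined miss rate toward once in 10^8, assuming the failure modes really are independent.

```python
# Illustrative 2-channel cross-check (not an actual automotive architecture).
# The car proceeds through an intersection only if BOTH independent sources
# agree the light is green; any disagreement or missing data fails safe (stop).
from enum import Enum

class Light(Enum):
    GREEN = "green"
    RED = "red"
    UNKNOWN = "unknown"

def may_proceed(camera_reading: Light, radio_reading: Light) -> bool:
    """Fail-safe vote: proceed only if both channels agree the light is green."""
    return camera_reading == Light.GREEN and radio_reading == Light.GREEN

# Any single-channel failure (wrong or missing reading) results in a stop.
print(may_proceed(Light.GREEN, Light.GREEN))    # True  -> proceed
print(may_proceed(Light.GREEN, Light.UNKNOWN))  # False -> stop
print(may_proceed(Light.RED, Light.GREEN))      # False -> stop
```

The trade-off is more false stops, which is exactly where the interpretation questions will come in.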
 
The ambulance example reminded me of this article:
Waymo’s self-driving minivans get some emergency vehicle training time

You make a great point that human error is a major factor in many traffic crashes (fatal or not). That's true of other applications like aviation. The tricky question is how many near-miss incidents were avoided by human judgment, or how many times human judgment kept a situation from ever becoming a near miss. Of course, there are probably also a good number of situations that got to that point because of some bad decision-making by the human operator.

The whole FSD situation is pretty fascinating. I see us as being fortunate enough to bear witness to a major paradigm shift in technology, and ultimately in culture and economics too, since personal transportation is inextricably connected to both of those. It's going to be a little messy for a while, but the problems will ultimately be solved.

Also, as a disclaimer to hopefully avoid igniting massive tensions: I haven't a clue whether it will be Tesla that figures it out or whether AP2 will be able to do it. I'm taking no position on that discussion.
 
I don't believe AP1 ever promised to take the correct freeway if it splits off in two directions. AP2 isn't there yet, but it's supposed to do that.

Actually, that was promised to us AP1 cars in 8.1 (the real 8.1 hasn't arrived yet), provided the car is running under navigation and is in the lane that splits.

AFAIK, the main thing EAP promised that we don't have is autonomous lane changing (which would permit freeway following in a lot of other situations that AP1 can't handle).
 
That makes sense. The rear-facing camera would allow the car to detect fast-approaching cars from the rear, which AP1 cars can't do.
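Roughly speaking, what the rear coverage buys you is a closing-speed and time-to-collision estimate for traffic coming up from behind. A toy sketch of that arithmetic (my own illustration with made-up thresholds, not how Tesla actually gates lane changes):

```python
# Illustrative time-to-collision (TTC) check for an autonomous lane change.
# Not Tesla's implementation -- just the basic arithmetic that rear-facing
# sensing makes possible and a forward-only system cannot do.

def time_to_collision_s(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until the trailing car closes the gap; inf if it isn't closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def lane_change_ok(gap_m: float, closing_speed_mps: float, min_ttc_s: float = 4.0) -> bool:
    """Allow the lane change only if the trailing car is at least min_ttc_s away."""
    return time_to_collision_s(gap_m, closing_speed_mps) >= min_ttc_s

# A car 30 m back closing at 10 m/s (~22 mph faster) reaches us in 3 s: too tight.
print(lane_change_ok(gap_m=30.0, closing_speed_mps=10.0))  # False
print(lane_change_ok(gap_m=60.0, closing_speed_mps=10.0))  # True (6 s)
```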
 
For the past 10 months you have been spreading the idea that lvl 5 is so simple, yet we don't even have any EAP features here.

A person who knows nothing always tries to explain everything as being simple.

This is an example of being clueless.
I'm not explaining how lvl 5 is simple so much as pointing out that many of the potential problems people cite are already considered solved, such as a convolutional neural network being able to identify an ambulance or a child in an image.
 
Solved by who exactly?

Here is Google's Waymo doing a full-scale first-responder test, for example:

Waymo's self-driving vans learn how to drive near police cars

But according to your comments quoted below, these things have already been solved, and solved particularly by Tesla. You do realize that Tesla is so far behind that they are still working on the first pillar of self-driving, which is sensing, and within that only the first layer, which is object recognition and 3D bounding boxes.

This is something Mobileye has been doing for years. So when you call stuff simple and done, yeah, maybe for others, but Tesla is still mightily struggling with it.

What you call simple is actually extremely difficult to get to a safety-critical accuracy of over 99.99%.
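To put a rough number on that (purely illustrative assumptions on my part: one safety-relevant classification per camera frame at 30 fps, errors treated as independent):

```python
# Purely illustrative: why "99.99% accurate" per frame is not the same thing as
# safety-critical reliability. Assumes one classification per frame at 30 fps
# and independent errors -- both simplifications, chosen only to show the scale.
fps = 30
seconds_per_hour = 3600
frames_per_hour = fps * seconds_per_hour  # 108,000 frames

for accuracy in (0.995, 0.9999, 0.999999):
    expected_errors = frames_per_hour * (1.0 - accuracy)
    print(f"accuracy {accuracy:.6f}: ~{expected_errors:,.1f} misclassified frames per hour")
```

Real systems track objects across frames and filter single-frame mistakes, so this isn't a one-for-one error rate in behavior; the point is only that a figure that sounds near-perfect still leaves an enormous amount of headroom to engineer away.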

Basically, there are simple solutions to most of life's problems. Humans tend to make things more complicated.

You might be surprised at how little thinking you actually have to do.
 