Poll: Now that they are increasing FSD from $3k to $5k, will you be purchasing it?

Will you be purchasing FSD during configuration now that the price will be increasing?


Total voters: 212
Mostly agree, though not so sure about camera vs. eye resolution. Maybe for people who need glasses.
Two things to keep in mind. First, a human can only focus on one area at a time, while the computer can analyze the full 360-degree view from every camera extremely quickly (see the sketch below).
Second, computer vision has proven better than humans at identification in certain circumstances.
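To make the first point concrete, here is a minimal sketch of the idea; the camera names and the detector are hypothetical stand-ins, not Tesla's actual pipeline:

```python
# Illustrative only: camera names and the detector are hypothetical
# placeholders, not Tesla's real software.

CAMERAS = ["front_main", "front_narrow", "front_wide",
           "left_repeater", "right_repeater",
           "left_pillar", "right_pillar", "rear"]

def detect_objects(frame):
    """Placeholder for a per-camera neural-net detector."""
    return []  # e.g. a list of (object_class, bearing, distance) tuples

def analyze_360(frames):
    """Run detection on every camera each cycle and merge the results.

    A human driver attends to one region at a time; the computer
    processes all viewpoints on every pass through this loop.
    """
    detections = []
    for name in CAMERAS:
        detections.extend(detect_objects(frames[name]))
    return detections
```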
 
You're supposing regulators already have regulations.
The latest set of federal FSD rules were GUIDELINES, reportedly issued as guidelines rather than binding regulations on purpose, precisely so as not to hinder progress.
It is possible we won't see much obstruction from the government unless people start dying in droves in FSD accidents and force its hand. So if manufacturers follow the guidelines and test responsibly, there may be no additional delay due to regulation.

trained for decades, and that is many decades ahead of anything that we'll be able to build.
This is a wrong assumption. An FSD model can be trained on data collected by thousands of cars on the road at once, so it will accumulate driving experience far faster than any individual human (rough numbers below).
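As a back-of-envelope illustration (the fleet size and mileage figures here are assumptions, not Tesla's numbers):

```python
# Assumed figures for illustration only.
fleet_cars = 100_000        # data-collecting cars on the road
miles_per_car_per_day = 40  # average daily driving per car

fleet_miles_per_day = fleet_cars * miles_per_car_per_day  # 4,000,000

# One human driving 30 miles/day for 40 years:
human_lifetime_miles = 30 * 365 * 40  # 438,000

# Hours for the fleet to log one human driving lifetime:
print(24 * human_lifetime_miles / fleet_miles_per_day)  # ~2.6 hours
```

Under those assumptions, the fleet logs an entire human driving lifetime of miles every few hours.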

The current sensor suite is vulnerable to something as simple as a splotch of mud or ice over the forward cameras.
Have you ever had your whole windshield splashed and covered with mud? I have. There is no redundancy option for a human either.
You brake hard, because you can no longer see the road. You try the windshield wipers, but the washer jets are underpowered for that much mud and the wipers just smear it around. It takes a few seconds before you can see the road again. Meanwhile you're reaching for the hazard-lights button while figuring out where to steer, maybe keeping more to the right to avoid oncoming traffic. You are not safe and sound during those seconds, so I would not expect more from FSD than from a human: stop as quickly and as safely as possible.
 
I hope the coding isn't that sloppy, but from the outside it sure looks that way.
I think people don't realize it's not all hand-written code anymore. There's a model that makes its own decisions, and those decisions are only as good as the training it received and how carefully it was validated to confirm it recognized and interpreted the training data correctly. I forget who, but somebody from Tesla recently talked about the job of training Teslas and mentioned that 3/4 of that work is not coding but collecting training data.
 
Have you ever had your whole windshield splashed and covered with mud? I have. There is no redundancy option for a human either.

You couldn't move your head for a better angle? Including to the point of looking out the side window? You couldn't get out and clear the mud? Oh, and...

You try the windshield wipers...

...which the current Tesla camera sensors lack, AFAIK. So even short of a heavy mud splash (which is dangerous for human drivers too), you're already into that territory as currently built.

Now, the radar should help a good deal with this, but the argument that human "sensors" don't have a huge amount of adaptability is simply off the mark.

P.S. Adaptation to very low-light situations is one place where computer vision does have an inherent advantage over human eyesight, though. Of course, then you get to the real meat of the issue: figuring out what to do with all that sensor input...
 
I think people don't realize it's not all hand-written code anymore. There's a model that makes its own decisions, and those decisions are only as good as the training it received and how carefully it was validated to confirm it recognized and interpreted the training data correctly. I forget who, but somebody from Tesla recently talked about the job of training Teslas and mentioned that 3/4 of that work is not coding but collecting training data.

I hope you are wrong in this assessment. Tesla has as many miles of data as anyone, thanks to its huge fleet of vehicles. If that volume of data is producing poor results, then the NN is among the dumber ones. Personally, I don't think they are using much of a NN yet, if any. For example, the vehicles displayed in any lane, including the car's own lane, tend to appear and disappear randomly, and there is no indication that the system identifies the type of vehicle, be it semi or motorcycle. The behavior suggests raw radar: objects are detected and shown, but the system knows nothing about how those objects behave, and it appears to get confused by false returns and reflections. If it knew that an object was a vehicle of type X, and that vehicles of type X behave in certain known ways, then vehicles stopped at a traffic light wouldn't jump into other lanes or wiggle around in their lane. If it fused vision with the radar data, it would see that the vehicle had not moved, override the radar return, and keep the displayed vehicle stationary (a rough sketch of the idea follows).
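A minimal sketch of the kind of vision/radar fusion being described, assuming per-frame radar returns and a vision system that reports class and motion state; all names, fields, and thresholds are hypothetical, and a production tracker would be far more sophisticated:

```python
# Hypothetical sketch: vision-gated radar tracking.
from dataclasses import dataclass

@dataclass
class Track:
    x: float            # last accepted longitudinal position (m)
    y: float            # last accepted lateral position (m)
    vehicle_class: str  # e.g. "car", "semi", "motorcycle" (from vision)
    stationary: bool    # vision's motion estimate

JITTER_GATE_M = 0.5     # assumed threshold for an implausible jump

def update_track(track: Track, radar_x: float, radar_y: float) -> Track:
    """Fuse a new radar return into an existing track.

    If vision says the object is stationary, reject radar position
    changes larger than the gate; they are likely reflections or
    false returns, so the displayed vehicle stays put.
    """
    dx, dy = radar_x - track.x, radar_y - track.y
    jump = (dx * dx + dy * dy) ** 0.5
    if track.stationary and jump > JITTER_GATE_M:
        return track  # keep the stationary vehicle where it is
    # Otherwise accept the measurement (a real tracker would blend it
    # with a motion model, e.g. a Kalman filter).
    return Track(radar_x, radar_y, track.vehicle_class, track.stationary)
```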
 
Do you think they are trying to sell as many $3k FSD upgrades as possible before they release the first FSD software this August, or does Tesla think that after the release they will have something so many of us want that they can up the ante to $5k and sell it like hotcakes?
The psychology works both ways. Come August (read: December), I'd just be impressed if it handles stop signs and stop lights.
 
Do you think they are trying to sell as many $3k FSD upgrades as possible before they release the first FSD software this August, or does Tesla think that after the release they will have something so many of us want that they can up the ante to $5k and sell it like hotcakes?

Hard to say, but the price increase could convey increased confidence by Tesla in what they will be releasing soon... or it could reflect decreased confidence by customers, which has resulted in falling FSD sales. I'm hoping it's the former.
 
Beyond the resolution debate, what about colorspace? How many bits do the cameras capture versus the human eye? I, for one, am not impressed with the colorspace of the rear-view camera...

Or what about the roughly 20 equivalent stops of dynamic range that the human eye offers?
 
Hard to say, but the price increase could convey increased confidence by Tesla in what they will be releasing soon... or it could reflect decreased confidence by customers, which has resulted in falling FSD sales. I'm hoping it's the former.

I think the only reason this is happening is that they are not seeing enough people purchase it up front. So, to 'scare' the people on the fence with a larger price gap, they're hoping to push them into coughing up the money now and boost profits now.
 
You couldn't move your head for a better angle?
Not when the whole windshield is splashed. The first thing you do is panic, when it's unexpected and you're driving at a high enough speed. Waiting for the side window to open so you can stick your head out would take the same several seconds it takes to clear the windshield, and you don't really get a lot of bright ideas at a time like that, because you're busy trying to stop.
 
Beyond the resolution debate, what about colorspace? How many bits do the cameras capture versus the human eye? I, for one, am not impressed with the colorspace of the rear-view camera...
You don't need a huge colorspace, but you do need dynamic range. The dynamic range of the human eye is something like 20 stops.

Cameras for autonomous vehicles have a higher dynamic range than the sensors in normal consumer cameras. As long as a camera can see into the shadows and the bright spots during the day, it should be fine. At night, your headlights plus ambient light are usually more than enough.
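For a sense of scale (taking the 20-stop figure above at face value), each stop is a doubling of the light range, so stops convert directly to a contrast ratio and to the decibel figure sensor datasheets quote:

```python
import math

def stops_to_ratio(stops: float) -> float:
    """Each stop doubles the light range: N stops = 2**N : 1 contrast."""
    return 2.0 ** stops

def stops_to_db(stops: float) -> float:
    """Dynamic range in decibels, as sensor datasheets quote it."""
    return 20.0 * math.log10(stops_to_ratio(stops))

print(stops_to_ratio(20))  # ~1,048,576:1 contrast for a ~20-stop eye
print(stops_to_db(20))     # ~120 dB
```

So a ~20-stop human eye works out to roughly 120 dB, the same ballpark that automotive HDR image sensors advertise.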
 
Not when the whole windshield is splashed. The first thing you do is panic, when it's unexpected and you're driving at a high enough speed. Waiting for the side window to open so you can stick your head out would take the same several seconds it takes to clear the windshield, and you don't really get a lot of bright ideas at a time like that, because you're busy trying to stop.
Your increasingly narrow hypothetical (and keep in mind I grew up in the dirt and mud, and drove off-road in the mud for a living) ain't convincing me here. Carving the scenario improbably thin to ask "well, it could go this far, then whatcha going to do?" ignores the much fatter chunk of the probability tail before you ever get there.

That's Bad Math.
 
I think most would agree that once FSD is a reality (whenever that is), it will be worth a lot more than while it is only a promise. The original $1K delta played on this, and most buyers, by far, didn't choose it. Now that the delta is $2K, it is likely a tell that the first actual FSD feature is coming (I suspect stop-light and stop-sign recognition). It means FSD is closer by at least ONE feature. I have no doubt the cost of adding FSD after purchase is on a sliding scale: initially it was $1K, now it's $2K, sometime in the future it will be $3K... I can easily see it costing $5K once there are enough features. So, if the price delta is important to you, and you want FSD and think it will arrive in a useful and valuable form in the near future, then buy it now. I'm personally taking a more wait-and-see approach.
 
I hope you are wrong in this assessment. Tesla has as many miles of data as anyone, thanks to its huge fleet of vehicles. If that volume of data is producing poor results, then the NN is among the dumber ones. Personally, I don't think they are using much of a NN yet, if any. For example, the vehicles displayed in any lane, including the car's own lane, tend to appear and disappear randomly, and there is no indication that the system identifies the type of vehicle, be it semi or motorcycle. The behavior suggests raw radar: objects are detected and shown, but the system knows nothing about how those objects behave, and it appears to get confused by false returns and reflections. If it knew that an object was a vehicle of type X, and that vehicles of type X behave in certain known ways, then vehicles stopped at a traffic light wouldn't jump into other lanes or wiggle around in their lane. If it fused vision with the radar data, it would see that the vehicle had not moved, override the radar return, and keep the displayed vehicle stationary.
I believe that what exists today may be crutch code with some minimal NN use, while the meat of the NN work is still in testing and won't be released until features are fully validated. And of course, they can't focus on everything at the same time, so they will prioritize the work and deliver it in chunks.