
There will be NO HW4 upgrade for HW3 owners

Right, reading the article, Elon's lawyers basically argued successfully that he was only "considering" taking it private and that he did at least have a verbal agreement for the funding, so I agree it doesn't set any precedent for future cases about his tweets.

That said, I highly doubt that whatever Elon says on social media (or interviews or even investor calls) will be seen as a contract to deliver a certain feature in customer cars (can't really think of an example where this is the case). At most I think it will be an argument over false advertising, which has different remedies than things owed under contract.
Actually, thinking about this, neither Tesla nor Elon would be liable, as they will blame regulatory approvals for not having redundancy for FSD.
 
Actually, thinking about this, neither Tesla nor Elon would be liable, as they will blame regulatory approvals for not having redundancy for FSD.
This forum really needs a FAQ. There is no regulatory approval required in many states. Presumably for this defense to work they would have to at least apply for regulatory approval in states that do require it.
 
Thanks for educating us about lidar and radar. Lidar would be a perfect example for making a left turn safely. Tesla is developing its own algorithm to detect the velocity of vehicles approaching from the side in order to make a safe left turn, as an example. But I miss the days when the car would stop, only to find out that the two cars ahead were slowing down.
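Purely as an illustration of why that cross-traffic velocity estimate matters (a toy sketch, not Tesla's algorithm; every name and number below is made up), here is the basic gap-acceptance arithmetic behind an unprotected left: the planner needs each crossing vehicle's time-to-arrival, not just its distance, to exceed the time it takes to clear the intersection.

```python
# Illustrative only -- not Tesla's code. Shows why velocity, not just distance,
# decides whether a left turn across traffic is safe.
from dataclasses import dataclass

@dataclass
class CrossingVehicle:
    distance_m: float   # distance to the conflict point (metres)
    speed_mps: float    # estimated speed toward the conflict point (m/s)

def time_to_conflict(v: CrossingVehicle) -> float:
    """Seconds until the crossing vehicle reaches the conflict point."""
    if v.speed_mps <= 0:
        return float("inf")   # stationary or moving away
    return v.distance_m / v.speed_mps

def left_turn_is_safe(vehicles: list[CrossingVehicle],
                      turn_time_s: float = 4.0,   # placeholder estimate
                      margin_s: float = 2.0) -> bool:
    """Accept the gap only if every crossing vehicle arrives well after we
    would have cleared the intersection."""
    return all(time_to_conflict(v) > turn_time_s + margin_s for v in vehicles)

# A car 80 m away at 20 m/s arrives in 4 s: too tight, so wait.
print(left_turn_is_safe([CrossingVehicle(80.0, 20.0)]))   # False
print(left_turn_is_safe([CrossingVehicle(150.0, 20.0)]))  # True
```

The same check with only distance (and no velocity) can't distinguish a parked car from one closing at highway speed, which is presumably why estimating cross-traffic velocity is the hard and interesting part.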
 

Teslaraj (a veteran) bought a new Tesla, and they have removed all the ultrasonic sensors. They will be removing USS from the S and X too, but the software is not even available yet. Imagine a naive individual buying a Tesla (a rather expensive car) who doesn't know about this. Service centers and call centers will be flooded once again. How can you ship a car without the software necessary for this? Another example of Tesla cutting corners just to beat the China market.
 
They have already lost orders because of this. A friend just cancelled, but Tesla just kept the deposit and moved on rather than commit to making their new system work in a timely fashion. Bait and switch at the Musk level. If you can get away with implying the car will perform a certain way (say robotaxi, or a coast-to-coast autonomous drive), and then selling cars that will clearly (at least for now) never have that capability, then just lather, rinse, and repeat with radar, USS, and about anything else that you might expect to work, and continue to be the richest guy around at your customers' expense.
 
Another example of Tesla cutting corners just to beat the China market.
Just to beat the China market? Hmmm...

 
Another example of Tesla cutting corners just to beat the China market.
Seems to me it's driven by parts shortages and Elon's insistence on vision-only. I mean, they spent a lot of time trying to figure out whether it's raining using a NN rather than a cheap rain sensor like other manufacturers use.

So basically the S/W got delayed, but the hardware change was already in place. You can't make last-minute changes to that....
 
Seems to me it's driven by parts shortages and Elon's insistence on vision-only. I mean, they spent a lot of time trying to figure out whether it's raining using a NN rather than a cheap rain sensor like other manufacturers use.

So basically the S/W got delayed, but the hardware change was already in place. You can't make last-minute changes to that....
I'd say it's half parts shortages and half Occam's Razor.
 
I’m watching the April 2019 Tesla Autonomy Day, and at 3:01:04, Elon says:
The whole system, from a hardware standpoint, has been designed to… for… to be a RoboTaxi since basically October 2016… so when we rolled out Hardware… AutoPilot version two.
So we should be good! RoboTaxi would require at least L4 I imagine, so we already have the hardware for L4! (mostly sarcastic)

We do not expect to upgrade cars made before that. We think it would actually cost more to make a new car than to upgrade the cars. Just to give you a sense of how hard it is to do this. Unless it’s designed in, it’s not worth it.
This is right after the previous quote. I listened to this part 10+ times trying to figure out if I missed a word or something to make this make sense. I’m guessing he misspoke and meant it would cost more to upgrade a car made before Oct 2016 than to make a new one?
 
I’m watching the April 2019 Tesla Autonomy Day, and at 3:01:04, Elon says: … I’m guessing he misspoke and meant it would cost more to upgrade a car made before Oct 2016 than to make a new one?
That reality-distorted presentation might be the gift that keeps giving. They apparently took a wild guess on design requirements, started building something, and he decided everything would be perfect. It's hard to imagine a product architect being so far off the mark.
 
Let's look at this a different way. Let's break the entire FSD system into three parts: 1) sensors (cameras, including number and placement), 2) processing power (the Full Self-Driving computer), and 3) software.

We all know the current state of FSD Beta. It's a long way from self-driving robotaxis. Let's exclude that from the ultimate goal and set the bar lower: performance on city streets matching performance on freeways (i.e. most of us find AP/FSD works pretty well on freeways). What does this group think is the cause? Is it category 1, because the cameras are too low in resolution or put in the wrong place? Is it because we don't have other sensors like LiDAR and radar? Is it because category 2 is maxed out and can't process fast enough? Is it because category 3 is not functional/optimized/trained/buggy/etc.? Is it a mix of all three?

If the problem is in 1 or 2, I think Tesla will have to offer replacements like they have in the past. By stating they don't plan upgrades to HW4, Tesla seems to be saying they are pretty confident in these categories. I'd bet a large sum of money that some engineer at Tesla has already asked whether the car could see better with cameras in the bumpers. Someone else has asked how much camera resolution is needed to see far enough away for the car to calculate what it needs to do. Of course, the answers to those questions could be wrong, so I'd also bet that after the drama of discovering HW2 was not up to snuff and having to offer upgrades, they have checked and rechecked their resource calculations and have a backup plan in their pocket just in case. If the main issue is #3, then the only solution is to keep waiting for them to iterate the software.

Personally, I don't think there is much of an issue with #1. My car seems to see the environment just fine. Even in cases of tricky UPL turns, it can still see as well as I can. #2 is a little more questionable, but I don't think anyone except Tesla can say whether we are maxed out on compute. I've heard they run multiple stacks and compare output (see the rough sketch at the end of this post), so if they had the software finished, tuned, and optimized for the specific AP3 compute hardware, there is no reason I can see to assume it can't run it. Also, with enough power for #2, they might not need to see as far for #1 because they can compute the correct action so much faster.

I think the biggest issue is simply #3. The software still needs work predicting paths and planning the best route through the endless number of corner cases we all continue to experience. My experience version to version has been small improvements, but compared to my first drives on 10.2, I have seen massive positive change. I don't have any reason to think the improvement will stop. The fact that it is not there yet is not, by itself, proof that the current HW is not capable.
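To make the "run multiple stacks and compare output" point above concrete, here is a rough, purely hypothetical sketch (none of these names or thresholds come from Tesla's software) of the simplest form of such a cross-check: act on the primary stack's plan and flag cases where a shadow stack disagrees beyond some tolerance, so the disagreement can be reviewed offline.

```python
# Hypothetical sketch of a two-stack cross-check; all names are invented.
from typing import Callable, List, Tuple

Trajectory = List[Tuple[float, float]]   # planned (x, y) waypoints in metres

def max_deviation(a: Trajectory, b: Trajectory) -> float:
    """Largest pointwise distance between two equally sampled trajectories."""
    return max(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(a, b))

def cross_check(primary: Callable[[], Trajectory],
                shadow: Callable[[], Trajectory],
                tolerance_m: float = 0.5) -> Trajectory:
    """Always act on the primary plan; log when the shadow plan disagrees by
    more than tolerance_m (an arbitrary placeholder threshold)."""
    plan_a, plan_b = primary(), shadow()
    if max_deviation(plan_a, plan_b) > tolerance_m:
        print("stacks disagree: log this clip for offline review")
    return plan_a

# Toy usage with hard-coded plans standing in for the two planners.
plan = cross_check(lambda: [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)],
                   lambda: [(0.0, 0.0), (1.0, 0.9), (2.0, 1.8)])
```

The point of the sketch is just that a comparison like this costs extra compute for the shadow stack, which is exactly where the question of AP3 headroom comes in.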
 
Let's look at this a different way. Let's break the entire FSD system into three parts: 1) sensors (cameras, including number and placement), 2) processing power (the Full Self-Driving computer), and 3) software....
Agreed - many people call for more sensors, and sensors are excellent, if for no other reason than redundancy. However, we've seen Cruise and Waymo, with massive arrays of sensors, stall in the middle of intersections and, most recently with Waymo, turn into an oncoming traffic lane. Sensors are fine, but the code that acts on those sensor inputs is what matters most.
 
Let's look at this a different way. Let's break the entire FSD system into three parts: 1) sensors (cameras, including number and placement), 2) processing power (the Full Self-Driving computer), and 3) software....
I think your analysis is good and a reasonable way to break the issues into a manageable discussion. As you know, there is a huge amount of repetitive circling and meandering in the discussions here. It's difficult to make a logical point in one area (a specific topic within one of your categories) without someone kind of missing it, or blowing past it by bringing up an annoyance related to something else. That's not to say that I think the forum will actually embrace your, or any particular, framework to make the discussion more productive :)
Personally, I don't think there is much of an issue with #1. My car seems to see the environment just fine. Even in cases of tricky UPL turns, it can still see as well as I can.
Here I take somewhat of a departure. I think a lot about the hardware suite and what it can do in concert with the developing software, and how humans deal with an arguably inadequate bio-hardware suite.
Agreed - many people call for more sensors, and sensors are excellent, if for no other reason than redundancy. However, we've seen Cruise and Waymo, with massive arrays of sensors, stall in the middle of intersections and, most recently with Waymo, turn into an oncoming traffic lane. Sensors are fine, but the code that acts on those sensor inputs is what matters most.

I do think the NN can learn to do great things with even mediocre camera images, yet I've been an advocate for more and/or better camera angles. Sometimes people push back on such suggestions and talk about how there is a 360° view, or how humans can drive just fine with only two eyes, even one.

Aside from a lot of specific discussion about perspective, geometry, occlusions in the environment, resolution and all that, I take the general position that there are some relatively inexpensive "superhuman" capabilities that can be leveraged to make up for obvious deficiencies in the current state of the self-driving perception and planning software.

For example, the simple fact that the car does have full-time surround vision (even if imperfect) is a superhuman capability that we wouldn't trade away for a swiveling, bobbing, only centrally-sharp pair of cameras behind the windshield. That would be silly, more expensive and less effective. So let's embrace what a bunch of inexpensive cameras can do and maximize that.

If I had been there in 2015 or so, I think I would have argued for a few more cameras and/or better placement, for an exterior microphone, and perhaps a set of IR illumination LEDs - all of which were available and inexpensive at the time. That would confer certain "superhuman" capabilities that I think would have greatly simplified some of the problems challenging the project right now: man-years of development in managing creeping behavior, vulnerable or hostile actor persistence and prediction, and parking lot challenges. We can say that the software can eventually overcome these, but I think Tesla would be farther ahead with some of that 2015-available hardware, had it been included.

What about lidar, HD radar, imaging sonar, or other exotic sensors? People gloss over the key fact that those were unavailable at a practical cost and performance level when the 3 and Y were being planned. That is beginning to change, and we might see HD radar, for example.

But my point is that even with the 2015-ish level of hardware, Tesla could have leveraged it somewhat better and we'd already have a smoother and more confident FSD experience.

And this is where we come back to the thread topic: perhaps the cost of an HW4 retrofit, full or partial, would pay dividends, as the development team could move ahead knowing that the whole fleet could leverage it, and they could reprioritize on next steps.
 
I think I speak for the majority of owners in Europe when I say this with respect to that statement:

😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂
Speaking as an owner in Boston: I think AP is simply amazing on highways. I use it 99% of the time I'm driving on roads where it makes sense (which is everywhere but surface streets). It's not just that it works "pretty well." It's life-changing for those of us who used to suffer through long commutes day in and day out.

That said, I do think that Tesla has a legal issue on their hands with FSD if they can't make FSDb on city streets basically as good as AP on the highway. And I think this legal issue is made worse if they make FSDb work, but only on new cars with new HW, and that HW platform is not made available to those who originally purchased the feature.
 
I think I speak for the majority of owners in Europe when I say this with respect to that statement: 😂😂😂…
NoA works on freeways about as well as FSDb works on city streets. That is to say, if it is just staying in its lane and preventing a collision with the car ahead, it does well. Luckily that's about 80% of highway driving. NoA, however, has many of the same problems as FSDb, including phantom braking, (recently) bad lane selection, poor understanding of when lanes are ending and when to merge, etc. It's just that we really only see these issues getting on and off the highway, for the most part.
 
I think AP is simply amazing on highways.
Really?

I think it’s amazing if I’m the only car on the road or if traffic’s light, but …

The speed-based lane changes absolutely suck with even just moderate traffic.

It sucks in stop and go traffic.

It sucks when merging with cars entering the highway.

It sucks at reacting early to stopped traffic far ahead; it only reacts to what’s close by, recognizing individual cars rather than treating traffic as a grouping of objects.

It sucks when the current lane turns into two lanes (staying in the center until it finally realizes what’s going on and quickly jerking into one of the lanes).

It sucks when the current lane ends and merges with the next lane.

It sucks when coming up to an 18-wheeler and not giving it extra room (cracked windshields must be its kink).

It sucks at following the speed limit when it goes down (it’s supposed to slow down before the sign, not after).

Because the lane change logic sucks, it sometimes misses the exit, adding another 5 to 10 minutes to a trip.

I can’t wait to get v11. I love the Beta and hope the highway experience improves with the unified tech stack.
 
Really?

I think it’s amazing if I’m the only car on the road or if traffic’s light, but …

It sucks when coming up to an 18-wheeler and not giving it extra room (cracked windshields must be its kink)....

So one thing I've definitely noticed is that NoA gives quite a bit of room to semi trucks for me; it actually gets pretty close to the opposite side lane marker.
 
I am jumping in on a thread that is already 8 pages long. Here are my comments:
  • Promising FSD was a sales gimmick by Tesla to stimulate interest, and it certainly did that. We should do what we can to hold their feet to the fire to make sure they deliver on that commitment.
  • It doesn't bother me that there have been long delays in introducing FSD. It should surprise no one that FSD is a hardware and software challenge.
  • With regard to moving to HW4 (then probably 5, 6, 7, etc.), the danger is that they stop development on HW3. It is hard to believe they would have ongoing parallel development programs for multiple systems (Microsoft doesn't support WIN7!).
  • The bigger problem arises when regulators (whoever they may be around the world) decertify FSD on HW3 cars and force Tesla to delete the function on our cars. It is fine for Musk to say that HW3 is 200% safer than human drivers, but will they be able to support that and HW4 at the same time? I doubt it.
  • I use FSD on the highways all the time. I would like Tesla to make that perfect, and presently it is not.
 