Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

I guess I'm one of those on the other side. My S was undrivable on FSD. Tesla checked the software, cameras, and whatever else they could. They didn't ship me another build until .4. That was not usable on I-95 or around town. I got .6 and it's now usable on I-95. I'll go out at 4:00 in the morning and see what happens around town. However, I drive a lot of two-lane roads and that's a problem. It doesn't know to stay way back from orange trucks, and it can't pass safely since the cameras are centerline and would expose too much car to oncoming traffic. A long-range camera in the driver-side mirror would be wonderful. My daughter has a coworker with FSD on his Y. It works almost perfectly. You don't even know the car is driving itself. Same build. Go figure.
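The centerline-camera point is just similar-triangle geometry. Here's a toy sketch of it; every number below (camera offsets, distances, widths) is a made-up illustrative assumption, not a Tesla spec:

```python
# Toy line-of-sight model for passing on a two-lane road: how far left must
# the ego car move before a camera can see an oncoming vehicle past a truck
# ahead? Lateral positions are meters measured leftward from the ego lane
# center; all numbers are illustrative assumptions.

def required_shift(cam_offset, truck_edge, truck_dist, oncoming_pos, oncoming_dist):
    """How far the car must move left before the camera's sight line to the
    oncoming vehicle clears the truck's left rear corner."""
    # Similar triangles: the sight line from (cam_x, 0) to
    # (oncoming_pos, oncoming_dist) crosses the truck's rear plane at
    # cam_x + (oncoming_pos - cam_x) * truck_dist / oncoming_dist,
    # which must be at or left of truck_edge to clear it.
    t = truck_dist / oncoming_dist
    min_cam_x = (truck_edge - oncoming_pos * t) / (1.0 - t)
    return max(0.0, min_cam_x - cam_offset)

# Truck's left edge 1.3 m left of lane center, 20 m ahead; oncoming car
# 150 m away, centered 3.6 m to the left (middle of the opposing lane):
print(required_shift(0.0, 1.3, 20, 3.6, 150))  # ~0.95 m for a centerline camera
print(required_shift(1.0, 1.3, 20, 3.6, 150))  # 0.0 m for a mirror-mounted camera
```

Under those made-up numbers, a centerline camera has to poke nearly a meter of car toward the oncoming lane before it can see, while a camera out at the mirror already can, which is the complaint in a nutshell.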
 
Keep using FSD and report back - I'm interested to see if you still find it lacking after a few more miles.

Just a follow-up after a night's sleep:

Isn't it pretty worthwhile that it's in public release and it's good enough that you can actually debate over whether it's impressive or not in the first place?

Only 2-3 months ago the general public considered this feature vaporware. Now the rhetoric, or initial reaction, has turned into "oh no, I'm scared about how impressive it is" and "oh no, I'm scared about the future because of this and what it means".

For me, I had a lot of disengagements while getting used to it. Far fewer once I got used to trusting it a bit more in traffic-related situations. It's definitely gotten better since April.
Yeah, I think I gained a false perspective using FSD yesterday with all my problems, as I had already gone through the calibration phase when I left the service center after the hardware update. I saw with my own eyes the completed calibration and then it was not acting like what I saw from vids and what I read on here. After doing a second camera calibration last night, I took it out this morning and it worked really well...loved using the roundabouts!

So I'm not sure what happened, but I believe again! haha.
 
Happy for you, but then Tesla should do a mandatory recalibration with every new build, especially for each new FSD install.
 
If this LinkedIn post is accurate, it sounds like they're further along than we thought.
She says progress is exponential. That happens when you're a long way away; as you near autonomy, progress becomes asymptotic. There's a famous chart of this, but I can't seem to find it.
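The chart being described is essentially a logistic S-curve. A minimal sketch of why the same curve reads as exponential early and asymptotic late (the constants are arbitrary):

```python
import math

# Logistic S-curve: looks exponential far below the ceiling,
# asymptotic as it approaches the ceiling.
def logistic(t, ceiling=1.0, k=1.0, t0=2.0):
    return ceiling / (1.0 + math.exp(-k * (t - t0)))

for t in range(-4, 9, 2):
    print(t, round(logistic(t), 4))
# -4 0.0025, -2 0.018, 0 0.1192   <- each step ~7x the last: "exponential"
#  2 0.5
#  4 0.8808, 6 0.982, 8 0.9975    <- crawling toward the ceiling: "asymptotic"
```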

To be fair, neither had Waymo or Cruise. They just put the hazard lights on and wait for a driver to come assist them.
You know that's not true. Fleet Response gets cars unstuck in almost all cases. Only in rare cases do they send Roadside Assistance out.
 
This prompted me (much like AI) to look up some info on the Waymo Pole accident in Phx
Folks, this is exactly what Tesla is avoiding by NOT doing sensor fusion, NOT relying on planning/control heuristics, and NOT relying on HD maps.

Found a YT video on this with first-person details. The environment suggests the vehicle's cameras and lidar would have very easily 'seen' this pole, and the impact appears near head-on with force (the impact popped off the passenger-side front fascia and dented the pole, as shown by the reporter). The road is straight, clearly marked, and wide, on a clear day. The pole is normal size and not obscured by any semantic scene data (like a brown wall behind it).

Then, Waymo sent another vehicle to the exact same spot! Why? They should have known the first vehicle struck an object due to camera, lidar, wheel-speed sensor, and accelerometer profile data. Based on that, they should have kicked off an impact protocol that had a remote operator inspect the status of the vehicle, and that operator would have easily seen the pole had been struck. But that isn't what happened.

Would bet the camera identified the object and classified it as an obstacle, lidar did the same, and then the planning software layer somehow 'forgot' about it and controlled the vehicle into the obstacle. *And* the front bumper has a lidar, and the pole literally hit that lidar! How was that lidar data ignored? (Guessing sensor fusion could allow even the most egregious, high-confidence data from one sensor to be completely discounted when heuristics are allowed to be a determining factor, as there is simply no way an AI model would have allowed this data to be ignored.) Troubling if their HD maps were 'missing' this pole and that was the determining factor. More on that below...



This suggests they do NOT have a way to detect, react to, and control for obstacles and impacts of this nature. Would they have been able to detect a person standing still on that street? And what if it had hit a human and they did NOT react by notifying medical assistance **AND** sent another car to the same spot?

If I were on the Waymo team, I'd be extremely intent on understanding the root cause of this incident.

I'd also think the city of Phoenix would want an immediate investigation, with Waymo turning over video and sensor data, to understand how something like this happened.

(side point: Why not have AZ plates on this car? That seems weird)

And this might be the smoking gun, as the poles are NOT shown in Google Maps, but their shadows *ARE*:
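To make the speculated failure mode concrete, here's a toy sketch of how a fusion stage that blends live-sensor confidence with an HD-map prior could end up discounting a real obstacle that both live sensors report. The function, weights, and threshold are entirely hypothetical; nothing here is Waymo's actual stack:

```python
# Hypothetical confidence-weighted fusion with an HD-map prior: an obstacle
# missing from the map drags the fused score down even when both live
# sensors see it clearly.

def fused_confidence(lidar_conf, camera_conf, in_hd_map,
                     w_lidar=0.4, w_camera=0.3, w_map=0.3):
    map_prior = 1.0 if in_hd_map else 0.0
    return w_lidar * lidar_conf + w_camera * camera_conf + w_map * map_prior

# Pole seen clearly by lidar and camera, but absent from the HD map:
score = fused_confidence(lidar_conf=0.95, camera_conf=0.90, in_hd_map=False)
print(score)  # 0.65 -- below a hypothetical 0.7 planning threshold, so the
              # planner "forgets" an obstacle both sensors reported
```

That's the shape of the argument above: fusion plus heuristics creates a path by which high-confidence sensor data can be voted down.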
 
I appreciate your continued fight for honesty and truthfulness on these topics. Please don't relent to the nimnut trolls; they are dangerous to an uninformed investor. I had a misdiagnosed case of Lyme disease that sucked away my life for weeks, and this was after a disastrous winter. So we are going to be doing some work mid-summer that will be exhausting and brutal, and that would be better done in mid-winter, but for the need for monies. Nothing chaps my shorts worse than people who know better continuing to propagate falsehoods. It is deceitful and potentially quite injurious to a poorly informed investor. So thank you @Doggydogworld!
 
There is a good discussion of this in the Waymo thread. I thought our kindly mod asked us to reduce the Waymo content here?
 
Would you also agree with my thesis of Phoenix or Chandler as a prime candidate to start RT? (Edit: removed timeframe.)
 

Maybe we're going off topic but it's the weekend.
Good to know the camera calibration helped. When people report issues with FSD, I do believe them even if my experience is different.
I think Tesla still needs much more training data but it's only a matter of time at this point.

Before v12, when FSD was driven by 300k lines of C++ code, I was always skeptical of how FSD would work in countries with odd unwritten road rules. Take Buenos Aires. In the BA neighborhoods (e.g. Villa DeVoto, Palermo), there are no signs at most intersections and the first car there has the right of way. If that car has another car or two trailing it, they too have the right of way. You sit there waiting for the train of cars to finish and then you go (and if there are cars behind you, they go with you). With v11 (with C++) I was skeptical FSD would ever work in BA. Now, with the neural-network approach, I can see how feeding training data from Buenos Aires Tesla drivers to the network could achieve FSD there.
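For a sense of why that convention is painful to hand-code, here's a minimal sketch of the rule as an explicit heuristic (hypothetical pseudologic, nothing to do with Tesla's actual C++):

```python
from dataclasses import dataclass

@dataclass
class Car:
    arrival_time: float   # when it reached the intersection (s)
    gap_to_leader: float  # seconds behind the car ahead, inf if leading

def ego_may_proceed(ego: Car, cross_traffic: list[Car],
                    train_gap: float = 2.0) -> bool:
    """BA convention: first car there has right of way, and cars trailing
    it closely ('the train') inherit it."""
    for other in cross_traffic:
        arrived_first = other.arrival_time < ego.arrival_time
        in_train = other.gap_to_leader < train_gap
        if arrived_first or in_train:
            return False  # wait for the train of cars to finish
    return True

# One cross car arrived before us, with two more trailing close behind it:
cross = [Car(0.0, float("inf")), Car(1.5, 1.5), Car(3.0, 1.5)]
print(ego_may_proceed(Car(1.0, float("inf")), cross))  # False: let the train pass
```

Every unwritten local convention needs another hand-tuned branch like this; that's the core of the argument for learning the policy from local driving data instead.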
 
Every new AI challenge has always been handled better at the start by hand-written human code, and as compute gets cheaper and faster, learned approaches eventually catch up and surpass the pace of improvement of human code. The Bitter Lesson is an excellent read on this.

Now that compute is no longer the bottleneck, the next bottleneck for FSD is how quickly they can feed that compute with data on new corner cases to train on, and come up with a metric to measure what's better. The corner-case data side is well known to everyone, but the metric part was what Elon mentioned in the shareholder meeting. Right now these steps rely on human knowledge (I think), but I wonder if Tesla's roadmap will have AI taking over those as well. Karpathy's "project vacation" comes to mind.
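One plausible shape for that "metric to measure what's better" problem, sketched as miles-per-disengagement plus a bootstrap comparison between two builds. This is purely illustrative; Tesla's actual evaluation metric isn't public:

```python
import random

def miles_per_disengagement(drives):
    """drives: list of (miles, disengagements) tuples for one build."""
    total_miles = sum(m for m, _ in drives)
    total_dis = sum(d for _, d in drives)
    return total_miles / max(total_dis, 1)

def prob_b_beats_a(build_a, build_b, n_boot=10_000, seed=0):
    """Fraction of bootstrap resamples in which build B scores higher."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_boot):
        a = [rng.choice(build_a) for _ in build_a]
        b = [rng.choice(build_b) for _ in build_b]
        if miles_per_disengagement(b) > miles_per_disengagement(a):
            wins += 1
    return wins / n_boot

build_a = [(120, 3), (80, 1), (200, 5)]   # (miles, disengagements) per drive
build_b = [(150, 2), (90, 1), (210, 3)]
print(prob_b_beats_a(build_a, build_b))   # high fraction -> ship build B
```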
 
You know that's not true. Fleet Response gets cars unstuck in almost all cases. Only in rare cases do they send Roadside Assistance out.

No, I don't know this. I've been told repeatedly by the folks that follow Waymo extensively that they do not remote-control the vehicles and that it's a true L4.

I've read that Waymo can essentially send bits of metadata to a car to tell it how to respond in certain situations; but that's a far cry from the question Elon was asked during the Q&A.

Happy to follow up in the Waymo thread as to not make this too off-topic. But as far as I know, Waymo does not "remote" into stuck vehicles and drive them out of situations.
 

It seems that the CTO and CEO of Mobileye have yet to learn The Bitter Lesson. They don't think Tesla's end-to-end approach will work.

 
Waymo and lidar feel like Toyota with hydrogen from where I'm sat. Both are VERY VOCAL that those things are the solution, and it doesn't matter how much evidence mounts against them, they will not change.
This is an underappreciated plus point of Elon. He frequently admits he was flat wrong, and then immediately changes direction. It's happened a lot, and he gets a lot of flak for it, but it's way better than going down the pit deeper and deeper, refusing to accept that it's not working.
 
“This Delaware judge probably did more damage to Tesla than short sellers did to Tesla between 2013 and 2019, but only in the short run.”


The judge's decision in the Tornetta case was issued at the end of Jan 2024. A week earlier, Tesla's stock price was just over $180 a share.

A month AFTER the ruling it was just over $200 a share.

The only significant SP declines from just BEFORE the ruling to now were the results of:

Poor Q1 P&D numbers
and
The week leading up to the expected poor Q1 ER, with the stock rebounding right after, back to just a few bucks under $200 a week after the ER.

After that it drifted back down a bit, but in summary, the closing price Thursday, when we knew the comp vote had passed again, was within pennies of the stock price the week before the judge rescinded the original comp package.

Thus the judge doesn't appear to have done much damage to anything compared to years of short sellers, based on actual facts, I'm afraid. Seems the market was rational enough to realize Tesla would resolve the compensation issue one way or another regardless of that decision.

So this claim seems mostly hate-driven vs fact-driven.

Doubly so when folks start creating elaborate conspiracy theories because the President appears in a picture with a state governor, and judges such as McCormick are appointed by state governors, so that governor also appears in an entirely unrelated pic with the judge, as if that's in any way meaningful or related to the case.

I have a picture of myself with Cary Elwes; that doesn't mean I ever wrestled Andre the Giant, even though there's a whole movie showing the two of them interacting.

Even worse is layering on an outright false claim that she ruled against Elon in the Twitter case... there was no final ruling in that case; it was settled before any such ruling was issued.
 
The 3 is using 2170. The answer is that Panasonic currently isn't making enough 2170s to supply 3 production in the US.

4680 is going into Semi and CT, and they are trying their best, but currently there's no excess capacity for much else.

So I'm gonna ask again: for everyone who is disappointed with a so-called *intentional delay* in the 25k car, how do you suppose Tesla will get the batteries for it?
Disagree.

M3P uses 2170. 3SR uses CATL prismatic LFP cells. 3LR uses LG Chem cells; I am not sure of the chemistry or format.

Semi uses 2170.
 
I think that won't be the case when volume production starts, on complexity alone: a 50k-cell battery pack has a huge assembly time for a single vehicle vs. a 10k-cell pack. That obviously depends on the 4680 ramp.

50k Semis a year, half of them LR, is 24 GWh/y: a full 4680 line, or a bit more, dedicated to it.
If the Semi will use 4680 cells, wouldn't it make more sense to build the Semi factory near 4680 cell production, instead of right next to their 2170 supplier?
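A quick back-of-the-envelope check on that 24 GWh/y figure quoted above (the pack sizes are assumptions for illustration, not published Tesla specs):

```python
# 50k Semis/year, assuming hypothetical pack sizes of ~600 kWh (long range)
# and ~360 kWh (standard range), half of each.
semis_per_year = 50_000
lr_pack_kwh, sr_pack_kwh = 600, 360

gwh_per_year = (semis_per_year / 2 * (lr_pack_kwh + sr_pack_kwh)) / 1e6
print(gwh_per_year)  # 24.0 GWh/y, consistent with the figure quoted above
```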
 
Depends how much Tesla cares about getting the IRA credit back for much of their Model 3 sales... right now the only real path to doing that is, as 4680 ramps beyond CT needs, to bring back the 4680 Model Y, freeing up US-made cells to go into LR AWD 3s so they qualify for the $7500 credit... (and the 2170 cells from China could go into the Semi, which won't care about the consumer tax credit).

Long term, of course, the Semi could also move to 4680 as that scales even further, but it seems that would make more sense to do by eventually adding 4680 lines in Nevada rather than moving cells from Texas (where they could go right into CTs, and maybe Ys again as I suggest above) to Nevada, where the Semi will be made.
 
Didn't Tesla announce an expansion of Giga Nevada that will manufacture 4680s at the same time they said the Semi would be produced in the same area?

Yep, they did: Tesla breaks ground on $3.6B Giga Nevada expansion for Semi and 4680 cell

Tesla has officially broken ground on the $3.6 billion expansion of Gigafactory Nevada, which will add 4 million square feet of manufacturing space and two new facilities for Semi and 4680 cell production.