Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

I believe the real question is not whether Tesla has released yet another FSD with improvements (mind blowing even), but whether they have truly achieved 100% neural net end to end.

If this is really true, and the NN can learn from constant video feeds, then the improvement from here should proceed rapidly, perhaps even increasing in speed as the NN becomes “smarter”. At that point FSD is inevitable.

Has this happened? I keep hearing it has, but definitions are such a b*&#%.

Has Tesla created a primitive AI that can teach itself at an increasing rate with only data feeds and direction? I think that this has been identified as the only path to real FSD for a while now.
 
Some thoughts on FSD/robotaxi revenue.
True robotaxi is a long way off, even with V12, but if V12 (or more likely V13) can handle 95%+ of a journey without interventions, then how the hell would anybody choose any other car for ride-share driving? FWIW, Bing suggests there are about 1 million Uber drivers in the US.
The problem is that a Tesla plus FSD is an expensive up-front cost, and people with that kind of cash generally wouldn't be driving an Uber, so I think Tesla needs to get actually serious about courting the ride-share driver market.

From what I can tell, right now Tesla just sits and hopes that people who drive a taxi or an Uber 'discover' how good FSD is and then sign up for it. That feels silly, given that Tesla has the ULTIMATE tool for these potential customers.

I think if Tesla provided an all-in bundle for taxi drivers or uber drivers, with car lease, insurance and FSD all bundled, they would get a lot of takers. It just needs someone at Tesla to become an evangelist for targeting that market.
 
There is never any on-vehicle learning (and it'd be nightmarish to troubleshoot if there were since every car in the fleet would behave differently)

There are a ton of environmental factors that can cause it to act differently in seemingly the same spot... and the car does get real-time map and routing data that can change FSD behavior in some ways (but not the NNs themselves)
Correct, but there are also updates behind the scenes (or at least there used to be). However, that would be some seriously quick turnaround time, so I lean toward the environment varying the outcome here. Flukes do happen though; maybe a fix was already in the works.
 
I appreciate your regular grounded projections of near-term earnings implications. However, I think the calculation of X subscribers at $Y/mo subscription fee is missing perhaps two of the bigger implications of a 'ChatGPT' moment on both Tesla's business and TSLA:

1. Once it becomes obvious that compelling autonomy is inevitable and that Tesla has both a hardware and data advantage, Tesla vehicles may command a sales-price premium as people won't want to buy into a 'dumb' car, increasing vehicle margins.

2. This ChatGPT moment may drive a much higher TSLA P/E multiple, as people start believing the future earnings potential for the autonomy part of the business, and some folks may rush into it as an "AI play" that has been underappreciated in the last year of AI mania.

I'm not that focused on either of these things as I believe the real 'winnings' are farther over the horizon and prefer to keep accumulating at these share prices. But I think it's potentially risky to think that EPS and TSLA upside is constrained by the # of FSD subscribers at the current subscription price.
Right, so WHEN FY2025 EPS comes in at the $6-8 range (a ton of variability here due to deferred revenue for Energy and the potential FSD take rate alone) AND FSD reinvigorates Tesla as an AI play, as you say @Wingfoiler... I can see a scenario where Tesla easily runs above its ATH by 2025. All it takes is a P/E of 75. Why is this P/E deserved? Well, say EPS goes from $4 in FY2024 to $7 in FY2025, just to make the math easy. A fair PEG of 1 implies 75% EPS growth will garner a 75 P/E.
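To sanity-check that back-of-envelope PEG math, here's a minimal sketch (the function name and the PEG-of-1 assumption are just for illustration, not a valuation model):

```python
def implied_pe(eps_prior: float, eps_next: float, peg: float = 1.0) -> float:
    """P/E implied by a given PEG ratio and year-over-year EPS growth (in %)."""
    growth_pct = (eps_next - eps_prior) / eps_prior * 100  # $4 -> $7 is 75% growth
    return peg * growth_pct

pe = implied_pe(4.0, 7.0)  # PEG of 1 on 75% growth -> P/E of 75
price = pe * 7.0           # implied share price on FY2025 EPS
print(pe, price)           # 75.0 525.0
```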
 
So you may have seen that I'm in the process of replacing a car. While doing research, I learned that the $4,000 used-EV credit was NOT point of sale (not an instant credit) for Tesla (delivery locations weren't going through the online process to do the instant rebate).

Today that changed. Assuming someone didn't make a mistake on the Tesla website.

I did not expect the transportation cost to eat into the price. Quite a few of the cars I'm looking at have $2,000 transfer fees, so I may have to find one below $23,000 to be sure I stay under the $25,000 limit.

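A quick sketch of that eligibility math (this assumes, as the post does, that the transfer fee counts toward the $25,000 sale-price cap; the constant and function names are made up for illustration):

```python
CREDIT_CAP = 25_000    # used clean vehicle credit sale-price limit
TRANSFER_FEE = 2_000   # transfer fee seen on the listings above

def qualifies(listing_price: int, fee: int = TRANSFER_FEE) -> bool:
    """True if the listing price plus fees stays within the cap."""
    return listing_price + fee <= CREDIT_CAP

print(qualifies(23_000))  # True: 23000 + 2000 = 25000, right at the limit
print(qualifies(23_500))  # False: 23500 + 2000 = 25500 exceeds the cap
```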
 
Correct, but there are also updates behind the scenes (or at least there used to be).

<citation required>

All previous debate on this has said that's not true: the firmware's contents cannot change outside of a complete firmware update because, for security, the car CRCs the entire blob. If anything changes, it fails the check.




The only behind-the-scenes stuff was the per-drive data sent to the car. That's mostly map/routing info, and only if you set a destination, but it can also improve driving behavior since it includes road/traffic-control elements in the map data.
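The whole-blob integrity check described above can be pictured like this (purely illustrative: Tesla's actual firmware verification scheme is not public, and a real one would use cryptographic signatures rather than bare CRC32):

```python
import zlib

def checksum(blob: bytes) -> int:
    """Checksum computed over the entire firmware image."""
    return zlib.crc32(blob)

def verify(blob: bytes, expected: int) -> bool:
    """Any change anywhere in the image makes the check fail."""
    return zlib.crc32(blob) == expected

fw = b"example firmware image"
crc = checksum(fw)
print(verify(fw, crc))                    # True: untouched image passes
print(verify(fw + b"patched byte", crc))  # False: any modification fails
```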

 
I believe the real question is not whether Tesla has released yet another FSD with improvements (mind blowing even), but whether they have truly achieved 100% neural net end to end.

If this is really true, and the NN can learn from constant video feeds, then the improvement from here should proceed rapidly, perhaps even increasing in speed as the NN becomes “smarter”. At that point FSD is inevitable.

Has this happened? I keep hearing it has, but definitions are such a b*&#%.

Has Tesla created a primitive AI that can teach itself at an increasing rate with only data feeds and direction? I think that this has been identified as the only path to real FSD for a while now.
Well, we know there is some non-NN programming, as the cars still do the NHTSA stop. I don't believe that behavior is trained: the cars seem to stop at a stop sign earlier than a normal driver would, then proceed as a normal driver would, creeping through the intersection until it's clear to go.

There will always be cases where the car is programmed to do something other than what the NN was trained to do, but let's hope these are very few. Most of these will probably be for regulatory reasons, like speed limits and obeying certain laws that most drivers ignore.
 
Well, we know there is some non-NN programming, as the cars still do the NHTSA stop. I don't believe that behavior is trained: the cars seem to stop at a stop sign earlier than a normal driver would, then proceed as a normal driver would, creeping through the intersection until it's clear to go.

There will always be cases where the car is programmed to do something other than what the NN was trained to do, but let's hope these are very few. Most of these will probably be for regulatory reasons, like speed limits and obeying certain laws that most drivers ignore.
Actually they could solely use a neural network (for inference) without using any rule based methods, howsoever implemented, to make complete stops.

They’d need to curate the training data appropriately, but that should be sufficient.

I, for one, know there are people who make complete stops, because I often do. So real world exemplars exist.

I don’t think it’s a stretch to think training data is evaluated and potentially rejected rather than labeled: Tesla’s folks presumably must evaluate footage from stop signs to remove clips of people who rocket through the signs. Perhaps some non-neural-network methods are used to cull bad exemplars from the training data.
 
Well, we know there is some non-NN programming, as the cars still do the NHTSA stop. I don't believe that behavior is trained: the cars seem to stop at a stop sign earlier than a normal driver would, then proceed as a normal driver would, creeping through the intersection until it's clear to go.

There will always be cases where the car is programmed to do something other than what the NN was trained to do, but let's hope these are very few. Most of these will probably be for regulatory reasons, like speed limits and obeying certain laws that most drivers ignore.
This is the one thing that will continue to bother me (and the ones behind me more), but I don't see a way around it. If they stop the car where it can peek, this might be past "the line" per NHTSA. And it's even harder to determine where that ideal line is located, especially when the data doesn't reflect the actual law.

(Edit: Editing on your edit... yes, we could filter the dataset for those rare natural/safe stops. Maybe even inject simulations into the training. I hope so, for better PR with non-FSD owners, but there's lots more to solve first, as it's not too bad. Plus, when there are clearly no cars coming, it seems quite natural.)
 
Getting a lot of phantom braking in my neighborhood near puddles that are in adjacent lanes.
I'm excited about end-to-end, but I have to admit it seems to have regressed on a lot of basic stuff, like getting into the right-turn lane in my neighborhood to turn right, etc.

The fact everyone else is singing positive songs makes me comfortable saying it's just a matter of the next update or two, with the way it learns, but I'm not experiencing the profound leap forward other people are.

(SW Houston)
 
Actually they could solely use a neural network (for inference) without using any rule based methods, howsoever implemented, to make complete stops.

They’d need to curate the training data appropriately, but that should be sufficient.

I, for one, know there are people who make complete stops, because I often do. So real world exemplars exist.

I don’t think it’s a stretch to think training data is evaluated and potentially rejected rather than labeled: Tesla’s folks presumably must evaluate footage from stop signs to remove clips of people who rocket through the signs. Perhaps some non-neural-network methods are used to cull bad exemplars from the training data.
Yeah, but they said that in a very large percentage of the data, people do not stop fully.

It is not very natural right now. Chuck Cook has a video with just unprotected left turns, and it stops well before any normal person would, at a point where you cannot see into the intersection. This is why I believe it is programming overriding the NN training.

Silly that NHTSA is intervening in a way that doesn't really help safety at all.
 
If the Tesla FSD Team is listening, thanks so much!!! Well worth waiting for V12.3. The shock and awe... SO worth it!
😍😍😍😍😍😍
Cheers!

Agreed. I even interrupted a session to record/send kudos to the team. Just because. Maybe someone will actually hear it...

This morning we did a 50ish mile round trip in light rain. Rural, city, highway. Blind turns, U-turns, Full stops, merges, juggling cars and what they were up to around us. Zero issues. I am particularly happy with the lack of constant lane changes now (once I pick a lane, it allows me to pretty much stay in it and doesn't try to "think" for me) and the Automatic Speed Offset is excellent. So glad it has gotten to this level.
 
12.3 easily passed the wife (as passenger) test this morning doing errands in urban traffic. She’s been extremely uncomfortable with it on city streets to this point and even doesn’t like some of autopilot’s behavior on the highways. So pretty much a “two thumbs down” critic most of the time.

I’d say we’re on the right track. 😉
 
Yeah, but they said that in a very large percentage of the data, people do not stop fully.

It is not very natural right now. Chuck Cook has a video with just unprotected left turns, and it stops well before any normal person would, at a point where you cannot see into the intersection. This is why I believe it is programming overriding the NN training.

Silly that NHTSA is intervening in a way that doesn't really help safety at all.
Even so, it seems more likely to me that the behavior is baked in at training rather than overridden at inference (simulated data may be used, other parameters injected, etc.).

The windscreen cameras are ahead of the driver’s eyes, so the vehicle has visibility before the driver.

As for safety, people rationalize all sorts of inappropriate behavior.