
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Because Hertz is the owner... When you rent a Tesla from Hertz, you cannot use your phone or link your account. You have to use a key card. Maybe they have something special worked out for Uber drivers, but normal renters cannot access upgrades.

Beep!

Hertz will now let you use your own Tesla app and car profile with their rentals.
 
Regardless, Uber drivers who own Teslas are already doing this, recording it, and putting it on YouTube.

We aren't talking about just letting FSD do the job 100% of the time; we are talking about having someone who drives for a living intervene as often as necessary.

You know, like, wouldn't this be a better experience than how Waymo and Cruise do it now with remote drivers and chase cars?

Let's say a passenger rides in a Waymo or Cruise, then gets a demonstration of FSD in an Uber Tesla. They tell two friends, etc.

In the long run this will expose more people to autonomy and likely reinforce Tesla's dominance among passengers who have the experience to compare it with Waymo and Cruise.
I think you are severely overestimating the capabilities of FSD Beta. It can be dangerous, especially on left turns, when approaching cars making right turns, and when turn lanes open up on roads, and it's not even programmed to recognize school zones or school bus stops.

It has a long way to go before it's like Cruise or Waymo, but that's a completely different model where they use HD maps and are geofenced. They are also speed-locked.
 
Beep!

Hertz will now let you use your own Tesla app and car profile with their rentals.
Really? Can you access upgrades? That was the biggest complaint I had using a Hertz rental. It kind of sucks having to use the key card and not having walk away locks, etc.

Edit: Used the search function and you can't access upgrades, but they added a QR code to use the phone as a key.
 
The current tax incentives pretty much lock Chinese-made cars out of the US, and it's even worse for them given concerns about their ability to capture data.

As for branding, the market is pretty sour on Chinese-made goods, and Chinese car brands are not well established when it comes to reliability. It took Hyundai two decades of aggressive marketing and a top-of-the-line warranty to gain people's trust. What BYD needs to do is market Chinese-made cars under a brand that doesn't seem Chinese, like Polestar.
I don't get why, when people talk about Nio, BYD, etc., they ignore the fact that all Chinese EV makers are locked out of the US market. If they were just going up against the inefficient legacy automakers, then they would probably overcome the disadvantage they face because of the IRA.

But they have to deal with Tesla, which is positioned as well as it possibly could be to take advantage of the IRA and is much more efficient than legacy auto. Beyond the direct and indirect credits for battery production and EV sales that Tesla gets and they won't, their logistics are going to be much more expensive since they have to ship overseas, whereas Tesla produces and delivers entirely within the US.

If Chinese automakers want to go the route of setting up a factory here, it'll be at least a year, if not two, before those factories are making cars, and by then Tesla will have Gen 3 on the road and Tesla's battery production will be at such a large scale that the amount of credits Tesla receives will allow it to be even more aggressive with pricing.

It's really a no-win situation for Chinese automakers. And what's worse for them is that Europe seems to be going down the road of doing its own IRA-style credits to promote local automakers, which, again, Tesla benefits from and they don't.
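Just to put rough numbers on those credits, here's a back-of-envelope sketch (the $35/kWh cell and $10/kWh module figures are the commonly cited IRA 45X rates; the 75 kWh pack and the assumption that the full consumer credit applies are purely illustrative, and actual eligibility depends on sourcing rules):

```python
# Illustrative only: rough per-vehicle IRA benefit for a domestic producer.
# Assumes the commonly cited 45X rates and a hypothetical 75 kWh pack;
# actual eligibility depends on sourcing and assembly rules.
CELL_CREDIT_PER_KWH = 35    # USD/kWh, 45X cell production credit (assumed)
MODULE_CREDIT_PER_KWH = 10  # USD/kWh, 45X module production credit (assumed)
CONSUMER_CREDIT = 7_500     # USD, 30D clean vehicle credit (if it qualifies)

pack_kwh = 75  # hypothetical pack size

production_credit = pack_kwh * (CELL_CREDIT_PER_KWH + MODULE_CREDIT_PER_KWH)
total_benefit = production_credit + CONSUMER_CREDIT

print(f"45X production credit: ${production_credit:,}")       # $3,375
print(f"Including 30D consumer credit: ${total_benefit:,}")   # $10,875
```

None of that is available for a car built in China and shipped over, which is the point.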
 
I think you are severely overestimating the capabilities of FSD Beta. It can be dangerous, especially on left turns, when approaching cars making right turns, and when turn lanes open up on roads, and it's not even programmed to recognize school zones or school bus stops.

It has a long way to go before it's like Cruise or Waymo, but that's a completely different model where they use HD maps and are geofenced. They are also speed-locked.

This in no way changes the fact that Uber drivers are using FSD (Beta) in their daily rides NOW, and, to my knowledge, are intervening appropriately while FSD leaves passengers with a good impression.

Very likely a better impression than Waymo and Cruise leave; they likewise may have overestimated their capabilities and can be dangerous (blocking emergency vehicles, etc.), yet have been allowed into the public space.
 
This in no way changes the fact that Uber drivers are using FSD (Beta) in their daily rides NOW, and, to my knowledge, are intervening appropriately while FSD leaves passengers with a good impression.

Very likely a better impression than Waymo and Cruise leave; they likewise may have overestimated their capabilities and can be dangerous.
Getting too far into FSD will get this moved, but I doubt Hertz wants that sort of potential liability. I think when FSD is closer it's possible, but it's not ideal right now. Waymo is at almost 8k miles per disengagement, while FSD Beta is around 100 miles for critical disengagements and 10 for non-critical ones (the car just waiting too long, etc.). It's apples to hand grenades.
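For scale, quick arithmetic on those figures (rough community estimates, not official statistics):

```python
# Rough comparison using the figures quoted above (approximate, not official).
waymo_miles_per_disengagement = 8_000
fsd_beta_critical_miles = 100     # miles per critical disengagement (estimate)
fsd_beta_noncritical_miles = 10   # miles per non-critical disengagement (estimate)

print(f"Waymo vs FSD Beta, critical: ~{waymo_miles_per_disengagement / fsd_beta_critical_miles:.0f}x")        # ~80x
print(f"Waymo vs FSD Beta, non-critical: ~{waymo_miles_per_disengagement / fsd_beta_noncritical_miles:.0f}x")  # ~800x
```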
 
He (Vaibhav Taneja) did not speak at the last earnings call but I feel pretty certain that he has been present during the earnings calls over the past few years.
The reason you don't see Executive Teams (of virtually all companies) hold Earnings Calls with video (Zoom or other) is that they usually have support staff with them. The support staff is there in case a question comes in from an analyst that requires the staff to dig for numbers.
When a difficult question comes from an analyst, a CEO will be the first to answer the question (in generalities) to buy time for the team to dig into the data.

John (Analyst):
Have the tough market conditions put pressure on your margins for Product B in Europe this past quarter?

CEO:
I'll go first; thanks for the question, John. We've seen very positive interest in Product B in Europe, as we believe we have a compelling product... blah... blah.
Joe do you have anything to add?

Joe (CFO) after huddling with the finance staff:
Yes - I'll add that our margins for product B held up nicely in Europe going from 25.32% in Q1 to 25.45% in Q2.

I ask because if this were a pre-planned transition, it would have been easy enough to either introduce him or give him a small role to discuss rev rec or something. Dropping your CFO right after earnings and appointing an internal person that no one has ever heard of doesn't scream "well-planned transition" to me.
 
Anyone else feel like this is the beginning of the end in the market?
Au contraire. This is the end of the beginning. The end of traditional economies. The beginning of new economies in transportation, energy, space, artificial intelligence and fintech. The next stage for Tesla will be like nothing we have ever seen. I have foreseen it. Time is on our side.
 
Yeah, absolutely not a "rewrite". In general, there are 3 main objectives in the FSD stack:
  1. Vision/perception (culminates in the Occupancy Network)
  2. Navigation (route/lane planning integrated w. map/traffic/weather updates)
  3. Real-time controls (currently 300K+ lines of C++ code in v.11.x)
The first 2 items on this list ARE NOT scheduled for a "rewrite": those are constantly retrained with better data sets, but are close to their final form. Item 3 is the focus for the Autopilot team right now (and has been for some months):

Elon Musk on Twitter: "v12 is reserved for when FSD is end-to-end AI, from images in to steering, brakes & acceleration out." / Twitter | May 08, 2023​
Elon Musk on Twitter: "@WholeMarsBlog I tested the version 12 alpha build today. It is mind-blowing." / Twitter | July 27, 2023​
Elon Musk on X: "@Scobleizer Vehicle control is the final piece of the Tesla FSD AI puzzle. That will drop >300k lines of C++ control code by ~2 orders of magnitude. It is training as I write this. Our progress is currently training compute constrained, not engineer constrained." / X | Aug 1, 2023​

Note the final piece of the puzzle, above: AI
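To make "images in, controls out" concrete, here's a toy sketch (purely illustrative; this is not Tesla's code or architecture, and every name, shape, and threshold below is invented) of the difference between hand-written control heuristics and an end-to-end net:

```python
import numpy as np

# Illustrative only -- not Tesla's code. It just shows the shape of the change:
# explicit hand-written heuristics (item 3 above) vs. a learned end-to-end
# mapping from camera frames to steering / accel / brake commands.

def handwritten_controller(lead_gap_m, lead_speed_mps, ego_speed_mps):
    """Stand-in for the ~300k lines of C++ heuristics: explicit rules."""
    if lead_gap_m < 10:                    # too close -> brake
        return {"steer": 0.0, "accel": 0.0, "brake": 0.6}
    if ego_speed_mps < lead_speed_mps:     # room to speed up
        return {"steer": 0.0, "accel": 0.3, "brake": 0.0}
    return {"steer": 0.0, "accel": 0.0, "brake": 0.0}

def end_to_end_controller(frames, weights):
    """Stand-in for the learned replacement: one forward pass, no rules."""
    x = frames.reshape(-1)                 # images in...
    hidden = np.tanh(weights["w1"] @ x)
    steer, accel, brake = np.tanh(weights["w2"] @ hidden)
    return {"steer": float(steer), "accel": float(accel), "brake": float(brake)}  # ...controls out

# Toy usage with random weights (a real system trains these on fleet data).
rng = np.random.default_rng(0)
frames = rng.random((2, 32, 32))           # pretend stack of two tiny camera frames
weights = {"w1": rng.standard_normal((16, frames.size)) * 0.01,
           "w2": rng.standard_normal((3, 16)) * 0.01}
print(handwritten_controller(8.0, 12.0, 15.0))
print(end_to_end_controller(frames, weights))
```

The interesting engineering question is whether a net big enough to replace all those rules still fits in the in-car inference budget, which is what the HW3 questions below are getting at.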



Yup, it's not being rewritten: human hand-written driving code is being replaced by a computer-trained neural net (per Elon above). Now we can talk about how long that will take, but it's dependent upon Dojo training capacity. At some point (likely in the next 6-18 months), Tesla will no longer be 'training compute limited'. That's likely when we'll see the 'singularity'. :D

Here's another interesting take on possible timelines:

Elon Musk: FSD Smarter than a Human by EOY. Herbert Ong Gives Analysis of Timing of FSD and Robotaxi | Randy Kirk on Youtube (Aug 04, 2023)


Cheers to the Coders!
Please excuse me if I get the techno-trivia lingo wrong:

Q1. Re obj3: does freeing up 300k lines of code also free up enough compute that one side of HW3 can cope with the non-optimised NN without needing to (as at present) bring in the other side of HW3 ?;

Q2. Re obj3: does freeing up 300k lines of code also free up enough compute that one side of HW3 can cope with the (subsequently) optimised NN without needing to (as at present) bring in the other side of HW3 ?;

I appreciate that no-one can give a definitive answer at this stage, but is there any view emerging on this ?
 
Please excuse me if I get the techno-trivia lingo wrong:

Q1. Re obj3: does freeing up 300k lines of code also free up enough compute that one side of HW3 can cope with the non-optimised NN without needing to (as at present) bring in the other side of HW3 ?;

Q2. Re obj3: does freeing up 300k lines of code also free up enough compute that one side of HW3 can cope with the (subsequently) optimised NN without needing to (as at present) bring in the other side of HW3 ?;

I appreciate that no-one can give a definitive answer at this stage, but is there any view emerging on this ?
The 300k lines of C++ code is referred to as non-AI, classic compute. I would expect this to run on the CPU part of the HW3/4 SOC, while the neural network runs on the AI accelerator part of the SOC. I would expect less CPU usage and more AI accelerator use. If the AI part of the SOC is already compute constrained, moving the 300K C++ code to the neural network accelerator would make that even more constrained.
 
Please excuse me if I get the techno-trivia lingo wrong:

Q1. Re obj3: does freeing up 300k lines of code also free up enough compute that one side of HW3 can cope with the non-optimised NN without needing to (as at present) bring in the other side of HW3 ?;

Q2. Re obj3: does freeing up 300k lines of code also free up enough compute that one side of HW3 can cope with the (subsequently) optimised NN without needing to (as at present) bring in the other side of HW3 ?;

I appreciate that no-one can give a definitive answer at this stage, but is there any view emerging on this ?
Q1: code runs in the CPUs, not the TRIP NN processors. NN size impact is not known.

Q2: again unknown.

However, the system could theoretically have a smaller fall-back, "limp to the side of the road" NN that does fit in a single NN processor in the event the other side fails.
 
Please excuse me if I get the techno-trivia lingo wrong:

Q1. Re obj3: does freeing up 300k lines of code also free up enough compute that one side of HW3 can cope with the non-optimised NN without needing to (as at present) bring in the other side of HW3 ?;

Q2. Re obj3: does freeing up 300k lines of code also free up enough compute that one side of HW3 can cope with the (subsequently) optimised NN without needing to (as at present) bring in the other side of HW3 ?;

I appreciate that no-one can give a definitive answer at this stage, but is there any view emerging on this ?
The number of lines of code is irrelevant :D
Ignoring multithreading, branch prediction, and other cleverness, a CPU is really only processing one line of code at a time. What matters most is the number of times per second you need to do stuff. I can happily write a single line of code that thrashes hardware like crazy, or thousands of lines of code that run super fast. It's really hard to generalize.
I strongly suspect that those 300k lines are actually very compute-light already. It's just a LOT of complex, interlinking chunks of code to handle various use cases, but they are not all being run at once.

The awesome thing about removing 300k lines of code is simplicity and reliability. Bugs love to nest in huge codebases. Making as much of the codebase as possible pure NN makes for much better code.
Debugging NNs is a bit of a black art, but at least it's a single entity. Debugging 300k lines of accumulated spaghetti written by coders who have since left/died is hellish.
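To illustrate the "line count isn't compute cost" point, a contrived toy example (nothing to do with the actual FSD code):

```python
import time

# One statement that hammers the hardware: ~4 million multiplies.
t0 = time.perf_counter()
heavy = sum(i * j for i in range(2000) for j in range(2000))  # a single "line"
t1 = time.perf_counter()

def light_step(state):
    """Imagine hundreds of small branchy rules like this, each nearly free."""
    if state % 2 == 0:
        state += 1
    else:
        state -= 1
    return state

# Many lines of code, barely any work per call.
t2 = time.perf_counter()
state = 0
for _ in range(1_000):
    state = light_step(state)
t3 = time.perf_counter()

print(f"one heavy line: {t1 - t0:.4f}s, many light lines: {t3 - t2:.6f}s")
```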
 
And if we look at that table I put up of cells per vehicle, we can see that PHEVs were averaging 12 kWh three years ago and have since doubled to 24 kWh. Another few years of this progression (if it persists) and that'll have doubled again to 48 kWh by approximately end 2025, at which point it is a no-brainer for BYD (and everybody else) to just drop the extra cost of the ICE hybrid out of the vehicles and become a full-on BEV pure play. After all, if you look at the 2022 cells/vehicle for non-Tesla, that is currently only 48 kWh. I appreciate that the Wuling Mini (et al.) is a downward distortion on the average cells/vehicle metric.
A longstanding (and totally valid) criticism of PHEVs is that their packs were never used. That makes a lot of sense when it was a small pack that only offered low double-digit miles of range; however, a 24 kWh pack should be sufficient for most people's daily drive. It will be interesting to see at what point PHEVs stop being a distraction and start having a meaningful impact on carbon emission reduction.
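A quick sanity check on that extrapolation, purely as an illustration of the doubling trend described above (assuming a ~3-year doubling period and 24 kWh as today's average, which may well not hold):

```python
# Illustrative extrapolation only: assumes the ~3-year doubling above continues.
current_kwh = 24        # taken as today's average PHEV pack (per the post)
doubling_years = 3

for years_ahead in (0, 3, 6):
    kwh = current_kwh * 2 ** (years_ahead / doubling_years)
    print(f"+{years_ahead} yr: ~{kwh:.0f} kWh")   # 24, 48, 96
```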
 
It will be interesting to see at what point PHEVs stop being a distraction and start having a meaningful impact on carbon emission reduction.

When you see garages popping up that provide service for PHEVs to remove the ICE and replace with an equal weight of batteries, that point will have been reached.
 
The number of lines of code is irrelevant :D
Ignoring multithreading, branch prediction, and other cleverness, a CPU is really only processing one line of code at a time. What matters most is the number of times per second you need to do stuff. I can happily write a single line of code that thrashes hardware like crazy, or thousands of lines of code that run super fast. It's really hard to generalize.
I strongly suspect that those 300k lines are actually very compute-light already. It's just a LOT of complex, interlinking chunks of code to handle various use cases, but they are not all being run at once.

The awesome thing about removing 300k lines of code is simplicity and reliability. Bugs love to nest in huge codebases. Making as much of the codebase as possible pure NN makes for much better code.
Debugging NNs is a bit of a black art, but at least it's a single entity. Debugging 300k lines of accumulated spaghetti written by coders who have since left/died is hellish.
The 300k lines of C++ code is referred to as non-AI, classic compute. I would expect this to run on the CPU part of the HW3/4 SOC, while the neural network runs on the AI accelerator part of the SOC. I would expect less CPU usage and more AI accelerator use. If the AI part of the SOC is already compute constrained, moving the 300K C++ code to the neural network accelerator would make that even more constrained.
Q1: code runs in the CPUs, not the TRIP NN processors. NN size impact is not known.

Q2: again unknown.

However, the system could theoretically have a smaller fall-back, "limp to the side of the road" NN that does fit in a single NN processor in the event the other side fails.
Hmmm ....

- Thank you all for an answer

- now, ignoring the failsafe/limp stuff (thanks @mongo) as an understandable distraction

- can you please explain for the rest of us whether you are in wild agreement with each other or otherwise ?

- and then I can start to follow the logic without just doing a complete nodding dog impression.

I'm serious, because I am genuinely interested, irrespective of being an investor.
 
A longstanding (and totally valid) criticism of PHEVs is that their packs were never used. That makes a lot of sense when it was a small pack that only offered low double digit miles of range, however a 24kWh pack should be sufficient for most people's daily drive. It will be interesting to see at what point PHEVs stop being a distraction and start having a meaningful impact on carbon emission reduction.
We had a BMW i3 REx. I rather liked that design and was disappointed to see it discontinued (not profitable?). My understanding is that these are less common than the dual powertrain PHEV.

The idea of a range extender for an electric only powertrain makes a lot of sense to people who aren’t confident in charging infrastructure or only occasionally drive more than the EV only range. I think these are more likely to get plugged in daily (this is just a hunch).

It seems like, with the range-extender solution, manufacturers could get away with smaller batteries but still sharpen their EV chops as they scale battery sourcing. Like an F-150 Lightning with 150 miles of all-EV range, extended to 250/300 with a diesel-electric generator and a 10-gallon diesel tank.

Not too different from the plan for my Cybertruck: if towing my travel trailer, I could charge using its onboard generator as an emergency backup range extender.
 
Random data point: my brother drives used (higher-end) vehicles (from dealerships or auctions to collection areas) for a wholesaler in the Toronto area; up until last month, most of the profits came from exporting the vehicles to the US.

On a call today, my brother relayed his boss's view: "The used car market has 'collapsed' in the US in the past five weeks"; he figures the used car market will slow down in Canada before Christmas (although he expects the housing market in Canada to stay hot).