Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Are we talking about Mobileye cars, or just HW2.x cars that didn't get HW3 yet, or is there a group of HW3 cars on an orphaned branch?

All legacy Model S&X vehicles, i.e. vertical screen, with AP3 hardware. So 2016? to 2021 models?

Though those are broken into at least two different groups: MCU1 vs MCU2. (Expect MCU2 vehicles to get FSD V12 first.)
 
Here's the chart. If AGI is between 2027 and 2030, robotaxis are imminent.
[Chart attachment]
Kurzweil, who I remind you predicted back in the 1980s that AGI would be achieved by 2029, is perhaps the most recognized futurist on Earth. Unfortunately, his cognition and physical appearance have noticeably declined over the past few years. Here he talks about the AI singularity at SXSW 2024:


This video belongs on this forum for many reasons, most notably that AI, FSD, and Tesla are intertwined. Bon Appétit!
 
Regarding FSD and AI.

This is a genuine question. I really don't know how this stuff works.

Would it be possible, now that AI can make videos, to just fabricate millions of videos instead of actually driving real miles, and then have the FSD computer learn from those videos instead?

Like telling the AI to make tens of thousands of different videos of cars entering a roundabout, making unpredictable lane changes, avoiding cats on the road, or any situation really. If you can have the videos rendered from specific viewpoints, could you then have FSD ready before you build any significant number of real cars, eliminating much of the millions of real-world miles now needed to gather enough video for FSD to learn from?

Say you build a prototype. Measure where all the cameras are and how much view they can capture and instruct the AI to make the videos based on those camera locations.

With the fast AI advancements going on, it seems like if this isn't doable now, it will be at some point in the future?
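To make the "measure the cameras, then ask a generator for matching footage" idea concrete, here is a toy Python sketch. Everything in it is hypothetical: the camera positions, the prompt format, and the video model that would consume these prompts are all made up for illustration, not anything Tesla actually uses.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class CameraSpec:
    name: str
    position_m: tuple          # (x, y, z) mounting point on the prototype, meters
    horizontal_fov_deg: float  # how much view the camera can capture

# Hypothetical camera rig measured from a prototype (numbers are invented).
rig = [
    CameraSpec("front_main", (0.0, 0.0, 1.4), 50.0),
    CameraSpec("front_wide", (0.0, 0.0, 1.4), 120.0),
    CameraSpec("left_repeater", (-0.9, 0.8, 1.0), 80.0),
]

scenarios = [
    "entering a roundabout",
    "car ahead makes an unpredictable lane change",
    "cat crossing the road",
]

def generation_requests(rig, scenarios):
    """Expand every (camera, scenario) pair into a text prompt that a
    hypothetical video-generation model could render."""
    return [
        f"{s}, rendered from camera '{c.name}' at {c.position_m} "
        f"with a {c.horizontal_fov_deg:.0f}-degree horizontal FOV"
        for c, s in product(rig, scenarios)
    ]

requests = generation_requests(rig, scenarios)
print(len(requests))  # 3 cameras x 3 scenarios = 9 prompts
```

The point is only that once the rig geometry is measured, the request list scales multiplicatively with cameras and scenarios, long before any real cars exist.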
 
Would it be possible, now that AI can make videos, to just fabricate millions of videos instead of actually driving real miles, and then have the FSD computer learn from those videos instead?
It's really hard to make a simulator include all edge cases. Doing that is likely harder than solving FSD.

A better way is to find rare edge cases IRL, then have the simulator create variants of those edge cases: for example the same situation but with different weather, added traffic, changed velocities, etc., to train the neural network to generalize the edge case to more situations.

Also, the simulator can be used for validation, as crashing vehicles in simulation is a lot less costly than crashing IRL.
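The "take one real edge case and perturb it" approach described above can be sketched as plain parameter sweeps. The scenario fields and perturbation lists below are invented for illustration; a real simulator would regenerate sensor video for each variant rather than just a parameter dict.

```python
import itertools

# One real-world edge case logged by the fleet (fields are illustrative).
base_case = {
    "scene": "roundabout_entry_17",
    "weather": "clear",
    "traffic_density": 0.2,
    "ego_speed_mps": 8.0,
}

WEATHERS = ["clear", "rain", "fog", "snow"]
DENSITIES = [0.2, 0.5, 0.9]
SPEED_SCALES = [0.8, 1.0, 1.2]

def randomized_variants(case):
    """Yield every combination of perturbations of one logged edge case,
    the way a simulator might multiply it into training scenarios."""
    for w, d, s in itertools.product(WEATHERS, DENSITIES, SPEED_SCALES):
        v = dict(case)
        v.update(weather=w, traffic_density=d,
                 ego_speed_mps=round(case["ego_speed_mps"] * s, 2))
        yield v

variants = list(randomized_variants(base_case))
print(len(variants))  # 4 weathers x 3 densities x 3 speeds = 36 variants
```

One rare real clip becomes 36 training scenarios, which is exactly why finding the edge case IRL first is the scarce step.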
 
Would it be possible, now that AI can make videos, to just fabricate millions of videos instead of actually driving real miles, and then have the FSD computer learn from those videos instead?
Tell me you’ve never watched any of Tesla's talks on their FSD development without telling me you’ve never watched any of Tesla's talks on their FSD development.

Sorry. Had to.

Yes. It’s possible, and tesla does it.

But real-world driving is still better for finding edge cases they can’t predict. There are a massive number of edge cases they have to figure out, and simulated scenarios can come in a literally infinite number of variations.

Training on a finite number of possibilities is infinitely easier than training for an infinite number of possibilities.
 
You can thank NHTSA for that one. FSD used to slowly roll through a 4-way stop, until NHTSA got wind of it, and insisted FSD come to a COMPLETE stop at any stop sign. Annoying, but I think we're stuck with it.
It was way more than that. It stops 1.5 meters before the stop sign, holds still for almost 5 seconds, then rolls out really slowly even though apparently nothing is coming. Only after half the body has crossed the line does it start to accelerate.
 
Tell me you’ve never watched any of Tesla's talks on their FSD development without telling me you’ve never watched any of Tesla's talks on their FSD development.
Well, it's been a couple of years since they had one, hasn't it? Things can change, and it seems like they have regarding AI.

Also, I recall they were intended as recruitment for new AI employees. I was pretty sure I would not qualify, so I skipped some parts :)
 
Would it be possible now that AI can make videos to just fabricate millions of videos instead of actually driving real miles, and then have FSD computer learn from those videos instead?
Yes, but the AI that makes the videos is only as good as the data it is trained with...

Tesla already uses some simulation; it is possible to make variations on an edge case, perhaps altering the lighting, the number of cars/pedestrians present, and the paths of those actors. If the AI which generates the simulation is trained with high-quality real-world data, it will probably generate some good test cases which might take longer to occur in the real world.

It is even possible competitors might mine YouTube videos including videos posted by those testing FSD.

But as well as the video, Tesla also has the steering and pedal inputs from the driver; those may be harder to synthesize in a simulation...

The other advantage is the HW3/HW4/HWx computer: it is designed for the task, though there are competitor products available.

We saw the Nvidia robot demo recently. I see that as the "Android" version of robots, while Tesla Optimus is the "Apple" version.

Similarly, an FSD competitor might join an alliance of vendors with a hardware/software AI toolkit supplier to make the "Android" version of FSD, while Tesla remains the "Apple" version of FSD.

Being the tightly integrated "Apple" version of a technology isn't a disadvantage provided the vendor has sufficient market share. The "Android" version(s) need to both collaborate and compete, it can be harder for them to achieve relative technical superiority within the "Android" family.

Starting later means that the "Android" version of FSD will find it hard to be clearly better than Tesla's version of FSD, or even cheaper. The best they can hope for is to close the gap in a timely fashion.

I think the robot competition has a better strategy, but what they don't have is FSD. Their impressive demos are more around the LLM AI suite, which is something (voice/text) Tesla can plug in at any time.

Does FSD give Tesla some advantage when developing Optimus? (I don't know. but someone here might know.)

Tesla has mass manufacturing experience that many Robot start-ups don't have. I am sure someone once said prototypes are easy :)

Chinese AI companies have the problem that they are Chinese, and might not be trusted in the west.
 
It's really hard to make a simulator include all edge cases. Doing that is likely harder than solving FSD.
OK, creating variations of real cases was actually going to be my second example, but I didn't want to make my post too long.

It just seems like we are getting closer to a point where you can tell the AI to make the edge cases. Like creating videos of the one thousand most common things people put on their truck beds and having those things fall off in front of you at different speeds and distances. You will never get real videos of all those variations, because some of them haven't happened yet. But with AI you should be able to, if not now then at some point in the future.
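The "things falling off truck beds" idea is really just a combinatorial prompt list. A tiny Python sketch, with an invented cargo catalogue and invented speed/distance grids, shows how fast it multiplies:

```python
from itertools import product

# Hypothetical catalogue of cargo that could fall off a truck bed ahead
CARGO = ["mattress", "ladder", "plywood sheet", "cooler", "loose tire"]
EGO_SPEEDS_MPS = [15, 25, 35]       # ego speed when the object falls
DROP_DISTANCES_M = [10, 30, 60]     # gap between ego and the truck

prompts = [
    f"A {c} falls off a truck bed {d} m ahead while ego travels {v} m/s"
    for c, v, d in product(CARGO, EGO_SPEEDS_MPS, DROP_DISTANCES_M)
]
print(len(prompts))  # 5 cargo types x 3 speeds x 3 distances = 45 scenarios
```

With a thousand cargo types instead of five, the same three-line loop would request 9,000 clips, most of which the real fleet may never record.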
 
Yes, but the AI that makes the videos is only as good as the data it is trained with...
Yeah, I wasn't so much thinking GM would buy a few computers and solve this thing.

But Nvidia has money and some smart people, and can afford to keep a portion of its chips for itself, throw more computing power than anyone ever thought possible at a problem like this, and then lease the result out to everyone not named Tesla, without needing a million cars with cameras first. That's what I'm a bit worried about.

Also, if we previously thought Tesla was, say, five years ahead of everyone else, wouldn't all these AI improvements cut that lead? Meaning the value of TSLA would be less than previously thought.
 
Also, if we previously thought Tesla was, say, five years ahead of everyone else, wouldn't all these AI improvements cut that lead?
What is it that Elon says? The only moat is constant progress or something along those lines.
 