> Are we talking about Mobileye cars, or just HW 2.x cars that didn't get HW3 yet, or is there a group of HW3 cars with an orphaned branch?

All legacy Model S & X vehicles, i.e. vertical screen, with AP3 hardware. So 2016? to 2021 models?
> All legacy Model S & X vehicles, i.e. vertical screen, with AP3 hardware. So 2016? to 2021 models?

That's right. I am one of them.

Though those are broken into at least two different groups: MCU1 vs. MCU2. (Expect MCU2 vehicles to get FSD V12 first.)
Kurzweil, who I remind you predicted in the '80s that AGI would be achieved in 2029, is perhaps the most recognized futurist on Earth. Unfortunately, his cognition and physical appearance have certainly declined over the past few years. Here he talks about the AI singularity at 2024 SXSW.

Here's the chart. If AGI is between 2027 and 2030, robotaxis are imminent.
[Attached: the chart]
> Regarding FSD and AI.
>
> This is a genuine question. I really don't know how this stuff works.
>
> Would it be possible, now that AI can make videos, to just fabricate millions of videos instead of actually driving real miles, and then have the FSD computer learn from those videos instead?
>
> Like tell AI to make tens of thousands of different videos of cars entering a roundabout, or making unpredictable lane changes, or avoiding cats on the road, or any situation really. If you can have the videos done from specific viewpoints, could you then have FSD ready before you make any significant number of real cars? That would eliminate much of the millions of miles of driving now needed to get enough videos for FSD to learn from.
>
> Say you build a prototype. Measure where all the cameras are and how much view they can capture, and instruct the AI to make the videos based on those camera locations.
>
> With the fast AI advancements going on, it seems like if this isn't doable now, it will be at some point in the future?

It's really hard to make a simulator include all edge cases. Doing that is likely harder than solving FSD.
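One concrete piece of the question above, whether a given camera layout even covers a scenario, is just geometry. A minimal sketch, with made-up mounting angles and FOV numbers (not any real vehicle's):

```python
def visible(camera_yaw_deg: float, h_fov_deg: float, target_bearing_deg: float) -> bool:
    """Return True if a target bearing falls inside a camera's horizontal FOV."""
    # Smallest signed angle between the target and the camera's optical axis.
    diff = (target_bearing_deg - camera_yaw_deg + 180) % 360 - 180
    return abs(diff) <= h_fov_deg / 2

# Hypothetical rig: forward camera with a 120-degree FOV, rear camera with 140.
cameras = {"front": (0, 120), "rear": (180, 140)}

# A car approaching from 30 degrees right of straight ahead:
coverage = {name: visible(yaw, fov, 30) for name, (yaw, fov) in cameras.items()}
print(coverage)  # the front camera sees it, the rear does not
```

Generating synthetic footage "from those camera locations" would mean rendering each scenario once per camera pose like this, so the fabricated views match what the production car would actually capture.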
> Would it be possible, now that AI can make videos, to just fabricate millions of videos instead of actually driving real miles, and then have the FSD computer learn from those videos instead?

Tell me you've never watched any of Tesla's talks on their FSD development without telling me you've never watched any of Tesla's talks on their FSD development.
> You can thank NHTSA for that one. FSD used to slowly roll through a 4-way stop, until NHTSA got wind of it and insisted FSD come to a COMPLETE stop at any stop sign. Annoying, but I think we're stuck with it.

It was way more than that.
> Tell me you've never watched any of Tesla's talks on their FSD development without telling me you've never watched any of Tesla's talks on their FSD development.

Well, it's been a couple of years since they had one, hasn't it? Things can change, and it seems like they have regarding AI.
> Would it be possible, now that AI can make videos, to just fabricate millions of videos instead of actually driving real miles, and then have the FSD computer learn from those videos instead?

Yes, but the AI that makes the videos is only as good as the data it is trained with...
> It's really hard to make a simulator include all edge cases. Doing that is likely harder than solving FSD.

OK, creating variations of real cases was actually gonna be a second example, but I didn't want to make my post too long.

A better way is to find rare edge cases IRL, then have the simulator create variants of those edge cases: for example, the same situation but with different weather, added traffic, changed velocities, etc., to train the neural network to generalize the edge case to more situations.

The simulator can also be used for validation, since crashing vehicles in simulation is a lot less costly than crashing IRL.
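The variant-generation idea above can be sketched as simple parameter randomization around a logged edge case. The scenario fields and parameter ranges here are invented for illustration; a real pipeline would randomize far more:

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    """A logged edge case, reduced to a few tunable parameters (illustrative)."""
    weather: str
    traffic_density: float   # vehicles per 100 m of road
    ego_speed_mps: float

def make_variants(base: Scenario, n: int, seed: int = 0) -> list[Scenario]:
    """Randomize weather, traffic, and speed around one real logged scenario."""
    rng = random.Random(seed)
    weathers = ["clear", "rain", "fog", "snow"]
    return [
        replace(
            base,
            weather=rng.choice(weathers),
            traffic_density=base.traffic_density * rng.uniform(0.5, 2.0),
            ego_speed_mps=base.ego_speed_mps * rng.uniform(0.8, 1.2),
        )
        for _ in range(n)
    ]

# One real edge case fans out into many plausible neighbors of itself.
logged = Scenario(weather="clear", traffic_density=3.0, ego_speed_mps=12.0)
variants = make_variants(logged, n=5)
for v in variants:
    print(v)
```

The point is that the real-world log anchors the distribution: every variant stays near something that actually happened, instead of asking a generative model to dream up edge cases from nothing.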
> Yes, but the AI that makes the videos is only as good as the data it is trained with...

Yeah, I wasn't so much thinking GM would buy a few computers and solve this thing.

Tesla already uses some simulation, and it is possible to make variations on an edge case: perhaps altering the lighting, the number of cars and pedestrians present, and the paths of those actors. If the AI which generates the simulation is trained with high-quality real-world data, it will probably generate some good test cases which might take longer to occur in the real world.

It is even possible that competitors might mine YouTube videos, including videos posted by those testing FSD.

But as well as the video, Tesla also has the steering and pedal inputs from the driver; it may be harder to guess those in a simulation...
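The point about pairing video with the driver's steering and pedal inputs is essentially the setup for behavior cloning: each frame is supervised by what the driver actually did. A minimal sketch of such a training record; the field names and values are invented, not Tesla's actual format:

```python
from dataclasses import dataclass

@dataclass
class DrivingSample:
    """One timestep of supervised data: what the cameras saw, what the driver did.
    All field names are illustrative assumptions."""
    frame_id: int
    camera_frames: dict[str, bytes]  # raw image bytes keyed by camera name
    steering_angle_deg: float        # label: driver's steering input
    accel_pedal_pct: float           # label: throttle position, 0-100
    brake_pedal_pct: float           # label: brake position, 0-100

# A generated video could supply camera_frames, but the action labels
# (what a good driver *would* do in that fabricated scene) still have
# to come from somewhere, which is what real driving data provides.
sample = DrivingSample(
    frame_id=0,
    camera_frames={"front": b"", "left": b"", "right": b""},
    steering_angle_deg=-4.5,
    accel_pedal_pct=12.0,
    brake_pedal_pct=0.0,
)
```

This is why mined YouTube footage is weaker than fleet data: the video half of the record is there, but the control half is missing and would have to be inferred.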
The other advantage is the HW3 / HW4 / HWx computer: it is designed for the task, and there are competitor products available.

We saw the Nvidia robot demo recently. I see that as the "Android" version of robots, while Tesla Optimus is the "Apple" version.

Similarly, an FSD competitor might join an alliance of vendors with a hardware/software AI toolkit supplier to make the "Android" version of FSD, while Tesla remains the "Apple" version of FSD.

Being the tightly integrated "Apple" version of a technology isn't a disadvantage, provided the vendor has sufficient market share. The "Android" version(s) need to both collaborate and compete; it can be harder for them to achieve relative technical superiority within the "Android" family.

Starting later means that the "Android" version of FSD will find it hard to be clearly better than Tesla's version of FSD, or even cheaper. The best they can hope for is to close the gap in a timely fashion.

I think the robot competition has a better strategy, but what they don't have is FSD. Their impressive demos are more around the LLM AI suite, which is something (voice/text) Tesla can plug in at any time.

Does FSD give Tesla some advantage when developing Optimus? (I don't know, but someone here might know.)

Tesla has mass-manufacturing experience that many robot start-ups don't have. I am sure someone once said prototypes are easy.

Chinese AI companies have the problem that they are Chinese, and might not be trusted in the West.
> Yeah, I wasn't so much thinking GM would buy a few computers and solve this thing.

What is it that Elon says? The only moat is constant progress, or something along those lines.

But Nvidia has money, and some smart people, and can afford to keep a portion of their chips for themselves, throw more computing power at a problem like this than anyone has ever thought possible, and then lease the result out to everyone not named Tesla. Without needing a million cars with cameras first. That's what I'm a bit worried about.

Also, if we previously thought Tesla was, say, five years ahead of everyone else, wouldn't all these AI improvements cut that lead? Meaning the value of TSLA would be less than previously thought.
LiDAR, even a weak one, might be a necessity for FSD as a backup sensor to vision.
Lol, Troy is a joke.
> I thought it was against forum rules to blatantly disparage other active forum members?

Disparage? I like jokes.