Welcome to Tesla Motors Club

Impressed by Waymo's AI Simulation City

Tesla is doing what others were doing in 2016, 2017, 2018, etc. Back in 2019 on Autonomy Day, Tesla and Elon were laughing about others' extensive use of simulation, basically calling it useless.
Where are you getting this from? Project Dojo was first publicly mentioned by Elon Musk in Apr 2019, meaning it was already being worked on. They didn’t just start down the simulation path in 2021.
 
They are demonstrating that they are not merely replaying the same trajectory the images were captured in, but that they can render a novel view from any angle.

Tesla already does this... in the car, LIVE. You can drag the view around to different angles (or at least I have seen people do it in YouTube videos, as I don't have FSD Beta).

I am still failing to see how this is anything more than step 2 of 1,000. They have images of the 'static' world and they can fly around it. What labels all the images (signs, roads, lights, road edges), and what about all the elements of FSD that aren't 'static' and move (other cars, people, animals, trash cans, cones, etc.)? Do they have any other tech demo of the simulation with non-static objects in it?

Seems cool, but a FAR cry from anything that Tesla has shown. I'm really starting to feel like we are just feeding a troll. 🤷‍♂️
 
Obvious bias in my opinion: if it is pro Tesla clickbait it stands, if it is not, it doesn't stand.
Believe me - after my multiple interactions with mods, it’s very difficult to convince them to make any changes either way.

They won’t even remove obvious troll flame baits.

PS: examples of pro-Tesla clickbait? I don't recall seeing many in this subforum. BTW, personally I'd not have changed the title of this thread, except maybe to add "IMO", if at all.
 
They don't test because they are looking for sparse, very odd or unpredictable events.
In fact, the whole "long tail events" thing is an actual myth.
Something in the "long tail" is not there because it happens rarely. It's there because relative to other things, it is rare. The term "long tail" (for those who don't know) refers to a graph/chart where things tail off, like in an exponential decay, or either end of a bell curve. In this discussion, we're talking about frequency of specific events that you can encounter while driving. If you sort your data from most common to least common, the least common section is the long tail.

"Encountering a traffic light" is definitely not a long tail event. But "hubcap falling off bouncing toward your windshield" would certainly be. If we agree that some things happen more often than others, then by definition, there's a long tail. Calling it a myth is nonsensical.
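The definition above is easy to see with a toy tally of driving events. A minimal sketch (the event names and counts are made up purely for illustration): sort events from most to least common, and the bottom of the list is the long tail.

```python
from collections import Counter

# Hypothetical counts of events observed over many miles of driving
# (numbers are invented for illustration only)
events = Counter({
    "traffic light": 120_000,
    "pedestrian crossing": 30_000,
    "construction cone": 8_000,
    "animal in road": 150,
    "hubcap toward windshield": 2,
})

total = sum(events.values())
# Sorted from most to least common; the rare entries at the bottom
# are, by definition, the "long tail" of the distribution.
for name, count in events.most_common():
    print(f"{name:26s} {count:7d}  ({count / total:.5%})")
```

Nothing here says rare events don't exist; it just shows that as long as some events are far less frequent than others, a long tail is there by construction.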
 
I found this interesting article. Waabi is a new start-up that aims to develop autonomous driving through simulation alone, with no real-world testing until the very end:

Like Cruise, Waabi bases its virtual world on data taken from real sensors, including lidar and cameras, which it uses to create digital twins of real-world settings. Waabi then simulates the sensor data seen by the AI driver—including reflections on shiny surfaces, which can confuse cameras, and exhaust fumes or fog, which can throw off lidar—to make the virtual world as realistic as possible.

But the key player in Waabi World is its god-like driving instructor. As the AI driver learns to navigate a range of environments, another AI learns to spot its weaknesses and generates specific scenarios to test them.

In effect, Waabi World plays one AI against another, with the instructor learning how to make the driver fail by throwing tailor-made challenges at it, and the driver learning how to beat them. As the driver AI gets better, it gets harder to find cases where it will fail, says Urtasun. “You will need to expose it to millions, perhaps billions, of scenarios in order to find your flaws.”
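The instructor-versus-driver dynamic described in the article is essentially adversarial scenario search, and Urtasun's point about needing millions of scenarios falls out of it naturally. A minimal sketch with a toy driver model (the `skill`/`difficulty` abstraction is my own simplification, not Waabi's actual system):

```python
import random

random.seed(0)

def driver_succeeds(skill: float, difficulty: float) -> bool:
    """Toy driver model: succeeds whenever its skill exceeds scenario difficulty."""
    return skill >= difficulty

def find_failure(skill: float, tries: int = 10_000) -> int:
    """Stand-in for the instructor AI: sample random scenarios until the
    driver fails. Returns how many scenarios were needed (capped at `tries`)."""
    for n in range(1, tries + 1):
        difficulty = random.random()  # scenario difficulty in [0, 1)
        if not driver_succeeds(skill, difficulty):
            return n
    return tries

skill = 0.5
for _ in range(5):
    needed = find_failure(skill)
    print(f"skill={skill:.3f}: failure found after {needed} scenarios")
    # Training on each discovered failure raises the driver's skill,
    # so every subsequent failure takes more scenarios to find.
    skill += (1 - skill) * 0.5
```

As the driver's skill approaches 1, the expected search length grows without bound, which is the article's point: the better the driver gets, the more scenarios the instructor must generate to expose a remaining flaw.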