Reciprocity
Many credible posters here are doubting how much FSD progress the Tesla AI team could possibly have achieved in the 3 years since the split with MobilEye, an automotive vision pioneer with 30 years of history.
Here's a comparison of AP1 (the last MobilEye iteration) and AP2:
Tesla Autopilot 1 vs. AP2: Incredible Improvements On Difficult Road

I believe that's what a ground-up, first-principles approach and the "Tesla data advantage" of a fleet of hundreds of thousands of vehicles gives you in just 3 short years.
The Tesla testing takes place on a two-lane road with plenty of sharp turns, blind curves, dips, and elevation changes. Most of the time, the lane markings show that passing isn't allowed on this road, which is a good indication of how challenging it is. Later in the video, the lane lines disappear entirely.
"As you can see, Autopilot 1 struggles constantly on this drive, and I can assure you, it used to be WORSE on this road. I have previous tests with AP1."
"On this test, AP2.x absolutely NAILS IT! Successfully navigated the test roads"
And this AP1-to-AP2 comparison was still done on HW2.5, not the very latest firmware; HW3 gives ~21x more processing power.
I believe we've seen nothing yet in terms of FSD magic ...
People forget that the Mobileye system in AP1 Teslas is heavily augmented with Tesla's own code. An off-the-shelf Mobileye system won't do what AP1 does today, and AP1 keeps improving as well. But yes, Tesla did need to completely replace the core functionality of Mobileye, the vision recognition, which is arguably the most complex part. Tesla also started developing the FSD chip and board almost exactly at the time of the breakup, and it surpasses the EyeQ4, which isn't even out yet. They essentially replaced a $15B company with a 30-year head start in 3 years, and over the next 3 years Intel and Google will be so far in the rearview mirror that you won't be able to see them. Google doesn't have the data and is overly reliant on lidar. Mobileye doesn't have the data either, because their partners won't give them that info. Some map data is shared with Mobileye, and that has a great deal of value, but it's still only 1/1000th of what Tesla has with its data collection capabilities.
To most, the lead is not apparent, and what it means is even less understood. The lead is insurmountable because AI and machine learning thrive on data. That's why Google is so good at search and advertising. But it's also why Google lags on driving and has to use simulators to train its NNs. Their simulator can only be as good as they can make it, which is as complex a problem as creating the self-driving car itself. Simulation still has a place, and driving policy is one area for it. But without real-world examples in large volumes, you can't train the NN to deal with a scenario, and you can't even know it needs to be dealt with until it's too late.

Look at the goal: 5 nines of reliability, or 6, or 7 (99.99999%). You need a billion corner cases solved, so you need billions of miles driven. Only Tesla has that data for the foreseeable future. This cannot be overstated: you cannot solve FSD without solving a billion corner cases, and you can't even find them without billions of miles of data.

Gathering and identifying the corner cases is almost as complex as creating a perfect simulator. I say almost, because what Tesla has built is a system that automatically identifies corner cases and gathers examples; only then do humans get involved to annotate the images. The process then runs again to identify more occurrences based on those annotations, and those are fed into the NN to train it. The updated network goes into shadow mode for validation, and the whole cycle starts over until it's fully validated. Done this way, Tesla can leverage massive amounts of data with very little human annotation. No one else is doing this; everyone else has an army of people annotating images. They are so ridiculously far behind it's not even funny.
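The loop described above (fleet triggers flag corner cases, humans annotate only those clips, the network retrains, and shadow mode validates before the cycle repeats) can be sketched in toy form. To be clear, this is my own illustrative pseudocode with made-up function names, trigger rates, and accuracy numbers, not anything Tesla has published:

```python
# Toy sketch of a fleet "data engine" loop. All names, rates, and
# thresholds here are hypothetical illustrations, not Tesla's pipeline.
import random

random.seed(0)

def detect_corner_cases(fleet_miles, trigger_rate=0.001):
    # Fleet-side triggers automatically flag rare scenarios,
    # so only a tiny fraction of driving ever needs human eyes.
    return [f"clip_{i}" for i in range(fleet_miles)
            if random.random() < trigger_rate]

def annotate(clips):
    # Humans label just the flagged clips, not the whole video stream.
    return {clip: "labeled" for clip in clips}

def retrain(accuracy, labeled):
    # Each batch of labeled corner cases nudges the network forward
    # (crude diminishing-returns model for illustration).
    return min(0.9999999, accuracy + 0.01 * len(labeled) / (len(labeled) + 50))

def shadow_mode_ok(accuracy, target=0.95):
    # Run silently alongside the driver; deploy only once the
    # network's decisions agree with reality often enough.
    return accuracy >= target

accuracy = 0.90
iteration = 0
while not shadow_mode_ok(accuracy):
    iteration += 1
    clips = detect_corner_cases(fleet_miles=100_000)
    labels = annotate(clips)
    accuracy = retrain(accuracy, labels)

print(f"validated after {iteration} iterations, accuracy={accuracy:.4f}")
```

The key point the sketch tries to capture is the leverage: annotation effort scales with the number of *triggered* clips, not with total fleet miles, which is why (on this argument) a huge fleet beats an army of labelers.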