
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Okay I tried not to get involved but I feel like people are speaking on different wavelengths. I, as a mediocre machine learning scientist, mediocre TSLA investor, and mediocre meme generator have a well balanced analysis of how Tesla is positioned in the autonomy space.

Where your friend is right:

Your friend is right that camera-based depth perception takes a lot more processing power. Yes, the voxel depth-resolution output is currently likely worse than what Waymo / Cruise are getting from lidar. But this is nothing new and no one should be surprised by it. I wrote about this years ago on TMC, like here
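To see why camera depth resolution degrades compared to lidar, here's a toy sketch of the stereo disparity-to-depth relation. All numbers (focal length, baseline) are hypothetical, purely for illustration:

```python
# Toy illustration: stereo depth = focal_px * baseline / disparity,
# and disparity is quantized to whole pixels, so the size of each
# depth "bucket" grows roughly quadratically with range.
# The focal length and baseline below are made-up example values.

def stereo_depth(disparity_px: float, focal_px: float = 1000.0,
                 baseline_m: float = 0.3) -> float:
    """Depth (meters) implied by a given stereo disparity (pixels)."""
    return focal_px * baseline_m / disparity_px

# Depth uncertainty from a +/- half-pixel disparity quantization step:
for d in (60, 30, 10, 3):
    near = stereo_depth(d + 0.5)
    far = stereo_depth(d - 0.5)
    print(f"disparity {d:>2}px -> depth {stereo_depth(d):6.1f} m, "
          f"1px bucket spans {far - near:6.2f} m")
```

At close range the depth buckets are centimeters wide; at long range they span tens of meters, which is the gap lidar doesn't have.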


Maybe Tesla has already used up all of HW 3.0's processing & memory. This is hard to tell, because of course they are going to use as much as possible before optimizing (induced demand).

He is right that sensor fusion gives higher accuracy than individual sensors (more on this later).
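The sensor-fusion point can be made concrete with a toy inverse-variance fusion of two independent depth estimates. The numbers here are hypothetical, just to show the fused estimate is always less noisy than either sensor alone:

```python
# Toy illustration of why fusing sensors beats either one alone:
# inverse-variance weighting of two independent Gaussian estimates.
# All numbers are made up for illustration.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Combine two independent Gaussian estimates optimally."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always below min(var_a, var_b)
    return fused, fused_var

# Camera says 50 m (variance 4.0); lidar says 49 m (variance 0.25).
depth, var = fuse(50.0, 4.0, 49.0, 0.25)
# The fused variance is lower than the lidar's alone.
```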

I would agree that it's quite unlikely that Tesla will be able to get to, say, 10x safer than humans with the current hardware.

Where your friend is wrong:

Everything else, and every conclusion he draws from the observations above. He basically discounts all of the data Tesla collects because he doesn't think the camera resolution is sufficient to be worth anything. This is inane and myopic.





The Big Picture

Tesla is focused on developing a profitable approach to autonomy


Tesla is starting with an L2 assist system that will probably soon make the entire Tesla AI team cash flow positive. Oh, and they'll be able to sell excellent upgraded passive safety features (all Teslas produced from 2019 onward will soon be able to detect pedestrians / cyclists / running a red light / driving into a curb, etc.). Let's assume the path to L4/L5 robotaxi autonomy takes 5 years; well, in the meantime Tesla is still making money off their development with best-in-class driver-assist features.

Meanwhile, Waymo / Cruise were essentially forced from the beginning to rely on sensor fusion w/ LIDAR / RADAR. Sure, it's more accurate, but they had to choose it because they rely on VC funding. And you aren't getting the next round of funding if you can't demo clear progress. That means focusing on the easy way to get something working (lots of expensive sensors and a focus on limited geographical areas). And still no clear path to profitability.

If Tesla needs to add sensors, improve cameras, or improve processing power, they can do that later. What, exactly, have they lost in the meantime?

Whose path do you think is most stable in the current moment?


Tesla is focused on developing a scalable approach to autonomy

By starting out trying to make FSD work almost everywhere, Tesla is forcing itself to build a scalable solution. There won't need to be as much "going back to the drawing board" as there might be at Waymo / Cruise etc., who have barely gotten things to work outside of one city. Of course, a downside is that the performance of the general algorithm will be much worse at first. The data shows most of the current disengagements are due to mapping issues. Tesla clearly hasn't finished whatever their generalized mapping solution is, but I'm confident they have the diverse data to figure it out. It's hilarious that people believe Tesla's progress has flatlined when even that one issue clearly has a path to improvement.

Meanwhile, Waymo has "solved" perception, but nary a peep about global deployment? If it were scalable, they would be touting their scaling prowess, demoing in 50 cities, and IPOing at a trillion-dollar valuation. Oh, not happening?


Tesla is doing the best compromise between what a data scientist and an accountant would do.

If you told a data scientist you had unlimited funds to solve FSD, he/she would load 100,000 cars with cameras, radars, and lidars and have them driving around every area of the U.S. or the world, collecting the data and working to develop a robust, scalable solution. But that's not cost effective, so no one is doing that. Tesla is choosing to collect more diverse data while competitors are focused on collecting less diverse, but higher-resolution, data.

By developing a "cheap" perception engine (cameras + AI), Tesla is allowing their AI team to move on to focus on how to solve all the other problems in full-scale autonomy. Planning, prediction, whatever else. There is no need to wait for centimeter level depth & perception precision in order to work on all the other areas of the tech stack. This is where your friend is totally wrong. And any good data scientist knows you need a wealth of diverse data in order to make a complicated algorithm generalize well. That's something that Tesla has and competitors absolutely do not.

So Tesla is doing what a good data scientist would do given a constraint on funding. And guess what? Say the camera-only stack isn't good enough in a few years for robotaxi-level precision? Is Tesla farked? Um no, they can simply add high-res radar or lidar in a few years and fuse the data then.


TLDR: Tesla is building a cheap, scalable, and profitable autonomy software stack that they can upgrade with better sensors and hardware in a few years if they need to.

It's almost like people don't understand the importance of scalability and profitability, which I thought were obvious enough to be at the top of an investor's priority list. Guess it's easier than I thought to be an above-mediocre investor?
 
Nice to see the non-GAAP estimate average for Q2 down to $1.98 from $2.14 a few weeks back. That's what Yahoo is showing, anyway.

Same for delivery estimates. Consensus is now at 270k, but estimates are being revised and coming in between 240k and 250k.

 
Intel's Mobileye is probably the other leader in autonomous driving data collection. Mobileye's technology was the foundation for Autopilot back in the day before the two companies parted ways, and it might even be data collection that led to the business relationship deteriorating. Tesla decided to go their own way around 2016 and build their version of what Mobileye had pioneered.

But much of the technology you see in other vehicles with ADAS is Mobileye's stuff implemented in different ways, and they say they're still collecting data from all those vehicles across all the different brands.

In my opinion, autonomous vehicle capabilities will quickly spread after the technology is finally realized and it won't remain with any one company for long. If it actually happens and it greatly improves road safety, it would be unconscionable to not spread the tech as far and wide as quickly as possible to bring down traffic-related deaths.
 
Because of auto labeling doing all the work instead of manual labeling?

 
Autolabelling has been happening all along; the humans are only there to tweak/correct the autolabel output where necessary to improve accuracy.

So I'm not sure what would have changed
Nope, it started out as separate frames, separate cameras, all manual.
Then, it was 10ish-second video clips with all eight cameras combined via NN, with human labeling of objects that then propagated forward and back across cameras and frames.
Now, it's a huge NN doing the labeling and propagation, with humans verifying and extending the object set.
 
Nope, it started out as separate frames, separate cameras, all manual.
Then, it was 10ish-second video clips with all eight cameras combined via NN, with human labeling of objects that then propagated forward and back across cameras and frames.
Now, it's a huge NN doing the labeling and propagation, with humans verifying and extending the object set.
This sounds like auto labeling is not a reason for letting human labelers go, right? Sounds like autolabeling is an amplifier for human labeling, not a standalone solution.

Other possible reason - move labeling to a location with cheaper wages? Is this something anyone can learn?
 
This sounds like auto labeling is not a reason for letting human labelers go, right? Sounds like autolabeling is an amplifier for human labeling, not a standalone solution.

Other possible reason - move labeling to a location with cheaper wages? Is this something anyone can learn?
Autolabeling greatly replaces human effort.
People validate labeled output instead of marking objects themselves. People are also needed to create new object classes.
One piece of Karpathy's Operation Vacation.

Buffalo also has labeling teams.
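The autolabel-then-verify workflow described in this thread might be sketched like this. Everything here (class names, confidence threshold, the stand-in autolabeler) is hypothetical, just to illustrate the idea of humans validating NN output rather than marking objects themselves:

```python
# Hypothetical sketch of an autolabel-then-human-verify loop:
# the model proposes labels, humans only correct low-confidence
# proposals or add new object classes. All names are made up.

from dataclasses import dataclass

@dataclass
class Label:
    object_class: str
    confidence: float

def autolabel(clip_id: str) -> list[Label]:
    """Stand-in for the big NN that proposes labels for a video clip."""
    return [Label("car", 0.97), Label("cyclist", 0.62)]

def human_review(labels: list[Label], threshold: float = 0.8) -> list[Label]:
    """Humans only touch low-confidence proposals; the rest pass through."""
    verified = []
    for lbl in labels:
        if lbl.confidence >= threshold:
            verified.append(lbl)  # accepted as-is, no human time spent
        else:
            # In the described pipeline a person confirms or corrects this.
            verified.append(Label(lbl.object_class, 1.0))
    return verified

labels = human_review(autolabel("clip_0001"))
```

The point of the sketch: human effort scales with the low-confidence fraction, not the total label count, which is why autolabeling can shrink the labeling team without removing humans entirely.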
 
Other possible reason - move labeling to a location with cheaper wages? Is this something anyone can learn?
If I remember correctly, Andrej has mentioned that it's critical to have the labeling team colocated with the AI engineers in the SF Bay Area so that they can all work together directly. However, @mongo is saying there's a Buffalo team too, which I didn't know.

Andrej definitely has said that the labeling job at Tesla is quite tricky and the tools they’re using change frequently.