Welcome to Tesla Motors Club

The Information reports new stuff about Autopilot and autonomy

True story. A friend of mine was driving south on the freeway in San Diego and he got onto a cell phone call. He was so absorbed in the call he didn’t realize he had made it all the way to the Mexico border, with no way to turn back. He ended up having to go into Mexico and come back. The point is that we ourselves use a part of our brains to drive that really could be considered a not very smart autopilot. Just like a Tesla AP.

This happened to a friend of mine as well. He ended up in Mexico, somehow got lost trying to turn around and had to basically beg his way back in because he didn't have his passport. I couldn't believe he could be so mindless, but maybe it happens more than I thought!
 
The question is, how do they test that their system -- however it's implemented, whether with a NN or hand-coded, behavior cloning or traditional motion planning and control -- is safe in the million different situations it may encounter? The answer appears to be: they test it on their customers.

Am I crazy to think that that is a brilliant way of doing it? Actual customers and cars in the field will experience a more diverse set of conditions than a small set of employees or simulators. To me it seems like one of Tesla's main advantages that they have a fleet of cars they are getting data from and they aren't afraid to send beta software out to collect info.

I also agree with the sentiment that while it may be that other companies have technology way ahead of what is currently in my Tesla, the fact that I cannot access it makes it relatively worthless to me today. From my research and experience, EAP is the best system available to end users. Entirely possible or maybe probable that another company will release something better in the future, but until then, the argument seems silly.

Just today I used drive on nav and I had to correct it or ignore lane change suggestions a few times, but it also made very effective suggestions most of the time. The net effect, at least for someone paying attention, is that it is useful when it is right and easy to ignore when it isn't. And I am presumably providing valuable data to Tesla to make the next version better. Seems like a win-win. Maybe I have a more positive outlook on it all because I don't hang on every tweet or statement a company spokesman/CEO makes about his company and, as an engineer, I know how to adjust estimates ;)
 
...they test it on their customers.

Yes, that has been the case since the introduction of AP1, and it has continued through AP2, AP3, and Full Self-Driving too.

Owners are legally expected to read the manual and realize that the human driver is still responsible for driving during the beta-testing phase.

No one forces an owner to accept that: to use a beta feature, they must respond to a screen in the car to opt in or opt out.
 
It is really very disturbing to me that Tesla has only recently started using simulation to test.

We don't know how recently, though. A year? Two years? Six months? Three months?

In May 2016, Sterling Anderson said that simulation is part of the Autopilot development cycle. That suggests simulation may have been in use since 2014 or 2015.

In articles like this, I find there is a lot of ¯\_(ツ)_/¯ stuff that isn't given enough detail to really understand. I want dates, numbers, specifics.

As for HD maps... it sounds like they're going in the direction of geo-fencing for FSD to me. Just like all those other AV companies that Tesla fans like to criticize for being geo-fenced...

I think all the ~300,000 HW2 Teslas can be used for HD mapping, so it's a very BIG geo-fence.
 
In articles like this, I find there is a lot of ¯\_(ツ)_/¯ stuff that isn't given enough detail to really understand. I want dates, numbers, specifics.

For example: does Tesla's path planning use a neural network or not? What exactly is meant by "behavior cloning" in the context of this article? Is the author saying that Tesla is using one big end-to-end neural network for perception, prediction, path planning, control, and everything else? Because that doesn't seem likely. I think maybe the author is misusing the term "behavior cloning".

So, what did the author mean to say? Just that Tesla wants to use separate, independent neural networks for perception, prediction, and path planning? That's not too crazy or controversial, at least I don't think so. Waymo uses neural networks for perception and prediction. Not sure about path planning.
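For what it's worth, "behavior cloning" usually just means supervised imitation learning: fit a model that maps logged observations to the control commands a human driver actually issued. A minimal sketch of the idea, using synthetic data and a linear model as a stand-in for a network (everything here is hypothetical, not Tesla's actual pipeline):

```python
import numpy as np

# Behavior cloning in its simplest form: learn a policy that maps an
# observation (e.g. camera-derived features) to the steering command a
# human driver actually issued in that situation.

rng = np.random.default_rng(0)

# Hypothetical logged data: 1000 frames, 16 features each, plus the
# human driver's steering angle for every frame.
X = rng.normal(size=(1000, 16))                      # observations
true_w = rng.normal(size=16)                         # "human policy"
y = X @ true_w + rng.normal(scale=0.01, size=1000)   # human steering

# "Clone" the behavior with least squares (a stand-in for SGD on a NN).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The cloned policy now predicts steering from observations alone.
pred = X @ w
print(float(np.mean((pred - y) ** 2)))  # training error near noise floor
```

The point is that this is narrowly a way to learn one module (like lane changes) from human examples; it doesn't imply one giant end-to-end network for the whole driving task.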

It seems like some stuff got lost in translation. Either there was some misunderstanding between the sources and the story author, or in the author's effort to convey the tech stuff to a general audience, some stuff got overly simplified. I don't know.
 
Am I crazy to think that that is a brilliant way of doing it? Actual customers and cars in the field will experience a more diverse set of conditions than a small set of employees or simulators. To me it seems like one of Tesla's main advantages that they have a fleet of cars they are getting data from and they aren't afraid to send beta software out to collect info.

I also agree with the sentiment that while it may be that other companies have technology way ahead of what is currently in my Tesla, the fact that I cannot access it makes it relatively worthless to me today. From my research and experience, EAP is the best system available to end users. Entirely possible or maybe probable that another company will release something better in the future, but until then, the argument seems silly.

Just today I used drive on nav and I had to correct it or ignore lane change suggestions a few times, but it also made very effective suggestions most of the time. The net effect, at least for someone paying attention, is that it is useful when it is right and easy to ignore when it isn't. And I am presumably providing valuable data to Tesla to make the next version better. Seems like a win-win. Maybe I have a more positive outlook on it all because I don't hang on every tweet or statement a company spokesman/CEO makes about his company and, as an engineer, I know how to adjust estimates ;)



I tend to believe (or hope!) that Tesla is actually working on some cool, secretive *sugar* that would blow our minds if we truly knew what's up! I read these posts, especially from people that are obviously in the know to some extent and get discouraged...but hold out that Elon's crew has a solution and is just figuring out how to implement it into the world. I've been drinking btw, so....:)
 
I tend to believe (or hope!) that Tesla is actually working on some cool, secretive *sugar* that would blow our minds if we truly knew what's up! I read these posts, especially from people that are obviously in the know to some extent and get discouraged...but hold out that Elon's crew has a solution and is just figuring out how to implement it into the world. I've been drinking btw, so....:)

Right, but even if they aren't working on super secret blow-your-mind tech, what they have actually deployed to customers is really useful and ahead of anything else that has also been deployed to customers. I include in that the over-the-air update capability for future improvements.

I understand people not liking that Tesla listed capabilities that don't currently exist (and may never exist) in their sales literature. But for me personally, I feel the price for EAP, given what it does today, is fair. And knowing that the capability will improve, even if it never hits the ultimate dream, it is a bargain. I drive quite a bit and do a lot of long trips, and AP has noticeably reduced driving fatigue for me. I would go so far as to say I can pay better attention to the road for longer stretches with AP on than without.
 
The question is, how do they test that their system -- however it's implemented, whether with a NN or hand-coded, behavior cloning or traditional motion planning and control -- is safe in the million different situations it may encounter? The answer appears to be: they test it on their customers.

tl;dr: Tesla's strategy of using owners as guinea pigs sounds bad, but it comes with substantial benefits for owners as well.

Edit: @cartwright beat me to it, making the same point more succinctly.

I agree that "testing on customers" sounds bad, but (1) so far this strategy doesn't seem to have resulted in a statistical increase in accidents or deaths, (2) most people seem to love Autopilot anyway, compared to any competing system, and (3) I'm not sure what a better strategy would be. Maybe simulations could be valuable in testing (validation) of a new Autopilot version, but I suspect simulations are more useful earlier, providing pre-labelled data to train the system in the first place. Simulations are not completely accurate representations of reality, and they cannot reproduce all of the unforeseen corner cases encountered in the real world. In the end, I don't see any way to test a full Autopilot implementation for worldwide release without deploying it to a worldwide fleet and letting it drive for millions of miles.

Tesla does do their own testing, of course, before releasing the software to customers (apparently Elon is involved in this). Then they have the early access program of at least 1,000 Tesla owners worldwide, which they use for a second round of testing to catch rarer or region-specific problems. They also apparently send pre-release software to the general public in "shadow mode". It's not clear exactly what data they collect from this, but maybe they can flag potential problems in certain cases, like when the driver brakes hard but the test software did not detect an obstacle.
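A toy sketch of the kind of disagreement-flagging that shadow mode is speculated to do; the field names and the hard-braking threshold here are my assumptions, not anything Tesla has published:

```python
# Illustrative shadow-mode check: the candidate software runs silently
# while the human drives; frames where the human brakes hard but the
# software saw no obstacle get flagged for upload and review.
# All thresholds and field names are hypothetical.

HARD_BRAKE_G = 0.4  # assumed deceleration threshold, in g

def flag_disagreements(frames):
    """Return frames where the human and the shadow software disagree."""
    flagged = []
    for f in frames:
        human_braked_hard = f["driver_decel_g"] >= HARD_BRAKE_G
        software_saw_obstacle = f["shadow_obstacle_detected"]
        if human_braked_hard and not software_saw_obstacle:
            flagged.append(f)
    return flagged

frames = [
    {"t": 0, "driver_decel_g": 0.1, "shadow_obstacle_detected": False},
    {"t": 1, "driver_decel_g": 0.6, "shadow_obstacle_detected": False},
    {"t": 2, "driver_decel_g": 0.5, "shadow_obstacle_detected": True},
]
print([f["t"] for f in flag_disagreements(frames)])  # → [1]
```

The appeal is that only the rare disagreement frames need to leave the car, which keeps the upload volume tiny compared to streaming raw sensor data.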

After all of this, they start releasing to the general public, asking them to confirm that the software is "beta" and that they need to stay attentive. In my opinion, the nice thing about this setup is that the user can choose to enable a new feature immediately or wait for other cars to collect more validation miles. Those who want the features early, and are happy to test new and imperfect features, help everyone's cars get better faster than exclusively internal or closed beta testing would. Although it seems counter-intuitive, Tesla is betting that this approach will actually be safer in the long run than a more cautious one. The goal is a solution that is both significantly safer than a human driver and widely deployed in the world. It is the combination of the two that leads to lots of lives saved, and the sooner the better.

No one else is taking this approach (besides comma.ai?). If more careful, conservative approaches are really better, we should eventually see a widely deployed solution from Waymo, Uber, or one of the other car manufacturers, and when that solution is released, we will see how it compares with Tesla on performance and availability. Exciting times!
 
Wow. I'm at a bit of a loss for words after reading this article. If it is true, the gap between where Tesla actually is and what Elon is advertising (and many folks here believe) is comical.

So let me get their testing strategy straight: Elon cruising around and tweaking settings, a few guys on test tracks, and a primitive digital environment. Then push to customers and see what happens. No wonder they are making such slow progress. This approach is lunacy and a bridge to nowhere. I don't see how any technical person worth their salt could look at this workflow and see a path to success.

The biggest takeaway for me, though, is that there appears to be a leadership vacuum on the AP team. I'm guessing a big part of it has to do with Elon and his management style: a micromanager who doesn't take "no" for an answer.

Oh, and maybe next year they'll work on right hand turns...oh my.

Seriously, this article can't be true. Someone with insider knowledge please tell us this isn't the current state of affairs.
 
...slow progress...

Google/Waymo autonomous program started in 2009 with huge resources as if money was no object. You need a $75,000 lidar? No problem, you'll get 2 and many more!

Tesla autonomous program started in 2014 with very limited financial resources. Want a lidar? Forget it!

Even today, Waymo has no independent youtuber personally evaluating their system while numerous youtubers are reviewing Tesla's system every single day!

Tesla has not reached the goal yet, but the progress is tremendous when it comes to average consumers.
 
...100,000 current generation LIDARs :)

I am not sure people appreciate how difficult an Autonomous Program is.

It's easy to boast how cheap and how good a "current" LIDAR is but the fact is we are nowhere near that kind of level: cheap and good!

These cheap and good LIDARs are only good for low speed and short range detection as in city driving.

When you talk about Autopilot cruising at 90 MPH, that's another whole different ball game!

The range of the $75,000 Velodyne HDL-64E is about 120 meters, less than 400 feet, or about one football field length.

The newer Velodyne VLS-128 is much better, with a range of 300 meters, more than 900 feet, or almost three football field lengths.

Even with that much range, its rated speed limit is 70 MPH!

$75,000 buys the old, slow-speed unit. You'll need much more money, beyond even the cost of the Velodyne VLS-128, if you want to match the speed of Tesla's 90 MPH system.
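To see why range matters so much at speed, a quick back-of-the-envelope stopping-distance calculation helps; the deceleration and reaction-time figures below are generic assumptions on my part, not Velodyne or Tesla specs:

```python
# Rough check: how much detection range does highway speed demand?
# Assumes ~0.7 g braking and a 1.5 s reaction time -- generic
# assumptions, not manufacturer figures.

G = 9.81            # gravitational acceleration, m/s^2
MPH_TO_MS = 0.44704 # miles per hour -> meters per second

def stopping_distance_m(speed_mph, decel_g=0.7, reaction_s=1.5):
    """Reaction distance plus braking distance, in meters."""
    v = speed_mph * MPH_TO_MS
    return v * reaction_s + v**2 / (2 * decel_g * G)

for mph in (70, 90):
    print(f"{mph} mph: need roughly {stopping_distance_m(mph):.0f} m "
          f"of clear detection range")
```

Stopping distance grows roughly with the square of speed, so going from 70 to 90 MPH adds on the order of 60 meters to the required detection range under these assumptions, and a sensor's effective range for reliably classifying objects is shorter than its maximum quoted range.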
 
I am certainly not an expert on anything, but my almost-80-year-old brain, below-average eyes, and slow reflexes have not had an accident in this century. My opinion is that Musk is right: driving is a software problem, not a hardware problem. Hell, I even drove for two years while legally blind. I don't recommend that, but the software in my brain compensated for a lot. Right now a new Tesla sees a whole lot better than me or anyone else: it has eight eyes and a bunch of other sensors. People basically learn to drive by getting behind a wheel and driving. Usually it's pretty ugly at first, but after a time we can drive from work to home and have no memory of the drive. This is the stage Tesla is at: learning to drive. I find it refreshing that Tesla trusts their customers just as my father trusted me when I learned to drive.
 
I am certainly not an expert on anything, but my almost-80-year-old brain, below-average eyes, and slow reflexes have not had an accident in this century. My opinion is that Musk is right: driving is a software problem, not a hardware problem.

Well the hardware problem may not be in the sensors, but in the compute. Even the new super ultra mega HW3 AI chip Tesla is bragging about has less than 1% of the compute capability of the human brain with its 100 billion neurons and unfathomable number of connections between them. (And I should also mention, a human brain only uses around 20W of power, vs a high-end GPU at close to 300W under load.) There are a few hardware problems in there to solve I think.
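That "less than 1%" figure can be sanity-checked with rough arithmetic. Both numbers below are hedged: the ~144 TOPS figure is Tesla's own claim for the HW3 computer, and estimates of the brain's equivalent compute span several orders of magnitude (roughly 1e15 to 1e18 ops/s), so this is a ballpark at best:

```python
# Ballpark sanity check of the "less than 1% of the brain" claim.
# HW3_OPS is Tesla's announced figure; BRAIN_OPS is one mid-range
# estimate from a wide and disputed literature.

HW3_OPS = 144e12    # ~144 TOPS claimed for Tesla's FSD computer
BRAIN_OPS = 1e17    # one rough estimate of brain "ops per second"

ratio = HW3_OPS / BRAIN_OPS
print(f"HW3 is ~{ratio:.2%} of an estimated brain compute budget")
```

Pick a brain estimate one order of magnitude lower and the ratio lands above 1%; one order higher and it drops to hundredths of a percent, which is why the claim is best read as "a small fraction" rather than a precise number.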

And don't underestimate the software problem either. Right now we know that the human brain can do things that deep learning can't, no matter how fast the hardware is, such as learning from a very small number of examples (even just one). There is some exploratory research in this field, but no good answers yet as to how we do it.
 
I consider the computer part of the software; obviously they each need to be designed to work with the other. I remember some interesting work done by the University of London, I believe, back in the '60s: they were trying to duplicate brain cells and connections with transistors. The initial work was promising, almost surprising, but I don't know what happened to it.
 
I don’t think TMC wants to host pirated content. You can read the articles for free if you give them an email address.
Yes, but maybe he is thinking what I am thinking. I am not willing to enter an email address into yet another site just to read articles, not knowing how else they might use it. Possibly more spam. I will just forego reading the articles :)