The Information reports new stuff about Autopilot and autonomy

The highlights:
  • Autopilot has made progress in avoiding crashes into parked cars
  • Work on braking for red lights and stop signs after highway exits, and maybe making right turns, will reportedly start next year
  • Tesla is using simulation and making HD maps
  • Tesla is using a neural network for prediction of vehicle/pedestrian/cyclist behaviour
  • Sadly unclear from the article whether path planning uses a neural network
  • The Autopilot team has 200 people and Karpathy's AI team has 35 people
  • The hardware team is designing a new radar (no details beyond that)
  • Some stuff about "behaviour cloning" that wasn't exactly clear and sounded like it might have gotten lost in translation
  • A lot of flavour and slice of life stuff
Let me know if I missed anything important. Or correct me if my summary is inaccurate.

What Makes Tesla’s Autopilot Different

The 200 People Behind Tesla Autopilot

The articles are paywalled, but you can gain free access to an article by putting in a working email address. (That subscribes you to their frequent email updates.)
 
The 200 People Behind Tesla Autopilot is a really good article.

I think it gives a more realistic picture of where the AP team really is than other articles do.

Like the fact that they're just getting started with a navigation/maps-centric approach.
 
...
  • Some stuff about "behaviour cloning" that wasn't exactly clear and sounded like it might have gotten lost in translation

I thought it's just like "monkey see, monkey do": the system observes how a human driver handles a curve or an obstacle and tries to replicate that kind of driving.

It says others do not rely on that technique because:

1) it "cannot teach an automated driving system to handle dangerous scenarios that cannot be easily anticipated," and

2) it's hard to troubleshoot when errors occur, because a human didn't write the program.
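For anyone curious what "monkey see, monkey do" looks like in practice, here is a minimal sketch of a behaviour-cloning training loop. Everything in it (the model, the feature sizes, the data) is invented for illustration; it's just plain supervised learning on logged human (observation, action) pairs, not anything from the article.

```python
# Hypothetical behaviour-cloning loop: learn to copy logged human driving.
# All names and shapes are illustrative, not Tesla's actual setup.
import torch
import torch.nn as nn

# Pretend each observation is a 64-dim feature vector from the cameras,
# and each label is the steering/throttle the human applied at that moment.
obs = torch.randn(1024, 64)          # logged observations
human_actions = torch.randn(1024, 2) # [steering, throttle] from the human

policy = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 2),               # predicted [steering, throttle]
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    pred = policy(obs)
    # "Monkey see, monkey do": penalise any difference from what the human did.
    loss = nn.functional.mse_loss(pred, human_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

It also makes the two objections above concrete: the model only ever sees situations humans actually logged, and when it misbehaves there is no hand-written rule you can step through to find the bug.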
 
It's laughable how far behind they are. You can't say I didn't warn you people!

Yes. It's obvious that Tesla has problems with timelines, such as delivery dates, production numbers, the AP1 activation date, the initial limited AP2 activation date, and subsequent mass releases...

That was further validated when Tesla missed its 2017 autonomous coast-to-coast demo.

So why did I buy the Tesla system when I know it's "behind"?

Because driving a "behind"-the-race Tesla system is much better than not being able to drive a way-ahead-of-the-race Waymo or Cruise system at all...

I can use Autopilot on almost any road now, but Cadillac Super Cruise can't be used on many roads, including highway construction zones.

The Tesla system may be "behind," but it's accessible to its users.
 
So, it’s interesting that Tesla is working on their own radar. Makes sense as the current automotive radar has limitations with parked cars and other stationary objects. If Tesla is successful in making a radar that can see parked cars, just that alone will be a huge advantage for Tesla.
 
I told you people!

> Under Mr. Bowers, the simulation and maps teams are both still in their infancy, said a person familiar with the situation.

> Detailed road maps, on the other hand, are in an even earlier stage at Tesla. These are different from surface-level navigation maps from Google that Tesla owners can use in their vehicle.

> The more detailed maps Tesla is building rely on image data that’s collected by Tesla vehicles on the road, in combination with GPS. In the future, these maps might be used to spot construction zones or other hazards and communicate them to other Tesla vehicles so that the Autopilot driving system can avoid them automatically.

This one especially is hilarious!

> The system won’t be able to handle general city driving for years, this person estimated. In public comments Mr. Musk has implied such non-highway capabilities are in the works, but he has long offered a more aggressive timeline for automated city driving, saying it will happen a year or so from now.
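Setting the snark aside for a second, the fleet-sourced hazard map described in that quote could plausibly be as simple as aggregating GPS-tagged detections from many cars and only publishing the ones several cars agree on. A speculative sketch; the tile size, report format, and threshold are all my own inventions, not anything from the article:

```python
# Speculative sketch of fleet-sourced hazard mapping: many cars report
# GPS-tagged detections; a hazard is published once enough cars agree.
from collections import defaultdict

TILE = 0.001       # ~100 m grid cell in degrees (made-up resolution)
MIN_REPORTS = 3    # made-up agreement threshold

def tile_key(lat, lon):
    """Snap a GPS fix to a coarse map tile."""
    return (round(lat / TILE), round(lon / TILE))

reports = defaultdict(int)  # (tile, hazard_type) -> number of cars reporting

def ingest(lat, lon, hazard_type):
    reports[(tile_key(lat, lon), hazard_type)] += 1

def published_hazards():
    """Hazards confirmed by enough independent reports to push to the fleet."""
    return [key for key, n in reports.items() if n >= MIN_REPORTS]

# A few cars spot cones at roughly the same spot:
for lat, lon in [(37.4851, -122.1503), (37.4852, -122.1504), (37.4850, -122.1502)]:
    ingest(lat, lon, "construction_zone")

print(published_hazards())  # [((37485, -122150), 'construction_zone')]
```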
 

So, I read the articles. Interesting, but not interesting enough to pay $40/month.

With regard to behavior cloning, what the author means is training a neural net through real-world and simulator training. Basically, some people think that it will be difficult, if not impossible, to have a really good AP system if you only rely on standard NN training. Waymo, for instance, has a LOT of hard-coded rules about what to do in particular situations (for example, stop at a stop sign, plus an algorithm to determine when it is safe to proceed). I KNOW that current Tesla AP ALSO has hard-coded rules. Karpathy is trying to replace hard-coded rules with NN learning.
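To make that contrast concrete, here is a toy sketch, entirely my own construction rather than anything from the article or Tesla's code: the first function is the explicit, hand-written kind of rule, and the second defers the same stop-sign decision to a trained network.

```python
# Toy contrast between a hand-written rule and a learned policy.
# Both decide whether to proceed at a stop sign; everything here is invented.

def proceed_hard_coded(speed_mps, stopped_duration_s, cross_traffic_gap_s):
    """Explicit, human-written rule: stop fully, then go if the gap is big enough."""
    if speed_mps > 0.1:
        return False                      # haven't actually stopped yet
    if stopped_duration_s < 1.0:
        return False                      # mandatory pause at the sign
    return cross_traffic_gap_s > 4.0      # hand-tuned safety margin

def proceed_learned(policy_net, sensor_features):
    """Learned rule: a trained network scores the same decision from raw features."""
    return policy_net(sensor_features) > 0.5

# Example with the hard-coded version:
print(proceed_hard_coded(speed_mps=0.0, stopped_duration_s=1.5, cross_traffic_gap_s=6.0))  # True
```

The appeal of the second version is that the thresholds nobody can really justify (1.0 s, 4.0 s) stop being anyone's problem; the cost is that when it does the wrong thing, there is no line of code to point at.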

I know some people think this will run into limitations. Maybe. I know for myself, I don't actually do a lot of conscious thinking when I come to a stop-sign intersection. I used to think a lot about it when I was first learning how to drive, but not anymore. Frankly, a non-conscious part of my brain makes the decisions for me most of the time when driving. I have a bad habit of thinking about other things while driving and letting that not-very-conscious part of my brain do the actual driving.

True story. A friend of mine was driving south on the freeway in San Diego and he got onto a cell phone call. He was so absorbed in the call he didn’t realize he had made it all the way to the Mexico border, with no way to turn back. He ended up having to go into Mexico and come back. The point is that we ourselves use a part of our brains to drive that really could be considered a not very smart autopilot. Just like a Tesla AP.
 
> The system won’t be able to handle general city driving for years, this person estimated. In public comments Mr. Musk has implied such non-highway capabilities are in the works, but he has long offered a more aggressive timeline for automated city driving, saying it will happen a year or so from now.

But, we knew that.

Most long-term owners/followers know Elon over-promises not just accidentally, but as a general rule.

They don't need reminding any more than a woman who dates a "bad boy" needs a reminder.

It might be baffling behavior, but owners have their reasons. Mine was that it wasn't really that important as long as I had good, working TACC. So far that seems to be the case.

I'll re-evaluate when there is an EV with a better autonomous driving system available for purchase in the US. What Tesla has right now is ahead of what any other company has. The only thing that would even get close is a Porsche Taycan with something like Super Cruise. That would be pretty compelling.
 
> With regard to behavior cloning, what the author means is training a neural net through real-world and simulator training. ... Karpathy is trying to replace hard-coded rules with NN learning.

Those aren't hard-coded rules. I think you're confusing what the author is trying to relay. In fact, it's the author himself who misclassified holistic path planning as behavioral cloning. In behavioral cloning you go from pixels directly to actuator control; there is nothing in between. Therefore you can't have another NN doing path prediction on other vehicles (which he says Tesla is doing), because there would be nowhere to feed its output.

But with HPP, it's just another input to your control algorithm.
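In code terms, the distinction being drawn is roughly the following. This is just a sketch of the two architectures as I understand the post, with made-up module names, not Tesla's actual pipeline:

```python
# Sketch of the architectural difference described above (all names invented).

def behaviour_cloning_step(pixels, end_to_end_net):
    """Pure behaviour cloning: pixels go straight to actuator commands.
    There is no intermediate prediction you could feed another module."""
    steering, throttle = end_to_end_net(pixels)
    return steering, throttle

def holistic_path_planning_step(pixels, perception, predictor, controller):
    """Modular pipeline: perception and a prediction NN produce intermediate
    outputs, and the controller consumes them as just more inputs."""
    objects = perception(pixels)                 # detected cars/pedestrians/cyclists
    predicted_paths = predictor(objects)         # where each agent is likely to go
    return controller(objects, predicted_paths)  # steering/throttle from the planner
```

Which is the crux of the objection: a system that also runs a separate prediction network has, by definition, something between the pixels and the actuators, so it isn't pure behavioural cloning.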
 
> So, it’s interesting that Tesla is working on their own radar. Makes sense as the current automotive radar has limitations with parked cars and other stationary objects.

This is what Elon said a few days ago about the future of radar: they don't need it:

[Screenshot of Elon Musk's comments on radar]
 
> If Tesla is successful in making a radar that can see parked cars, just that alone will be a huge advantage for Tesla.

Radar can see parked cars. It can see all sorts of stationary objects; there may be hundreds of returns to every ping from stationary objects. These can include signs alongside the road, overhead signs, overpasses, nearby buildings, and the road surface itself. The problem is that radar needs to filter out almost all of this when travelling at high speeds to avoid false positives; automotive radar can't tell you the exact shape and position of objects, only their relative velocity (it is very good at this), their estimated center of mass, and a rough indicator of "size," which is actually strongly influenced by the material and surface characteristics that govern the strength of the return.

So radar systems can totally, easily detect stopped cars in your lane. What they can't do is discriminate easily between stopped cars in your lane and an overpass, or something almost but not quite in your lane (which happens a lot). The only way to improve this is to go to radar with more antennas and better resolution, particularly better vertical resolution. Or you could just, you know, use lidar.
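As a rough illustration of that filtering problem (the thresholds and fields are invented for the example, not how any production radar stack actually works): a return whose range rate is roughly the negative of your own speed is "stationary", and at highway speed a naive tracker throws exactly those away.

```python
# Rough illustration of why stationary returns get filtered at speed.
# Thresholds and fields are invented for the example.

def is_stationary(return_range_rate_mps, ego_speed_mps, tol=1.0):
    """A stationary object closes on you at roughly your own speed,
    so its measured range rate is about -ego_speed."""
    return abs(return_range_rate_mps + ego_speed_mps) < tol

def filter_returns(returns, ego_speed_mps, keep_stationary_below_mps=15.0):
    """Naive tracker: at low speed keep everything, at highway speed drop
    stationary returns (overpasses, signs... and, unfortunately, stopped cars)."""
    if ego_speed_mps < keep_stationary_below_mps:
        return returns
    return [r for r in returns
            if not is_stationary(r["range_rate"], ego_speed_mps)]

ego = 30.0  # ~108 km/h
returns = [
    {"id": "overpass",    "range_rate": -30.0},  # stationary structure
    {"id": "stopped_car", "range_rate": -29.8},  # stationary car in lane
    {"id": "lead_car",    "range_rate": -2.0},   # moving vehicle ahead
]
print([r["id"] for r in filter_returns(returns, ego)])  # ['lead_car']
```

The "more antennas, better vertical resolution" point is about being able to tell the overpass and the stopped car apart instead of discarding both.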
 
> Under Mr. Bowers, the simulation and maps teams are both still in their infancy, said a person familiar with the situation.

> Detailed road maps, on the other hand, are in an even earlier stage at Tesla. These are different from surface-level navigation maps from Google that Tesla owners can use in their vehicle.

It is really very disturbing to me that Tesla has only recently started using simulation to test. How do they test their Autopilot code prior to release to ensure that it's safe in the billions of different situations it might encounter out in the real world? Every other AV company's answer to this is (at least in part) rigorous testing in simulation. Tesla, on the other hand, I'm sure does some testing on a test track, then releases to the early access program testers, then calls it ready for wide release. Staggering.
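For comparison, the simulation testing other AV companies describe is essentially a large scenario-based regression suite. A minimal sketch of the idea, with the scenario format and pass criterion invented for illustration:

```python
# Minimal sketch of scenario-based simulation testing (everything invented).

def run_scenario(planner, scenario, steps=100, dt=0.1):
    """Step a (fake) simulator and check the planner never violates the scenario's rule."""
    state = scenario["initial_state"]
    for _ in range(steps):
        action = planner(state)
        state = scenario["dynamics"](state, action, dt)
        if not scenario["safety_check"](state):
            return False
    return True

# e.g. a "stopped car ahead" scenario:
scenario = {
    "initial_state": {"ego_pos": 0.0, "ego_speed": 30.0, "obstacle_pos": 120.0},
    "dynamics": lambda s, a, dt: {**s,
        "ego_pos": s["ego_pos"] + s["ego_speed"] * dt,
        "ego_speed": max(0.0, s["ego_speed"] + a * dt)},
    "safety_check": lambda s: s["ego_pos"] < s["obstacle_pos"],  # never hit the stopped car
}

brake_hard = lambda state: -8.0   # trivial "planner" that just brakes
print(run_scenario(brake_hard, scenario))  # True if it stops in time
```

Real suites run many thousands of such scenarios against every build; the complaint here is that nothing in the article suggests Tesla has that yet.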

As for HD maps... it sounds like they're going in the direction of geo-fencing for FSD to me. Just like all those other AV companies that Tesla fans like to criticize for being geo-fenced...
 
...very disturbing...

I think Tesla relies heavily on Behavior Cloning while others do not.

What that needs is a lot of humans doing the driving, letting the system effectively write its own code to replicate human driving.

Tesla relies on the fleet of consumers while others rely on a fleet of employees (and thus a simulator is more important for a small fleet of employees, and not a priority for a large fleet of consumers).
 
> Tesla relies on the fleet of consumers while others rely on a fleet of employees...

The question is, how do they test that their system -- however it's implemented, whether with a NN or hand-coded, behavior cloning or traditional motion planning and control -- how do they test that it is safe in a million different situations that it may encounter? The answer appears to be: they test it on their customers.
 