The Information reports new stuff about Autopilot and autonomy

I am just glad that they are on the right track now.

I will believe they are on the right track with EAP when the car feels safer with Autosteer on than with it off. That is not the case, and in fact I feel like V9 is a regression in Autosteer safety -- particularly with respect to cresting hills. Later versions of V8 were good at hill crests, but V9 wobbles the wheel this way and that on a crest in a way that makes me extremely nervous.

So far TACC remains the only Autopilot feature that I feel actually makes me safer. The rest are still just toys. And I'm still only talking about EAP here, never mind FSD.

A single credible example of EAP executing a side collision avoidance maneuver would do a lot to convince me that the Autopilot system really is safer than any of the other current-generation advanced ACC/AEB systems available from other manufacturers. My experience tells me that I am 100% responsible for side collision avoidance, whether Autosteer is engaged or not.
 
I have no doubt that if Tesla had done things right from the start, we would probably have FSD by now.

Just out of curiosity, what is your basis for this statement? You say we would probably have FSD by now, which is an assertion that the likelihood is above 50%. As a machine learning expert you must be familiar with basic statistics, so what is the evidence on which you base this assessment? What are your priors? What's your training data set?

As far as I know, we have N=0 for observations of how long it takes to develop FSD. Waymo is the best example we have but they're not quite done yet so we're still at N=0. But if you did want to use Waymo as priors in the form of a lower bound on how much effort it is, you'd have to estimate that Tesla would still be several years behind just based on when they started.

Of course your assumption is that this process will be much faster with deep learning[1]. Again, N=0 on that so far.

Or... perhaps you, like Elon, believe you know what it takes to do FSD and can estimate this without priors. Well we can see where that got Elon.

[1] I should also point out that Waymo does use deep learning, and they're still not done.
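
To make the point about priors concrete, here is a toy sketch (my own illustration, nothing more): assume, purely for argument's sake, an exponential model for "years from program start to FSD", and treat Waymo as one censored observation (started around 2009, still not finished). The posterior barely moves from whatever prior you chose, so "probably by now" is a statement about your prior, not about evidence.

Code:
# Toy illustration only: a simple grid Bayes update for "mean years to
# develop FSD" with zero completed programs and one censored observation
# (Waymo, started ~2009 and still not done).
import numpy as np

mean_years = np.linspace(1, 100, 1000)   # hypothetical mean time to FSD
prior = np.full(mean_years.shape, 1.0)   # flat prior -- substitute your own
prior /= prior.sum()

elapsed = 9.0                            # Waymo: roughly 9 years and counting
# P(not finished after 9 years | mean m) under an exponential model
likelihood = np.exp(-elapsed / mean_years)

posterior = prior * likelihood
posterior /= posterior.sum()

print("prior mean:     %.1f years" % (mean_years * prior).sum())
print("posterior mean: %.1f years" % (mean_years * posterior).sum())
# The lone censored observation mostly rules out very short timelines;
# everything else is the prior you walked in with.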
 
I will believe they are on the right track with EAP when the car feels safer with Autosteer on than with it off. That is not the case, and in fact I feel like V9 is a regression in Autosteer safety -- particularly with respect to cresting hills. Later versions of V8 were good at hill crests, but V9 wobbles the wheel this way and that on a crest in a way that makes me extremely nervous.

I appreciate your perspective. Personally, I have not noticed any regression in Autosteer in V9. It seems the same or slightly better to me, and I am getting fewer wobbles. I have a crest on my commute to work every day and V9 handles it fine every time.

So far TACC remains the only Autopilot feature that I feel actually makes me safer. The rest are still just toys. And I'm still only talking about EAP here, never mind FSD.

Interesting. I had the opposite feeling. My first impressions of TACC a few months back, when I first got my car, were downright scary. On a couple of occasions, coming up to a red light with cars stopped, my car would seem to keep racing towards them at 50 mph and I would have to slam on the brakes in a panic. But that is gone with V9. TACC is very smooth now and slows down for stopped cars at red lights. Now I trust TACC and it has been great, but that was not my impression in the beginning.

I do think people's personal experiences with AP probably affect their view on FSD. For me personally, V9 EAP has been superb so far. I know AP's limitations of course, but in many situations, it feels like "almost self-driving" to me. It is really solid. So for me, FSD feels closer because of how good AP has been in my experience.
 
It is really disturbing to me that Tesla has only recently started using simulation to test. How do they test their Autopilot code prior to release to ensure that it's safe in the billions of different situations it might encounter out in the real world? Every other AV company's answer to this is (at least in part) rigorous testing in simulation. Tesla, on the other hand, I'm sure does some testing on a test track, then releases to the early access program testers, then calls it ready for wide release. Staggering.
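
Here is a minimal sketch of the kind of scenario-based simulation regression testing I mean. Everything in it is hypothetical (the planner stub, the thresholds, the scenarios); it's just an illustration of how you can cover lots of situations in software before ever touching a test track:

Code:
# Hypothetical sketch of scenario-based regression testing in simulation.
# `plan_step` is a stand-in for whatever longitudinal planner is under test;
# the scenarios are illustrative, not anyone's real test suite.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    ego_speed: float      # m/s at start
    lead_distance: float  # m to lead vehicle
    lead_speed: float     # m/s (0.0 = stopped car, e.g. at a red light)

def plan_step(ego_speed, gap, lead_speed, dt=0.1):
    """Stand-in planner: brake hard if time-to-collision drops below 3 s."""
    closing = ego_speed - lead_speed
    ttc = gap / closing if closing > 0 else float("inf")
    accel = -6.0 if ttc < 3.0 else 0.0
    ego_speed = max(0.0, ego_speed + accel * dt)
    gap -= (ego_speed - lead_speed) * dt
    return ego_speed, gap

def passes(scenario, horizon_s=15.0, dt=0.1):
    ego, gap = scenario.ego_speed, scenario.lead_distance
    for _ in range(int(horizon_s / dt)):
        ego, gap = plan_step(ego, gap, scenario.lead_speed, dt)
        if gap <= 0.0:
            return False  # simulated collision
    return True

suite = [
    Scenario("stopped car at a red light, 50 mph approach", 22.3, 120.0, 0.0),
    Scenario("slow lead vehicle on the highway", 29.0, 60.0, 20.0),
]
for s in suite:
    print(f"{s.name}: {'PASS' if passes(s) else 'FAIL'}")

The real value of this approach is that every time something goes wrong in the field you can add a new scenario and re-run the whole suite on every build, which no amount of test-track or early-access driving can replicate.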

According to a 2017 report, "Tesla conducts testing to develop autonomous vehicles via simulation, in laboratories, on test tracks, and on public roads in various locations around the world. Additionally, because Tesla (...) has a fleet of hundreds of thousands of customer-owned vehicles that test autonomous technology in “shadow-mode” during their normal operation, Tesla is able to use billions of miles of real-world driving data to develop its autonomous technology. In “shadow mode,” features run in the background without actuating vehicle controls in order to provide data on how the features would perform in real world and real time conditions. This data allows Tesla to safely compare self-driving features not only to our existing Autopilot advanced driver assistance system, but also to how drivers actually drive in a wide variety of road conditions and situations."

Is this rigorous enough?
Simulation, Lab, Track, Early Access Tier 1, & Early Access Tier 2.
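
To be concrete about what the quote is describing: conceptually, "shadow mode" would be something very simple. The candidate feature computes what it would do, that proposal is compared with what the driver actually did, and only disagreements get logged; nothing is actuated. A rough sketch (my own illustration of the concept, with hypothetical names, not a claim about how or whether Tesla actually implements it):

Code:
# Conceptual sketch of "shadow mode" as described in the quote above.
# Hypothetical names throughout; nothing here ever actuates the vehicle.
def shadow_step(sensor_frame, driver_action, candidate_feature, disagreement_log):
    proposed = candidate_feature(sensor_frame)   # what the feature WOULD do
    steering_delta = abs(proposed["steering"] - driver_action["steering"])
    accel_delta = abs(proposed["accel"] - driver_action["accel"])
    if steering_delta > 0.1 or accel_delta > 1.0:
        # Log the frame so the disagreement can be reviewed offline.
        disagreement_log.append({
            "frame": sensor_frame,
            "driver": driver_action,
            "proposed": proposed,
        })
    return driver_action   # only the driver's input reaches the controls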
 
According to a 2017 report, "Tesla conducts testing to develop autonomous vehicles via simulation, in laboratories, on test tracks, and on public roads in various locations around the world. Additionally, because Tesla (...) has a fleet of hundreds of thousands of customer-owned vehicles that test autonomous technology in “shadow-mode” during their normal operation, Tesla is able to use billions of miles of real-world driving data to develop its autonomous technology. In “shadow mode,” features run in the background without actuating vehicle controls in order to provide data on how the features would perform in real world and real time conditions. This data allows Tesla to safely compare self-driving features not only to our existing Autopilot advanced driver assistance system, but also to how drivers actually drive in a wide variety of road conditions and situations."

Is this rigorous enough?
Simulation, Lab, Track, Early Access Tier 1, & Early Access Tier 2.

None of this is true. The entire point of the article by The Information is that Tesla is spreading false info about their capabilities. Quoting a statement from Tesla only PROVES that point. The statement itself is full of stuff we 100% know ISN'T happening: billions of miles, shadow mode, etc.
 
Is this rigorous enough?
Simulation, Lab, Track, Early Access Tier 1, & Early Access Tier 2.

My point was that according to the article they only recently began any kind of serious simulation testing. None of the rest of those testing types can test a truly wide variety of conditions; simulation should have been their first priority from the beginning.

Also "shadow mode" according to the people who have rooted their cars is a myth.
 
My point was that according to the article they only recently began any kind of serious simulation testing. None of the rest of those testing types can test a truly wide variety of conditions; simulation should have been their first priority from the beginning.

Also "shadow mode" according to the people who have rooted their cars is a myth.
I've read the article. Actually, they've got a few things wrong, like the weekly meeting with Elon, the simulation stuff...
So take some of the stuff you hear with a grain of salt.
Ultimately, what matters is where Tesla is compared to the AP of the previous month: are they continuously improving?
 
Ultimately, what matters is where Tesla is compared to the AP of the previous month: are they continuously improving?

V9 is better in some ways (blind spot monitoring, sort of) and worse in others for me. Nighttime driving and hill cresting have regressed for me. V9 AP is better in every way than V8 was at the beginning of this year, though. (The V9 UI and climate controls are way worse, but that has nothing to do with AP.)
 
The problem with Blader is not that he criticizes Tesla or, as he's doing in this thread, points out how ridiculous it is to believe anything Elon says about AP given his past track record on AP promises/predictions. Blader's only problem is his obsession with Mobileye, but personally I'm willing to simply account for that bias and otherwise give his comments the respect they deserve. He has been right about AP (never mind Mobileye) for quite a long time... unlike anybody who has ever taken Elon at his word about anything having to do with AP.

I don't mind disagreement or different perspectives at all, in fact I relish that. Groupthink is bad. Homogeneity of thought is bad. It is a positive contribution if fans of autonomy companies besides Tesla participate in this forum. Or if skeptics of Tesla participate. Bladerskb has contributed some value for sure.

What I can't tolerate are statements that are either lies or unwitting falsehoods about me or other people in the forum. Blader literally just makes stuff up about me; either he's lying or he actually can't tell the difference between truth and falsehood. It is hard to tell sometimes whether someone is lying or confabulating, that is, making stuff up that they actually believe. I have corrected Blader on this before but his behaviour has not changed. This crosses the boundary of decency into trolling or abuse. If you lie about what people have said, or if you can't distinguish between the truth and a lie, it is impossible to have a reasonable discussion.

Most of the views/claims that Blader attributed to me are just completely made up. I never said most of those things. You can't just make something up and claim another person said that. That's craziness. If I just lie and say, "Blader said Mobileye would have 10 billion fully autonomous cars by 2009!!" (which is not true) that changes the conversation from a simple disagreement into a ridiculous smear campaign. I think people who engage in smear campaigns should be banned from the forum, or at least ignored by everyone.
 
Most of the views/claims that Blader attributed to me are just completely made up. I never said most of those things. You can't just make something up and claim another person said that. That's craziness. If I just lie and say, "Blader said Mobileye would have 10 billion fully autonomous cars by 2009!!" (which is not true) that changes the conversation from a simple disagreement into a ridiculous smear campaign. I think people who engage in smear campaigns should be banned from the forum, or at least ignored by everyone.

Ah, the good ol' straw man argument. Honestly, I also like the different perspectives; it's a big reason I like TMC more than Reddit's echo chamber. I just hope people can be more civil about it. I don't think that Tesla will have FSD by next year, but I also think that they're making good progress towards it.

As with all things, I think the truth lies somewhere in the middle. Tesla is not at the forefront of autonomous vehicle tech, but I can see how they could make strides towards that goal. It's interesting that Tesla is going for a "general solution" for autonomy and if it works that would be huge. Only time will tell if they can make good on their claims.

Just my $0.02.

Also, with that new radar they're supposedly developing, I wonder if that will be able to be retrofitted to current vehicles.
 
My point was that according to the article they only recently began any kind of serious simulation testing. None of the rest of those testing types can test a truly wide variety of conditions; simulation should have been their first priority from the beginning.

Also "shadow mode" according to the people who have rooted their cars is a myth.

You can see from Anderson's discussion around 9:20 how they use several simulations in the development cycle (I dunno if "several" counts as serious). This is from 2016 at MIT:

As for "shadow mode", you can check the MIT lectures on self-driving, or this: Tesla Autopilot Miles | MIT Human-Centered AI. Even @jimmy_d sometimes mentions code that is not used in the actual driving. Is it really hard to imagine Tesla testing a feature in the background of AP? It's not something super advanced, so why even argue about it?

As for the number of miles, this is from 2016; you can extrapolate to 2017 to get a sense of the numbers.

My point was to address your point that Tesla is not 'rigorously' testing the software before releasing it to customers. I think they're really trying on that front.
 
It looks to me like it is very hard to find balanced voices in discussions such as this. Some folks are usually very optimistic about everything Tesla and often suspicious of anything the competition does, and some folks are exactly the other way around. Less tribalism would be nice for such technical talks?

You can see it in how statements from Tesla and, say, Mobileye are held to different standards. Elon Musk's now often-quoted talk of a generalized solution or of deep learning gets an appreciation that I cannot see the same people giving when the same things are said by Mobileye, which is also designing a generalized solution and uses deep learning, as does Waymo. And the other way around, of course, for some folks.

Everyone even geofences out on the road, but only some of that is talked about. Tesla also geofences NoA, and the same would seem even more likely for other advanced features where there is no responsible driver. Different state laws alone would be a reason for that?
 
My point was that according to the article they only recently began any kind of serious simulation testing. None of the rest of those testing types can test a truly wide variety of conditions; simulation should have been their first priority from the beginning.

“Recently” could mean 2 years. The article doesn’t say. I get frustrated with vague stuff like this. I want hard numbers!

It looks to me like it is very hard to find balanced voices in discussions such as this. Some folks are usually very optimistic about everything Tesla and often suspicious of anything the competition does, and some folks are exactly the other way around. Less tribalism would be nice for such technical talks?

Yes, I agree. The tribalism on both sides gets in the way of having interesting, informative technical discussions. The autonomous vehicles subforum is generally pretty good in my experience, although of course since it’s TMC the user base is skewed towards Tesla fans. I would love a more diverse mix of views. Could we reach out to other fan communities or technical communities on Facebook, Reddit, Twitter, etc. and invite them to participate?

The investor subforum is scary. Maybe because money is involved. There is so much hostility to discussing Tesla’s potential weaknesses or risks, or anything that could be interpreted as negative for Tesla. You can’t draw up a pros and cons list; the list has to be all pros and no cons. You can’t believe in even a very weak version of the efficient markets hypothesis and argue that Tesla is not undervalued based on past cash flow, ignoring future growth. There is widespread credence in traditional technical analysis, which is, if not outright pseudoscience, an unsubstantiated approach to predicting stock prices (with strong reasons for skepticism). It is a mirror image of the people who think the Gigafactory is a Hollywood movie set and that Teslas are exploding in flames left and right. Tribalism can affect anyone, regardless of whether their cause is good or their view is right.

Talking to someone who disagrees with you can be an enjoyable experience if you are both respectful, polite, and open-minded. It is a great way to learn and advance your thinking. Disagreement is about sharing information and ideas that someone might not be aware of. It’s not about fighting or competing to “win”.
 
You can see from Anderson's discussion around 9:20 how they use several simulations in the development cycle (I dunno if "several" counts as serious). This is from 2016 at MIT:

As for "shadow mode", you can check the MIT lectures on self-driving, or this: Tesla Autopilot Miles | MIT Human-Centered AI. Even @jimmy_d sometimes mentions code that is not used in the actual driving. Is it really hard to imagine Tesla testing a feature in the background of AP? It's not something super advanced, so why even argue about it?

As for the number of miles, this is from 2016; you can extrapolate to 2017 to get a sense of the numbers.

My point was to address your point that Tesla is not 'rigorously' testing the software before releasing it to customers. I think they're really trying on that front.

  • The existence of a model/simulation doesn't mean it is something credible or useful. The presentation is very vague on that point.
  • Sterling left Tesla for a reason.
  • "Shadow mode" as a means to collect millions/billions of miles of driving data and send it back to Tesla has been pretty well debunked. Is it hard to imaging Tesla doing it? No. Why the argument? Because it isn't true. They may collect snippets of AP info here or there but there is no evidence that Tesla is doing what many think they are with "shadow mode".
  • The subject article of this thread is pretty damning in terms of their development/testing of AP software, their process is definitely not rigorous (if the article is to be taken as-is). This is disheartening to me because I want to see Tesla succeed, as most here do.
 