Don't take your hands off the wheel

The manual is there to limit liability. If Tesla did not want you to use AutoPilot on certain road types, it would not function at all on those roads. Since they can limit Navigate on Autopilot to freeways, they could limit the whole system.
Agree completely but evidently we are missing the point somehow. Oh and AP and NOA are completely unrelated as are AP and upcoming FSD. o_Oo_Oo_O
 
On the contrary- it's explicitly how they expect it to be used.

Which is why it behaves so oddly/poorly when used elsewhere.

"Huh so you're saying this system that explicitly tells you not to use it with oncoming traffic does things you don't like when used with oncoming traffic? WHAT A SHOCK NOBODY COULD HAVE PREDICTED!"





Again, though- it is.

If you want to ignore reality- knock yourself out. But it's literally why folks are having these issues- using something in a place it's explicitly not designed to be used- and then bitching when something goes wrong.


The fact that it "works fine some of the time" in those places doesn't change any of that.








I mean, your actual point was:




Which is demonstrably untrue in a very black-and-white way, since "AP in the city" is explicitly not a feature that is, in any way, actually deployed to any consumer vehicle at all.

hence there's 0 "reasonable" about said assumption.

If you think pointing that out reinforces it...uh...ok....

@Knightshade Please stop turning this thread into your personal battleground. As the OP, I never said anything about city streets. Unless you are a senior AutoPilot developer at Tesla you aren't qualified to make the statements you make. Your posts are off topic and I'm going to start requesting they be removed from this thread if you continue in this manner.
 
Meanwhile, in the real world, a lot of us are using [TACC + AS] as an alpha-level, surface-street quasi-FSD. Not sure where it fits into the current scholastic head-butting, but we have a lot of "boulevard"-type stretches where it's helpful.

My favorite surprise was once when it went into unprecedentedly loud alerts -- and I'm almost certain it wasn't in AS or TACC ... and right around the corner there was a city police car lying in wait. Talk about non-binary. I struggle with trying to understand that to this day.
 
On the contrary- it's explicitly how they expect it to be used.
ROFL

You keep telling yourself that; turnem is right. There's no use trying to reason with you on this. It is so fundamental and obvious that Tesla/Musk expect EAP to be used outside of limited-access divided highways that there's no sane discussion to be had without acknowledging it. :(
 
Meanwhile, in the real world, a lot of us are using [TACC + AS] as an alpha-level, surface-street quasi-FSD.

I mean, no, you aren't. Because there's 0 actual programming to support that use.

You're not using an "alpha" version of FSD at all. You're using the advanced, nearly-entirely-complete version of the highway software in places that aren't highways.

The inability to grasp the difference between those things appears to be at the root of all the complaints that it doesn't work "as expected" in places it's explicitly not designed, even slightly or in any way, to work as expected.


Again, it absolutely can work in those situations- often well, and for a considerable length of time when nothing is happening that's really outside the "expected highway scope," like oncoming traffic turning in front of you. But when that kind of input does occur, anybody acting like "Well, it's obviously not ready yet" is totally misunderstanding what's actually going on.



ROFL

You keep telling yourself that; turnem is right. No reasoning with you on this. :(

It's not so much me telling you that; it's literally Tesla telling you that, right in the manual.

You're free to ignore reality all you wish of course.


As the OP, I never said anything about city streets.

Which is probably why it wasn't your original post I quoted- instead it was one of the several folks in this thread who did bring those up as "other" examples of the system not working right- to point out they were trying to use it someplace it's not supposed to be used at all, so expecting it to "work right" there was nonsensical.


Unless you are a senior AutoPilot developer at Tesla you aren't qualified to make the statements you make.

That makes no sense. I'm literally quoting from the user manual for autopilot, in response to folks who seem upset or surprised when the system doesn't operate in conditions the manufacturer specifically tells them it's not meant to operate in.

If you're upset about others bringing up AP issues on an AP thread you started, you should probably start with the ones who raised those issues, rather than with those explaining why they're happening to them.


But if you'd like to go back to your original point- keeping your hands on the wheel and ready to adjust the car at any time is also explicitly what Tesla tells you to do in the manual- so your thread title seems to just reinforce what the manual already tells you.



Agree completely but evidently we are missing the point somehow. Oh and AP and NOA are completely unrelated as are AP and upcoming FSD. o_Oo_Oo_O


I mean, upcoming FSD literally is unrelated to AP other than they're both made by the same company.

One will use the old 2.x codebase and run on HW2.x (or likely in emulation mode of 2.x on HW3 cars).

The other will use a completely new, much larger and more advanced codebase and NN, and run natively, and exclusively, on HW3.
 
Last Thursday, I was headed home from San Francisco on 24 Eastbound. Went through the Caldecott tunnels. Was in the rightmost lane of the right tunnel. A couple of hundred feet before the end of the tunnel, Autopilot suddenly swerved right and hit the curb. I had my hand on the wheel and reacted quickly- quick enough that the only damage was a curbed rim and a messed-up section of my aero hubcap.

This was on 2019.12.1.1. I forgot to hit the steering-wheel button and say "Bug report: WTF happened." The next morning I received 2019.12.1.2, and Autopilot handled the same tunnel perfectly on Saturday.

I love my car, but I try to keep at least one hand on the wheel 99% of the time.

This is a great post. I had situations where errors happened and I reported bugs, only for them to be resolved the very next day. I tend to treat my car as a 7-year-old, and a lot of the corrections I make I report as bugs, like: (Report a bug: Tesla needs more training on considering on-ramp vehicles joining in)
(Report a bug: Car did not differentiate between me asking to merge into a lane vs. switching into the left lane, which almost caused it to cross 2 lanes in 1 swoop while getting on the ramp)
(Report a bug: Car just crashed) (they can go to the timestamp, get the logs, and find out why)


:) I'm thrilled by how fast Tesla learns in areas with more Tesla drivers!
 
My favorite surprise was once when it went into unprecedentedly loud alerts -- and I'm almost certain it wasn't in AS or TACC ... and right around the corner there was a city police car lying in wait. Talk about non-binary. I struggle with trying to understand that to this day.

It just occurred to me that maybe it was going nuts because the cop was using a radar gun? Possible?

Or because I was misunderstanding the true nature of AP vs. FSD?
 

1. Is anyone else uneasy that Lex Fridman's "paper" permitted self-selection of subjects with potential non-scientific motivations* to help prove the system is very safe by always being on their best behaviour while under evident camera surveillance? It's a bit like having a uniformed cop on the passenger seat in a study to determine if people ever jump red lights.

[ * possibilities include the (perhaps sub-conscious) desire to:
A. ensure AP which they have already paid for does not get regulated out of existence, equivalent to a loss of investment.
B. maintain or increase the image/share price of Tesla, in which they may be invested, emotionally and/or financially.
C. secure ego-stroking confirmation that they made a wise choice in purchasing this safe and safety-enhancing technology. ]

2. His introduction states:
This paper aims to show, by using an analysis of real-world driving in Autopilot-equipped Tesla vehicles, that patterns of decreased vigilance, while common in human-machine interaction paradigms, are not inherent to AI-assisted driving (also referred to as “Level 2”, “semi-autonomous”, and “partially automated” driving). One implication of this is that it may be possible to design AI-assisted vehicles that rely on humans for supervision in a way that will not necessarily lead to over-trust and significant vigilance decrement.
So, apart from designing an incoherent non-hypothesis and not even bothering to disguise his own-goal of confirmation bias, he also took a statistically insignificant sample from a large population, and the results are neither peer-reviewed nor published in any recognised scientific journal. Taken together, that for me translates to: "I wish to bolster Elon's claims for AP safety, now what can we cobble together under a scientific patina?" (A rough illustration of the sample-size problem follows this list.)

3. If the intro instead stated "This paper aims to investigate (using an analysis of real-world driving) whether patterns of decreased vigilance common in human-machine interaction paradigms are found in Tesla's L2 AI-assisted driving and discuss how these may best be mitigated", and the study had been done with access to the raw source data from Tesla, allowing him to pick a much larger group of random, anonymised AP owners from the fleet database without their knowledge and to compare their behaviour with that of a similarly selected control group without AP, I think rather different but more scientifically valuable conclusions would have been reached.

4. The fact that Fridman does not discuss whether he approached Tesla for such collaboration on the best source of data indicates to me that he probably did and they refused, which, if both parts were true, would reflect badly on each side.

5. Maybe I am just old-fashioned?
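
On the sample-size point in 2 above, here is a rough, purely illustrative sketch (the driver counts and "vigilant" tallies are invented, not taken from Fridman's paper or anywhere else) of how little a small, self-selected group can pin down a population-level rate, compared with a large random fleet sample:

```python
# Hypothetical illustration of the sample-size concern in point 2 above.
# None of these numbers come from Fridman's paper; they are made up purely
# to show how wide the uncertainty is for a small, self-selected group.
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Suppose 20 of 21 volunteer drivers were judged "vigilant" (made-up numbers):
print(wilson_interval(20, 21))       # ~ (0.77, 0.99) -- a spread of ~22 points

# The same observed rate in a hypothetical random fleet sample of 10,000:
print(wilson_interval(9524, 10000))  # ~ (0.948, 0.956) -- under 1 point
```

With a couple of dozen volunteers the 95% interval spans roughly twenty percentage points, which is why a sample like that carries so little weight next to fleet-scale data.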
 
The manual is there to limit liability. If Tesla did not want you to use AutoPilot on certain road types, it would not function at all on those roads. Since they can limit Navigate on Autopilot to freeways, they could limit the whole system.

If (any car manufacturer) didn't want you to go 120 mph in the U.S., then they wouldn't let you????

Kid: Well you left your keys out where I could get to them so I just assumed you WANTED me to take a joyride in your car.
 
This is a great post. I had situations where errors happened and I reported bugs, only for them to be resolved the very next day [...]

Omg my typos sorry everyone.
 
I think we can all agree on one thing here -
1. Is anyone else uneasy that Lex Fridman's "paper" permitted self-selection of subjects with potential non-scientific motivations to help prove the system is very safe by always being on their best behaviour while under evident camera surveillance? [...]

Putting aside all of the possible motivations, I would tend to agree that a single study (particularly one that is missing key criteria for well-designed studies) is not relevant in the grand scheme. It's very hard to compete with a meta-analysis of a group of peer-reviewed studies. Coming from MIT, however, I'm sure they realized this and had to settle for the best they could do under the circumstances.

My biggest issue is that we have two changing variables here: the first is that AP sometimes improves and sometimes exhibits reduced functionality release to release. The second is that humans are by nature highly variable, which is to say their level of trust in the system and their use of the system also change over time. By way of example, a person who is used to driving a particular route with AP engaged now receives a phone call and does not have a headset. Will they be as vigilant?

My own level of trust in the system has changed over time, and it has not been linear, or in one direction.

Elon himself readily admits that far more must be proven scientifically before we will see regulatory approval for FSD. That won't be accomplished with this type of study.
 
Since they can limit Navigate on Autopilot to freeways, they could limit the whole system.
They could, and it would take a huge chunk of value out of it because, at least right now, there are segments of what otherwise looks like an appropriate highway environment for NoAP that are not whitelisted. You'll be driving along, it'll warn you it is disabling itself ahead, and the road doesn't look any different before or after the short section where it is disabled.
1. Is anyone else uneasy that Lex Fridman's "paper" permitted self-selection of subjects with potential non-scientific motivations to help prove the system is very safe by always being on their best behaviour while under evident camera surveillance? [...]
I'm pretty sure you're wrong on his motivations. It seems he came in fairly skeptical.
 
I think we can all agree on one thing here -

[...]

Elon himself readily admits that far more must be proven scientifically before we will see regulatory approval for FSD. That won't be accomplished with this type of study.
It is too bad my comments about exactly that were snipped out when I was quoted- some uncertainty about what exactly will happen going forward, and that this isn't conclusive. :(

As for your last sentence: proofs are for mathematicians and alcohol. :p Science, and nominally regulation, are about levels of confidence. You can't really prove these things; you just need increasingly large chunks of increasingly controlled data to [potentially] increase confidence. This, of course, is very preliminary. However, if you don't think it takes a meaningful amount of air out of the assumption that automobile automation becomes very unsafe in the valley between manual and full AI- that there is no way to manage it and meaningfully mitigate the risk- then you're doing a very poor job of adjusting your priors, and "science-oriented" isn't a mantle you should be claiming for yourself.
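
To make the point about priors and confidence a bit more concrete, here is a minimal toy sketch (every number in it is invented; no Tesla, MIT, or fleet data involved) of how a Bayesian estimate of, say, a vigilance rate tightens as more controlled observations arrive:

```python
# Toy sketch of "adjusting your priors" -- all session counts are invented.
# Model "probability a driver stays vigilant during an AP session" with a
# Beta prior and watch the posterior tighten as more data comes in.
from math import sqrt

def beta_summary(alpha, beta):
    """Mean and standard deviation of a Beta(alpha, beta) distribution."""
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, sqrt(var)

alpha, beta = 1.0, 1.0            # flat prior: no opinion either way
for sessions, vigilant in [(20, 19), (200, 186), (2000, 1894)]:
    alpha += vigilant             # hypothetical vigilant sessions observed
    beta += sessions - vigilant   # hypothetical non-vigilant sessions observed
    mean, sd = beta_summary(alpha, beta)
    print(f"after {alpha + beta - 2:.0f} sessions: mean={mean:.3f}, sd={sd:.3f}")
```

The output is only qualitative: each batch of data narrows the spread, which is the sense in which confidence accumulates rather than being "proven" in one study.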
 
4. The fact that Fridman does not discuss whether he approached Tesla for such collaboration on the best source of data indicates to me that he probably did and they refused, which, if both parts were true, would reflect badly on each side.
My conspiracy theory is that the paper and the interview with Elon are a long con to get access to Tesla’s data.
 
My conspiracy theory is that the paper and the interview with Elon are a long con to get access to Tesla’s data.

Tesla is eventually going to need to provide its data, in the course of this development, to several trusted third parties. Keep in mind, no matter how trustworthy an organization may be, there are always going to be competing interests at play. Tesla has been withholding the data up until now at least partly because they believe people will spin the results against them. Given the human tendency to push back against disruptive technologies, I'm sure that is also true.

This is why it is so irrational that the medical industry, as an example (here in the US), relies heavily on studies run and published by the pharmaceutical companies themselves (their conclusions are often heavily weighted in their favor and designed to increase the bottom line ahead of other priorities, INCLUDING patient health). Several years ago, I read a book on the subject (Overdosed America) and it was an eye-opening experience. If the same thing occurs in the automotive industry, it will doom, or at least delay, FSD from the start.
 
I'm pretty sure you're wrong on his motivations. It seems he came in fairly skeptical.

I tried not to judge his personal motivations too harshly as I find him a very sympathetic character who in general is doing useful work, but he does seem a bit too emotionally attached to Tesla/Musk, for whom this "paper" appears mainly designed to do a solid.

My conspiracy theory is that the paper and the interview with Elon are a long con to get access to Tesla’s data.

LOL, let's hope it works and some serious output results.