
Automatic driving on city streets coming later this year!

The thing about the nags is that AP/EAP as they exist now require the driver to stay constantly alert, because the driver is responsible at all times for determining whether conditions are suitable for autonomous operation. Recognizing that hands on the wheel do not always apply detectable pressure, Tesla has decided not to nag the instant it no longer detects pressure on the wheel.

It's not about how long it's okay to take your hands off the wheel, or how long it's okay to be inattentive. It's about finding a reasonable time to alert you if it does not detect pressure on the wheel. Not because the car is safe for a given period of driver inattention, but because there's a reasonable amount of time during which the system might not detect your hands even though they are on the wheel.
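To make that distinction concrete: the nag interval is essentially a debounce on a noisy torque signal, not a "safe inattention" budget. Here's a minimal sketch, assuming torque-based hands-on-wheel detection; every name and threshold is invented for illustration and is not Tesla's actual logic:

```python
# Minimal sketch of a hands-on-wheel nag timer. All thresholds are
# invented for illustration; this is not Tesla's actual logic.

NAG_DELAY_S = 30.0          # assumed grace period before alerting
TORQUE_THRESHOLD_NM = 0.05  # assumed minimum detectable steering torque

def should_nag(torque_nm: float, seconds_since_torque: float) -> bool:
    """Alert only after torque has been undetected for a while,
    since hands on the wheel don't always apply measurable torque."""
    if abs(torque_nm) >= TORQUE_THRESHOLD_NM:
        return False  # hands detected; upstream code resets the timer
    return seconds_since_torque > NAG_DELAY_S
```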

So there is not a progression from ten seconds of inattention to one minute of inattention to ten minutes of inattention to take-a-nap. Rather there's a step from driver attention required to driver attention not required. A step from "Driver must determine if the car can drive safely" to "The car decides if it can drive safely." And that's the jump from L2 to L3.

There will be a progressive reduction in the frequency of driver interventions. And behind the scenes, the system will get better and better at recognizing when it needs the driver to intervene. When the system is better at determining whether driver intervention is needed than the driver is, the car has crossed a threshold: it is ready to be declared L3 and the driver can be allowed to stop paying attention. But some people feel that a distracted driver would be so bad at resuming control after being effectively a passenger in the driver's seat that L3 is inherently unsafe, and that companies should skip it and stay with L2 until the car is good enough for L4. That would effectively require the driver to remain alert even though the car can decide when it needs to relinquish control, because it's the only way to ensure that the driver is ready to resume control when called upon to do so. I'm not sure how I feel about this, but it's the argument against implementing L3.
 
As I have already stated several times, this isn't about SAE definitions. This is something I'm defining. You're acting like some kind of SAE police, and that isn't appreciated. This is about something that isn't quite Level 3, but that most would consider more than current Level 2. Nags every ten minutes would make a lot of people happy, and most would consider that more than Level 2.

The level doesn’t help me, and less frequent nags would likely not help me either. What would be useful is some expert-system knowledge in the Tesla FSD system: if Tesla would warn me that an “edge case” it may not handle well was approaching, I could be vigilant. It would help to know that a complex driving situation could soon arise. But I don’t think a neural net can add logic and likely future events to the system; it seems to only be able to respond to immediate observations. If it could predict that a cross street was coming and then warn me to be ready to take control, that would be better than reducing the frequency of nags.

The new 40.50.7 software seems to recognize more events on the road, but I don’t notice any predictive controls.

Is that a fatal flaw in achieving full self driving with Tesla’s approach?
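For what it's worth, the kind of warning described above wouldn't necessarily need the neural net to predict anything; it could come from map data. A minimal sketch, assuming the car has a list of upcoming route features (every name and threshold here is hypothetical, not any actual Tesla API):

```python
# Hypothetical map-based "edge case ahead" warning. None of these
# names correspond to any actual Tesla API.

WARN_DISTANCE_M = 200  # assumed lookahead distance
EDGE_CASE_FEATURES = {"cross_street", "unprotected_left", "roundabout"}

def upcoming_warnings(route_features):
    """route_features: list of (feature_type, distance_m) from map data."""
    return [
        f"{feature} in {distance} m -- be ready to take control"
        for feature, distance in route_features
        if feature in EDGE_CASE_FEATURES and distance <= WARN_DISTANCE_M
    ]
```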
 
The level doesn’t help me, and less frequent nags would likely not help me either. What would be useful is some expert-system knowledge in the Tesla FSD system. ...
Neural nets are just a tool. There's no reason they couldn't be used to predict the actions of other road users. Tesla talked at Autonomy Day about using a neural net to predict when other drivers would change lanes (though I swear that they haven't actually put it in my car!).
What you're describing is a Level 3 vehicle, one that would alert you to take over when it encounters a situation it can't handle. Though I guess you're proposing that it merely drop back to Level 2 instead of disabling entirely. I haven't seen that proposed before, but it seems theoretically possible.
 
Neural nets are just a tool. There's no reason they couldn't be used to predict the actions of other road users. ...

I'm not entirely sure, but I suspect that you're describing telepathy. Since I don't believe in telepathy in people or machines, I won't hold my breath for this.

What a Level 4 or 5 system does need to be able to do is map out how the trajectories of all the different vehicles will need to change to avoid coming too close to or colliding with each other, and prepare to alter its own trajectory. I.e.: That car is driving very close to the line, so that other car is likely to swerve into my lane to avoid it; what's a safe trajectory for me that will avoid a collision with that car or any of the others around me?

A Level 3 system only needs to be able to recognize that one car may swerve to avoid another, and alert the driver to take over with sufficient advance warning.

I think a separate issue for City NoA is that an overly-cautious driver can be a hazard. Right now, EAP brakes for cars that turn left in front of me even when they are so far away that there's no need. City NoA will have to make left turns across oncoming traffic. If it waits until there's no car within half a city block it may never make the turn. It needs to get much better at calculating when another car is actually in the way.
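For the left-turn case, "actually in the way" reduces to a time-gap calculation. A minimal sketch under a constant-speed assumption (all the numbers are invented):

```python
# Minimal gap-acceptance sketch for an unprotected left turn.
# Assumes oncoming traffic holds constant speed -- a simplification.

TURN_TIME_S = 4.0  # assumed time to clear the oncoming lane
MARGIN_S = 2.0     # assumed safety margin

def safe_to_turn(oncoming_cars) -> bool:
    """oncoming_cars: list of (distance_m, speed_mps) tuples."""
    for distance_m, speed_mps in oncoming_cars:
        if speed_mps <= 0:
            continue  # stopped or receding cars don't constrain the turn
        if distance_m / speed_mps < TURN_TIME_S + MARGIN_S:
            return False  # car arrives before we could clear the lane
    return True

# A car 90 m away at 13 m/s arrives in ~6.9 s, so the turn is safe
# even though that car is well within half a city block.
```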
 
What a Level 4 or 5 system does need to be able to do is map out how the trajectories of all the different vehicles will need to change to avoid coming too close to or colliding with each other, and prepare to alter its own trajectory. ...
That's what I'm describing: it's not telepathy, it's path prediction. It's definitely one of the trickier parts of the problem, and one that I've seen discussed in a few presentations.
 
This is a great discussion, and as someone who bought my Tesla mostly for the AP features it has been a bit disappointing to see them walk back the FSD capabilities. I always knew it was a bit suspect that we would achieve L3+ on city streets with the current sensor suite, but there is magic in hitting the steep part of the adoption curve, so I was optimistic about the fleet learning potential. Not really sure just how much that is happening.

Getting stuck on the frequency of the nags isn't the core of it. As others pointed out, the nags were not there in the first AP release, and are only loosely related to the current driving danger level. In my opinion the core question is what you consider "driving safely." How much better than the average human does the car have to be to qualify as safe? Is that 1x or 10x better than a human?

As others have pointed out, the trick now is defining the use case where it's safe to operate at full L3+ with the agreed-upon factor of safety. I can see the current sensors supporting an L3 system driving the speed limit on a divided highway in good weather with no road construction. The mass-release Tesla system, with HW3 and the current software, is certainly not L3 even on divided freeways.

Having a good transition from L3+ back to driver-assisted L2 will be a must for the incremental approach Tesla is taking.
 
This is a great discussion, and as someone who bought my Tesla mostly for the AP features it has been a bit disappointing to see them walk back the FSD capabilities. ...

I'd say the system is safer than a human driver when, on average, it has fewer accidents than human drivers do. But since some humans are better than others, I'd say that when the system has the same rate of accidents with serious injuries or fatalities as the average sober human driver, it's time to permit its use, and when it has half as many, it's time to aggressively promote its use and discourage people from driving. Obviously the change-over will take years, as people with functioning cars won't want to junk them and autonomous cars are beyond the price range of most car buyers. But the law could prohibit car makers from selling cars without autonomous systems, just as the law now prohibits them from selling cars without seat belts.
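The thresholds I'm proposing are just rate comparisons. A toy illustration with invented figures:

```python
# Toy comparison of crash rates per million miles. All figures are
# invented; real rates would need matched road types and conditions.

def per_million_miles(crashes: int, miles: float) -> float:
    return crashes / (miles / 1e6)

human_rate = per_million_miles(crashes=2000, miles=500e6)  # 4.0
system_rate = per_million_miles(crashes=900, miles=500e6)  # 1.8

if system_rate <= human_rate / 2:
    print("Half the human rate: aggressively promote autonomous use.")
elif system_rate <= human_rate:
    print("At or below the human rate: permit autonomous use.")
```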
 
Back on the topic of automatic city driving, Tesla is making really nice progress with the visualizations. Tesla just added more road markings, including disabled parking spots, to the FSD visualizations (2019.40.50.7).

This does make me more optimistic for automatic city driving. If the car can really see all these things and have the right driving policy, we might actually get city self-driving.
It misses many pedestrians.
 
It misses many pedestrians.
There's a difference between what Autopilot identifies and what is rendered as visualizations. If you've looked at Autopilot visualizations on city streets, you've probably noticed parked vehicles appearing then disappearing as the visualizations seem to want to show only vehicles that are driving in your current or immediately adjacent lanes. Similarly, bicycles are only rendered when they're in a driving lane but not in a bike lane or sidewalk.

So moving objects including vehicles and bicycles and pedestrians have to pass some filtering behavior to get rendered in the visualization. But other "fixed" objects, e.g., cones, trash cans, bollards, traffic lights, stop signs, can currently get rendered as long as they're in view and identified by the neural network.
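In other words, rendering is a filtered subset of detection. Here's a hypothetical sketch of that kind of filter; the rules are my guesses from the observed behavior, not Tesla's actual code:

```python
# Hypothetical render filter, guessed from observed behavior.

FIXED_TYPES = {"cone", "trash_can", "bollard", "traffic_light", "stop_sign"}
MOVING_TYPES = {"vehicle", "bicycle", "pedestrian"}

def should_render(obj_type: str, lane: str) -> bool:
    """lane: 'ego', 'adjacent', 'parked', 'bike_lane', 'sidewalk', ..."""
    if obj_type in FIXED_TYPES:
        return True  # fixed objects render whenever identified
    if obj_type in MOVING_TYPES:
        # moving objects only render in active driving lanes
        return lane in {"ego", "adjacent"}
    return False
```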
 
Your idea of an SAE level between 2 and 3, where some sort of partial attentiveness is required, is very dangerous and will never exist.
The context already stated several times for this scenario is that AP is better than a human. Elon has already said that AP is safer than a human in stop-and-go traffic on limited-access freeways.
Remember the context if you can please. It is a shame the forum software doesn't default to multilevel context quoting to help people in this situation.
 
There's a difference between what Autopilot identifies and what is rendered as visualizations. ...

So... it shows trash cans so that you can avoid hitting them, but does not show bicycles because... ???
 
it shows trash cans so that you can avoid hitting them, but does not show bicycles because... ???
It shows trash cans and other "fixed" objects as a quick feature to show off what Autopilot can detect without changing other behavior. The car already had existing functionality to react to bicyclists, so I would guess it was too risky to change the existing behavior for the visualizations preview / sneak peek release. As others have pointed out, the new visualizations can be quite buggy, but Tesla / Elon Musk wanted something to show in 2019.
 
Neural nets are just a tool. ... Tesla talked at Autonomy Day about using a neural net to predict when other drivers would change lanes (though I swear that they haven't actually put it in my car!)

This is a misunderstanding as to what neural nets do. Here is my understanding of the major “blocks” (and their outputs):
  • [Real world surroundings at a given moment in time] ==>
  • Cameras (video frames / pictures) ==>
  • Neural Nets (digital representation of objects in given video frames / pictures) ==>
  • Code that fuses NN outputs, ultrasonic sensor input, radar input, etc. — I think Karpathy refers to this as the “controller” (digital vector-space representation of surroundings) ==>
  • Driving logic — I think Karpathy refers to this as “the planner” (what to do based on surroundings, rule set) ==>
  • Vehicle control systems (steering, signal, brake commands)
  • —loop—
It’s nuanced, but Tesla did not say NNs were used to “predict” cut-ins; Karpathy said they were used to identify indicators of cut-ins in images sourced via the “data engine”.

Please correct my understanding if incorrect.
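As a rough sketch of how I read that loop (structure only; every name below is invented, and this is my interpretation of the Autonomy Day block diagram, not actual Tesla code):

```python
# Rough structural sketch of the blocks above. All names are invented.

def fuse(detections, radar_targets, ultrasonic_ranges):
    """Block 4: combine NN outputs with radar and ultrasonic input
    into a single vector-space representation of the surroundings."""
    return {"objects": detections, "radar": radar_targets,
            "ultrasonics": ultrasonic_ranges}

def drive_loop(cameras, nets, radar, ultrasonics, planner, controls):
    """Blocks 1-6 above, run as a loop. Every interface is invented."""
    while True:
        frames = [cam.capture() for cam in cameras]          # 2. cameras
        detections = [nets.detect(f) for f in frames]        # 3. neural nets
        world = fuse(detections, radar.read(), ultrasonics.read())  # 4. fusion
        steering, signal, brake = planner.plan(world)        # 5. driving logic
        controls.apply(steering, signal, brake)              # 6. vehicle controls
```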
 
  • 1 [Real world surroundings at a given moment in time] ==>
  • 2 Cameras (video frames / pictures) ==>
  • 3 Neural Nets (digital representation of objects in given video frames / pictures) ==>
  • ...
  • 4 Driving logic (what to do based on surroundings, rule set) ==>
  • ...
  • —loop—
It’s nuanced, but Tesla did not say NNs were used to learn how to identify cut-ins ... Karpathy was describing the entire “data engine” loop.

Yeah, but why aren’t solid objects in front, on the road (e.g., fire trucks), recognized in item 3 and automatically braked for in item 4?

Similarly, why aren’t cars/trucks crossing the street *seen* at 200-300 feet in front of the Tesla, with the Tesla slowing to zero if needed until the pathway is clear?

It doesn’t seem like the neural net sees a crossing or stopped object in the pathway, or else it doesn’t have driving logic to act on objects that far away.

I think these edge cases require Autopilot fixes before city driving with NoA is released. I’d like them in the current Autopilot...
 
Yeah, but why aren’t solid objects in front, on the road (e.g., fire trucks), recognized in item 3 and automatically braked for in item 4?

I think these edge cases require Autopilot fixes before city driving with NoA is released. I’d like them in the current Autopilot...

I’m not qualified to answer that — my previous comment was just pointing out the role of NNs, not making claims about AP’s nuances and capabilities.

That said, I think the consensus (or maybe this came from GreenTheOnly?) is that the logic for braking is heavily (or entirely) weighted toward radar input, which is known to be poor at detecting stationary objects.

Said another way (again, this is merely repeating what I’ve read): camera input currently isn’t given much “weight” when it comes to making braking decisions.
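If that reading is right, the braking decision behaves like a weighted vote between sensors. A hypothetical sketch (the weights are invented; this just restates the rumor in code form):

```python
# Hypothetical weighted braking decision. Weights are invented to
# mirror the rumor that radar dominates and that radar is poor at
# flagging stationary objects.

RADAR_WEIGHT = 0.8   # assumed
CAMERA_WEIGHT = 0.2  # assumed
BRAKE_THRESHOLD = 0.5

def should_brake(radar_conf: float, camera_conf: float,
                 target_moving: bool) -> bool:
    """Confidences in [0, 1] that an obstacle is in the driving path."""
    if not target_moving:
        radar_conf *= 0.3  # radar commonly discards stationary returns
    score = RADAR_WEIGHT * radar_conf + CAMERA_WEIGHT * camera_conf
    return score >= BRAKE_THRESHOLD

# A stopped fire truck the camera sees clearly (camera_conf=0.9) but
# radar down-weights (0.9 -> 0.27) scores 0.8*0.27 + 0.2*0.9 = 0.396,
# below the threshold -- so no braking. That matches the fire-truck
# failure mode asked about earlier in the thread.
```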
 
The context already stated several times for this scenario is that AP is better than a human. Elon has already said that AP is safer than a human in stop-and-go traffic on limited-access freeways. ...
I’d really like to read Elon Musk’s quote on that. He usually tries not to say things that aren’t true. I think you might have misinterpreted him.
Your hypothetical seems impossible. If there were widespread abuse of a Level 2 system (people not being vigilant), that system would almost certainly be banned. And if a company were to claim that their Level 2 system was safer than a human, they would open themselves up to liability in the event of accidents.

P.S. You can quote multiple messages.