Welcome to Tesla Motors Club

Elon: "Feature complete for full self driving this year"

I don't understand this argument. Just because the law says it is ok does not mean that we shouldn't do better with autonomous cars. Put differently, why should autonomous cars only have to meet the basic minimum standard of what we let humans do? Why not do better than humans and put more sensors to give autonomous cars superhuman vision and perception so that autonomous cars are even better than humans?

For any car, I can make a theoretically better car by adding feature X to it. Therefore, since no car is as good as it could be, every car is lacking and therefore cannot be allowed onto the streets.
Or:
Only people with a 10 year clean driving record should be allowed to drive.
 

I am not saying that autonomous cars need to be 100% perfect. My point is that autonomous cars are not required or limited to only do things the way humans do things. Autonomous cars are not limited to only using the same sensors that humans use. We can think outside the box when it comes to designing autonomous cars. So just because humans don't use lidar or radar or hd maps or smart traffic lights, does not mean that autonomous cars can't.

I do think we should have a high standard for autonomous cars. And I also think that it makes sense to use cameras, radar and lidar combined to maximize safety and reliability. And if improving infrastructure by installing smart traffic lights that communicate with autonomous cars can help maximize safety, then why not? Or if HD maps can help give autonomous cars more reliable and more accurate awareness of the map, then why not?
 

True, autonomous cars are not required to do things the same way people do them. However, there is also no requirement that autonomous cars need more sensors than what humans have.

If one is adding a sensor modality, it is either to waste money or cover for a failing in the current suite.

I2V is to cover for failure to read lights. People do not need it, therefore, a camera based car with sufficient processing does not need it. It also assumes that the data from the traffic light is correct and actionable. Everything has a false positive rate and a false negative rate. By adding more sensors you either combine the false negative rate (overly optimistic) or combine the false positive rate (overly pessimistic). What if another car runs the light? What if there is a pedestrian? Or an open manhole? A "Green" message is in no way a signal to go. Nor is "red" a definite signal to stop (approaching emergency vehicle behind you). Driving weighs the entire environment.

Yes, by adding infrastructure, you could have the system more efficient, but that is not required for the min viable product.
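The false-positive/false-negative trade-off described above can be made concrete. A toy sketch (assuming each sensor's errors are independent, which real, correlated sensors won't be; all numbers are illustrative):

```python
# OR-fusion: act on a detection if ANY sensor reports it.
# AND-fusion: act only if EVERY sensor agrees.
# Independence is assumed purely for illustration.

def or_fusion(fp_rates, fn_rates):
    """Misses only when every sensor misses; false-alarms if any sensor does."""
    fn = 1.0
    for r in fn_rates:
        fn *= r                  # combined miss rate shrinks (product)
    fp = 1.0
    for r in fp_rates:
        fp *= (1.0 - r)
    return 1.0 - fp, fn          # (combined FP, combined FN)

def and_fusion(fp_rates, fn_rates):
    """False-alarms only when all sensors do; misses if any sensor misses."""
    fp = 1.0
    for r in fp_rates:
        fp *= r                  # combined false-alarm rate shrinks
    fn = 1.0
    for r in fn_rates:
        fn *= (1.0 - r)
    return fp, 1.0 - fn

# Two hypothetical sensors, each with 1% FP and 2% FN:
print(or_fusion([0.01, 0.01], [0.02, 0.02]))   # FP grows, FN shrinks
print(and_fusion([0.01, 0.01], [0.02, 0.02]))  # FP shrinks, FN grows
```

Which combination you get depends on the fusion policy, which is exactly the "overly optimistic vs. overly pessimistic" choice the post describes.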
 
True, autonomous cars are not required to do things the same way people do them. However, there is also no requirement that autonomous cars need more sensors than what humans have.

Autonomous cars need whatever sensors will achieve safe and reliable autonomous driving. That's the requirement that the sensors have to meet. And if the goal is L5, then the sensors need to achieve safe and reliable L5 autonomous driving. I just happen to think that autonomous cars need more sensors than humans in order to guarantee that they can achieve that standard.

I2V is to cover for failure to read lights. People do not need it, therefore, a camera based car with sufficient processing does not need it. It also assumes that the data from the traffic light is correct and actionable. Everything has a false positive rate and a false negative rate. By adding more sensors you either combine the false negative rate (overly optimistic) or combine the false positive rate (overly pessimistic). What if another car runs the light? What if there is a pedestrian? Or an open manhole? A "Green" message is in no way a signal to go. Nor is "red" a definite signal to stop (approaching emergency vehicle behind you). Driving weighs the entire environment.

Well now you are talking about more than traffic light response. I completely agree that an autonomous car needs to detect other vehicles and objects and have the right driving policy to know when and how to cross an intersection. It is not as simple as just go on green and stop on red. But even more reason why you need enough sensors to achieve that goal safely and reliably.

Yes, by adding infrastructure, you could have the system more efficient, but that is not required for the min viable product.

No, I2V is not required for minimum viable product but we need to do better than a minimum viable product. The goal is safe and reliable L5. Can I sleep in the back seat and completely trust the driverless robotaxi to take me to my destination safely every single time (or pull over if it can't)? When approaching an intersection, you have the front cameras and radar to see the front and just the wide cameras and B pillar camera to see cross traffic. Sure, if everything is working perfectly and the software is good enough, I am sure it could do it. But it basically leaves no margin for error.
 
Well, we got a description from Tesla.

Honestly, this sounds outright dangerous.

Courtesy of @greentheonly Twitter account.

 

Agreed. So the car will automatically slow down when approaching green lights and only proceed through the green light if it gets the right driver input? I can see a lot of rear end crashes in the future if the driver is not paying attention. Drivers behind us are going to be thinking "why the F is the car slowing down at a green light?"

This line is also troubling: "As with all Autopilot features, you must continue to pay attention, and be ready to take immediate action, including braking because this feature may not stop for all traffic controls."

So Tesla is releasing a feature that requires active driver supervision, but is not providing proper driver monitoring (no driver facing camera) and saying "oops the feature may fail but you need to pay attention". That is terrible! Tesla is being incredibly irresponsible with this feature.
 

Thank you! I guess this confirms that it will stop at all lights.

This does not sound dangerous at all to me. Drivers do NOT want to stop at green lights; they will hit confirm long before the car starts slowing down....

If someone is abusing Tesla AP and not paying attention... they will feel the car slowing down and the more it slows the more obvious it will be... and they will hit confirm to continue....

I am not saying... that no one will ever fall asleep or be doing something really stupid, not pay attention, and end up someday stopping at a green light and being rear-ended.... Just like all of the other Autopilot features have resulted in accidents when people abuse the system.

This problem in particular... I feel is of least concern... compared to someone not paying attention in other situations that have nothing to do with traffic lights..... which can result in accidents.

OR without this feature.... someone could not be paying attention and run a red light...

Overall, I am confident this feature will increase safety far more than it has potential to decrease safety.
 
So Tesla is releasing a feature that requires active driver supervision, but is not providing proper driver monitoring (no driver facing camera) and saying "oops the feature may fail but you need to pay attention". That is terrible! Tesla is being incredibly irresponsible with this feature.

This is just the exact same thing that Tesla has been doing for 5 years... There is nothing about this feature in particular that sticks out compared to Autosteer in general and other features.

And Tesla is not the only OEM that has done this... (I know many OEMs have released DMS, but not all of them have).
 
Why don't traffic lights emit radio signals to tell cars what the traffic light colour is? In fact, maybe in the future cars won't have to read any signs; autonomous cars will be told what the signs are by radio signals.

I fully believe that 'active roads' are THE biggest safety feature we could add (as a society). reason we don't: we are averse to spending money on infrastructure (but it's ok to give tax breaks to the rich...)

yes, things can be set up to 'talk' and respond, and that is another degree-of-freedom that mostly does not get blocked by weather or light/dark, even objects. radio is GOOD STUFF - and yet I keep hearing people downplay the v2x concept.

like it or not, v2x is coming. where I work, we're already planning on it and looking into chipsets and what it takes to add it to our antenna system. other vendors are doing the same. you can't stop progress.

tesla: "lead, follow or get the hell out of the way" (lol).

(I hope they follow; but I don't see them leading v2x at all, sadly. they'll have to integrate it, whether they like it or not)
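The radio-broadcast idea is real: SAE J2735 defines SPaT (Signal Phase and Timing) messages for roughly this purpose. A minimal sketch, with invented field and function names, of treating such a message as one advisory input rather than a go command (echoing the earlier point that a "green" message is never by itself a signal to go):

```python
# Hypothetical sketch: a V2I signal-phase message is loosely modeled on
# SAE J2735 SPaT, but all names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class SignalPhaseMsg:
    intersection_id: int
    phase: str                # "green", "yellow", or "red"
    seconds_remaining: float  # time left in the current phase

def may_proceed(msg, camera_sees_green, path_is_clear):
    """A radio 'green' is advisory only: the car still needs its own
    perception to confirm the light AND an independently verified clear
    path (cross traffic, pedestrians, debris, open manholes...)."""
    radio_green = msg is not None and msg.phase == "green"
    return (radio_green and camera_sees_green) and path_is_clear

msg = SignalPhaseMsg(intersection_id=42, phase="green", seconds_remaining=6.0)
print(may_proceed(msg, camera_sees_green=True, path_is_clear=False))  # False
```

The design choice is that the broadcast can only ever make the car more cautious or more confident, never override onboard perception.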
 
So it’ll stop for all traffic control devices. Except the ones where it won’t. And you have to be ready to press the accelerator, as well as the brake, at all times.

10 year old me would be so disappointed by what the future looks like. Me from a couple years ago is similarly disappointed by what his $8000 will theoretically (AP2.0 representing!) get him.
 
I really wonder about pressing the accel, to say 'ok'.

just not sure this is fully thought out. I'm thinking this is the best compromise they could come up with, but I have doubts that it's going to stay the way it is, going forward.

confirm to CONTINUE driving thru green? wow. that's really legal CYA oriented, to me; and not very driver-friendly.
 
Also, it requires the latest maps and may not stop for all traffic controls.

So it will be map based, and may miss a light, stop sign, or two. No biggie, right?

it can't be ENTIRELY map based.

maps are static.

lights are not always the same color (duh!).

SOMETHING has to know the current operational status of the light.

Green has the owner's manual updates on the new stopping at red lights that give all the details. But it's mostly GPS based with some vision assist. In other words, our cars use map data and GPS to know where the traffic lights are, then use a little vision to detect if the light is red or green.
green on Twitter
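A minimal sketch of the "map/GPS to know where the light is, vision to tell what color it is" split described above. The map contents, the 80 m radius, and all names are invented for illustration:

```python
# The map prior answers WHERE a light should be; vision only has to
# answer WHAT color it is. Everything here is hypothetical.

import math

# Invented map: traffic light positions as (lat, lon) in degrees.
TRAFFIC_LIGHT_MAP = [(37.4220, -122.0841)]

def near_mapped_light(lat, lon, radius_m=80.0):
    """Crude flat-earth distance check against the map prior."""
    for (llat, llon) in TRAFFIC_LIGHT_MAP:
        dy = (lat - llat) * 111_320                            # m per deg latitude
        dx = (lon - llon) * 111_320 * math.cos(math.radians(llat))
        if math.hypot(dx, dy) <= radius_m:
            return True
    return False

def should_stop(lat, lon, vision_color):
    """Stop if the map says a light is ahead and vision doesn't
    confidently report green ('unknown' defaults to stopping)."""
    if not near_mapped_light(lat, lon):
        return False      # no mapped light here: feature does nothing
    return vision_color != "green"

print(should_stop(37.4220, -122.0841, "unknown"))  # True: stop by default
```

Note how this structure also explains the manual's warning: an unmapped or mis-mapped light is simply invisible to the feature, no matter how good the vision is.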
 
Given the very real chance of both false positives and false negatives - I don't see what else they can do.

I'd still use it - because I currently have to monitor anyway.

I don't think there is a perfect answer, yet. and so, the safest (including legally) thing for tesla to do is push all responsibility on the driver.

it does make sense that 'stopping' is generally safer than 'moving'. that's what cars will end up doing if their internal systems don't agree and they're in L4 mode. if computer(a) != computer(b) then pullover_to_safe_area_and_stop()

it's not the ideal thing, but it's what *can* be done with what is available today. and you hope that computers A and B agree with 'lots of nines', so that the pullover-and-stop routine almost never gets called in real life driving. still, you have to plan for it via design and code and testing.
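The computer(a) != computer(b) idea above can be sketched as a small supervisor loop. Everything here (names, the disagreement threshold, the string-valued plans) is hypothetical:

```python
# Toy dual-channel cross-check: two redundant planners each propose an
# action per tick; sustained disagreement triggers the safe-stop routine.

DISAGREE_LIMIT = 3   # consecutive disagreements tolerated before failover

def supervise(plans_a, plans_b):
    """Compare per-tick outputs of two redundant computers.
    Returns the tick index at which a safe stop is triggered, or None."""
    streak = 0
    for tick, (a, b) in enumerate(zip(plans_a, plans_b)):
        if a != b:
            streak += 1
            if streak >= DISAGREE_LIMIT:
                return tick   # i.e. pullover_to_safe_area_and_stop()
        else:
            streak = 0        # agreement resets the counter
    return None

# One transient glitch: no failover.
print(supervise(["go", "go", "stop", "go"], ["go", "go", "go", "go"]))      # None
# Sustained disagreement: failover at the third bad tick (index 3).
print(supervise(["go", "stop", "stop", "stop"], ["go", "go", "go", "go"]))  # 3
```

The threshold is the "lots of nines" knob: too low and transient sensor noise strands the car on the shoulder; too high and a genuinely broken channel drives for too long.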

so, I get where they're coming from. if the driver ignores repeated warnings, then stop the car. what else *can* you do.

but wow, stopping at green lights. I seem to remember that there's an old joke about that:

two guys are in a car, going down the road, when they come to a red light and the driver floors it and just makes it thru the intersection. "what are you doing!?" asks the passenger. "don't worry, my brother does this all the time". they drive some more and, yes, the same thing happens again. "yo, man!" "don't worry, my brother does this ALL the time, it's ok". finally, they are driving and come to an intersection with a green light. the driver slams on his brakes! the passenger, totally confused at this point, asks what's up. "I have to stop at green lights; I never know if my brother is ...". (yeah, you can finish that sentence yourself)

a joke meant for tesla? it's a really old joke. unless tesla also has a time machine...
 
The manual has a long list of disclaimers of where traffic light response won't work:

"Stopping at traffic lights and stop signs is not designed to stop at:
- Railroad crossings
- Keep out zones
- Toll booths
- Crosswalk systems
- Yield signs or temporary traffic lights and stop signs (such as at construction zones)
- Miscellaneous traffic u-turn lights, bicycle and pedestrian crossing lights, lane availability lights, etc...

In addition, stopping at traffic lights and stop signs is particularly unlikely to operate as intended, can disengage, or may not operate when one or more of the following conditions are present:
- Visibility is poor (heavy rain, snow, etc) or weather conditions are interfering with camera or sensor operation.
- Bright light (such as direct sunlight) is interfering with the view of camera(s)
- A camera is obstructed, covered, damaged or not properly calibrated.
- Driving on a hill or on a road that has sharp curves on which the cameras are unable to see upcoming traffic lights or stop signs.
- A traffic light, stop sign, or road marking is obstructed (for example, a tree, a large vehicle etc)
- Model 3/Y is being driven very close to a vehicle in front of it, which is blocking the view of camera

Warning: the limitations listed above are not an exhaustive list of reasons why Model 3/Y may not operate as expected. Many unforeseen circumstances can adversely impact the accurate operation of stopping at traffic lights and stop signs. Using this feature does not reduce or eliminate the need to drive attentively and responsibly. You must be prepared to take appropriate and immediate action at all times."
