Elon: "Feature complete for full self driving this year"

The crazy thing about this update is once they roll it out, it'd have to reliably stop at all red lights.... Lol I'd be impressed if they roll this out within the next 3 months

Of course, it could take several months for the wide rollout. Currently Autopilot doesn't reliably stop at all stoplights... (it stops at none). I assume you are thinking that if the system is now acting like it will stop at a light, users will abuse/misuse the system more, leading to greater risk? Well, I think that concern is mitigated because it looks like Tesla is going to require the user to confirm to proceed every time after it "acts like it's going to come to a stop."
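
To be concrete about what "confirm to proceed" means, here's a toy sketch of that kind of gate. None of this is Tesla's actual code; the function and names are made up purely to illustrate the behavior: the car plans to stop for anything it flags as a traffic control unless the driver explicitly confirms.

```python
# Hypothetical sketch of a "confirm to proceed" gate -- not Tesla's code.
def plan_action(signal_detected: bool, driver_confirmed: bool) -> str:
    """Return 'proceed' only when the driver has confirmed; otherwise the
    car plans to stop for any detected (or mapped) traffic control."""
    if not signal_detected:
        return "proceed"                      # nothing ahead to stop for
    return "proceed" if driver_confirmed else "stop"


if __name__ == "__main__":
    print(plan_action(signal_detected=True, driver_confirmed=False))  # stop
    print(plan_action(signal_detected=True, driver_confirmed=True))   # proceed
```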
 
The risk is that AP needs to recognize all lights. If it doesn't recognize one as a light and it happens to be red, you just ran a red light.

I think the suggestion I was making possibly went over your head.

With this update, you can still stop for the lights on your own.

With Autopilot as of today, and over the last 5 years... you will run a red light if you don't stop on your own.

If it doesn't recognize one as a light and it happens to be red, you just ran a red light.

Not all the way true... most of the time there are multiple lights at every intersection, and other clues and hints too. And even if it doesn't detect any of them as red... it still doesn't mean it's going through... it needs to detect the lights as green.
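
Put differently, the safe default would be "stop unless you positively see a green," not "go unless you see a red." A minimal sketch of that kind of check, with made-up detection tuples and a made-up confidence threshold:

```python
# Toy "default to stop" check. Detections, colours and the 0.9 threshold are
# invented for illustration; this is not any real perception-stack output.
def intersection_passable(detections: list[tuple[str, float]],
                          min_confidence: float = 0.9) -> bool:
    """detections: (colour, confidence) for each light head the cameras see.
    Occluded, missing or low-confidence lights never satisfy the green test,
    so the answer falls back to 'stop'."""
    return any(colour == "green" and conf >= min_confidence
               for colour, conf in detections)


if __name__ == "__main__":
    print(intersection_passable([("green", 0.97), ("green", 0.95)]))  # True
    print(intersection_passable([("red", 0.99)]))                     # False
    print(intersection_passable([]))          # nothing detected -> False (stop)
```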
 
it still doesn't mean it's going through... it needs to detect the lights as green.

That won't cover enough cases: lights being obscured, the camera blinded by the sun, or any number of other camera-related things.

It really needs a rich fusion of info to be able to get safely through an *intersection*.

I still am very doubtful it even can. I stand to be corrected, but I don't think the 3 can, with its current sensors and cameras. There isn't enough side intelligence or redundancy, and lives *matter* when you are crossing an intersection. You really need high levels of confidence and I just don't see it with the current hardware sensors.
 
That won't cover enough cases: lights being obscured, the camera blinded by the sun, or any number of other camera-related things.

It really needs a rich fusion of info to be able to get safely through an *intersection*.

I still am very doubtful it even can. I stand to be corrected, but I don't think the 3 can, with its current sensors and cameras. There isn't enough side intelligence or redundancy, and lives *matter* when you are crossing an intersection. You really need high levels of confidence and I just don't see it with the current hardware sensors.

I agree. That's why it requires a human driver to make sure it gets through each intersection safely.
 
That won't cover enough cases: lights being obscured, the camera blinded by the sun, or any number of other camera-related things.

It really needs a rich fusion of info to be able to get safely through an *intersection*.

I still am very doubtful it even can. I stand to be corrected, but I don't think the 3 can, with its current sensors and cameras. There isn't enough side intelligence or redundancy, and lives *matter* when you are crossing an intersection. You really need high levels of confidence and I just don't see it with the current hardware sensors.

How does a camera being blinded cause false positive detection of a green light?

What additional sensors do you think are necessary to detect a green light source?
 
That won't cover enough cases: lights being obscured, the camera blinded by the sun, or any number of other camera-related things.

It really needs a rich fusion of info to be able to get safely through an *intersection*.

I still am very doubtful it even can. I stand to be corrected, but I don't think the 3 can, with its current sensors and cameras. There isn't enough side intelligence or redundancy, and lives *matter* when you are crossing an intersection. You really need high levels of confidence and I just don't see it with the current hardware sensors.

I am skeptical as well. If you look at every other successful autonomous driving vehicle, like Cruise or Waymo, they have that rich fusion of lidar, camera and radar info in order to get an accurate 3D map of the objects around the car, because safely getting through an intersection requires more than just detecting traffic lights. You also need to detect cross traffic, vehicles turning into your path, pedestrians jaywalking, etc. Waymo cars now have lidar that can see 300 meters 360 degrees around the car, as well as high-res cameras that can see 500 meters 360 degrees around the car. Waymo cars also have perimeter lidar, cameras and radars that help see cross traffic and around large objects.

For example, what happens when the light is green but a car on the left, perpendicular to your path, blows through its red light and would hit you? Just because the light is green does not automatically mean it is safe to drive through. So accurately tracking the path of all objects approaching an intersection is critical. And Tesla is relying on just the front wide camera and the side B-pillar camera to track these objects driving perpendicular to you at intersections. That's not a lot.
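
To make that concrete: a green light should only ever be a necessary condition, not a sufficient one. A very rough time-to-conflict check (all numbers, names and the constant-speed assumption below are invented for illustration) might look like:

```python
# Very rough cross-traffic check: even with a green light, don't enter the
# intersection if a tracked object is predicted to reach the conflict point
# at about the same time we do. All values and names are illustrative only.
from dataclasses import dataclass


@dataclass
class Track:
    distance_to_conflict_m: float   # along its own path to the crossing point
    speed_mps: float                # current speed (assumed constant here)

    def eta_s(self) -> float:
        if self.speed_mps <= 0.1:
            return float("inf")     # effectively stopped
        return self.distance_to_conflict_m / self.speed_mps


def safe_to_enter(light_is_green: bool, ego_eta_s: float,
                  cross_traffic: list[Track], margin_s: float = 2.0) -> bool:
    """A green light is necessary but not sufficient: also require that no
    cross-traffic track reaches the conflict point within margin_s of us."""
    if not light_is_green:
        return False
    return all(abs(t.eta_s() - ego_eta_s) > margin_s for t in cross_traffic)


if __name__ == "__main__":
    red_light_runner = Track(distance_to_conflict_m=40.0, speed_mps=20.0)  # ~2 s out
    print(safe_to_enter(True, ego_eta_s=2.5, cross_traffic=[red_light_runner]))  # False
    print(safe_to_enter(True, ego_eta_s=2.5, cross_traffic=[]))                  # True
```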
 
I agree. That's why it requires a human driver to make sure it gets through each intersection safely.

Yes, but I think a big question is how long the human driver will need to supervise. Is it just a matter of improving the software before the driver can be removed, or does the system require further hardware upgrades? Because if the system always requires the driver to supervise, it could still be a very convenient "city driver assist," but it won't be true autonomous driving.
 
Currently, in my neighborhood, we have a blinking yellow light.

With the latest .12 firmware, the red light warning goes off every time I pass that light on AP. It's quite annoying and frustrating. Now that I think of it, there is also a stop sign there too, but it's for the adjacent joining road.

Hopefully, it won't stop in the middle of the highway.

[attached photo of the intersection]
 
How does a camera being blinded cause false positive detection of a green light?

I believe the worry is that a blinded camera won't see a stoplight, whether it's green or red. If it happens to be red, the car will blow through it. If it's green, no harm.

I think this could be avoided by checking GPS/map data for intersections. That's not HD-level mapping data, just a basic check for whether there's a light at the upcoming intersection.
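
Something like this is what I mean: not HD maps, just a coarse "is this intersection signalized?" lookup to sanity-check the cameras. The map format, intersection IDs and the 150 m lookahead below are all made up:

```python
# Toy cross-check between a coarse map ("there is a traffic light at this
# intersection") and camera perception. The schema and numbers are invented
# for illustration -- not any real Tesla or map-provider format.

# Map: intersection id -> True if it is signalized.
SIGNALIZED = {"main_and_1st": True, "main_and_2nd": False}


def perception_check(upcoming_intersection: str,
                     distance_m: float,
                     camera_sees_light: bool,
                     lookahead_m: float = 150.0) -> str:
    """If the map says a light should be visible by now and the cameras see
    nothing, treat it as an occluded/blinded-camera case and slow down."""
    expects_light = SIGNALIZED.get(upcoming_intersection, False)
    if expects_light and distance_m < lookahead_m and not camera_sees_light:
        return "slow_and_alert_driver"   # map and cameras disagree
    return "normal"


if __name__ == "__main__":
    print(perception_check("main_and_1st", 80.0, camera_sees_light=False))  # slow_and_alert_driver
    print(perception_check("main_and_2nd", 80.0, camera_sees_light=False))  # normal
```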
 
Why don't traffic lights emit radio signals to tell cars what the traffic light colour is? In fact, maybe in the future cars won't have to read any signs; autonomous cars will be told what the signs are by radio signals.

On that note, I heard once that first responders were able to switch traffic lights to green remotely as they approached them in their vehicles.
 
Why don't traffic lights emit radio signals to tell cars what the traffic light colour is? In fact, maybe in the future cars won't have to read any signs; autonomous cars will be told what the signs are by radio signals.

They have this in the lights on the Strip in Vegas, for example. It would be a Herculean effort to make sure all lights in the country are equipped with this.

On that note, I heard once that first responders were able to switch traffic lights to green remotely as they approached them in their vehicles.

You can still do this on many lights with an IR transmitter.
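
For what it's worth, a standard for the broadcast idea already exists: SAE J2735 SPaT (signal phase and timing) messages, sent over DSRC or C-V2X in some pilot deployments. A heavily simplified, hypothetical consumer of that kind of broadcast (not the real wire format) could look like:

```python
# Toy consumer of a V2I "signal phase and timing" style broadcast.
# Real deployments use SAE J2735 SPaT messages over DSRC or C-V2X; the
# dataclass below is a simplified stand-in, not the actual message format.
from dataclasses import dataclass


@dataclass
class SignalPhaseMessage:
    intersection_id: str
    phase: str              # "red" | "yellow" | "green" for our approach
    seconds_remaining: float


def advisory(msg: SignalPhaseMessage, eta_s: float) -> str:
    """Decide what to do given the broadcast phase and our ETA to the stop line."""
    if msg.phase == "green" and msg.seconds_remaining > eta_s:
        return "proceed"            # light should still be green on arrival
    if msg.phase == "red" and msg.seconds_remaining < eta_s:
        return "coast"              # it may turn green before we arrive
    return "prepare_to_stop"


if __name__ == "__main__":
    print(advisory(SignalPhaseMessage("lv_strip_01", "green", 12.0), eta_s=6.0))  # proceed
    print(advisory(SignalPhaseMessage("lv_strip_01", "green", 3.0), eta_s=6.0))   # prepare_to_stop
```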
 
Currently, in my neighborhood, we have a blinking yellow light.

With the latest .12 firmware, the red light warning goes off every time I pass that light on AP. It's quite annoying and frustrating. Now that I think of it, there is also a stop sign there too, but it's for the adjacent joining road.

Hopefully, it won't stop in the middle of the highway.

Wow, I can see why that sign would confuse it.
 
That won't cover enough cases: lights being obscured, the camera blinded by the sun, or any number of other camera-related things.

It really needs a rich fusion of info to be able to get safely through an *intersection*.

I still am very doubtful it even can. I stand to be corrected, but I don't think the 3 can, with its current sensors and cameras. There isn't enough side intelligence or redundancy, and lives *matter* when you are crossing an intersection. You really need high levels of confidence and I just don't see it with the current hardware sensors.

A one-eyed 85-year-old grandpa with a slight cataract and below-average IQ is legally considered safe enough to drive through an intersection with his sensor suite. His CPU can even have early Alzheimer's disease. No second brain or heart is required for stroke, epilepsy or cardiac-arrest redundancy.

Here in Finland you can have a license even if you are deaf (hearing is only required for heavy-vehicle drivers). No laser beam emission from the driver is required.

I'm not sure what you think is lacking in the car's sensors to achieve at least human-level safety.
 
The 85-year-old can turn his head and has a lifetime of "AI rules" in his brain to sort noise from actual content.

When even slow Summon can't know about the edges of things (lots of reports of scratched cars when Summon is used), what makes you think that driving FASTER is going to make things better?

If it can't do Summon 99.9% well, there's no hope of it having enough of a 360 view to do city driving well.

I'm pretty sure it's going to need extra help (I am a big proponent of V2X) if it's going to be safe enough to be trusted with our lives.
 
A one-eyed 85-year-old grandpa with a slight cataract and below-average IQ is legally considered safe enough to drive through an intersection with his sensor suite. His CPU can even have early Alzheimer's disease. No second brain or heart is required for stroke or cardiac-arrest redundancy.

I don't understand this argument. Just because the law says it is ok does not mean that we shouldn't do better with autonomous cars. Put differently, why should autonomous cars only have to meet the basic minimum standard of what we let humans do? Why not do better than humans and put more sensors to give autonomous cars superhuman vision and perception so that autonomous cars are even better than humans?
 
I don't understand this argument. Just because the law says it is ok does not mean that we shouldn't do better with autonomous cars. Put differently, why should autonomous cars only have to meet the basic minimum standard of what we let humans do? Why not do better than humans and put more sensors to give autonomous cars superhuman vision and perception so that autonomous cars are even better than humans?

1) They will do better just with vision; I wasn't arguing that we should only aim for the "old geezer" level of performance.
2) An excess sensor suite limits fleet size and thus data collection.
3) You can always add as much as you want later, once the basic system is working.
 