Elon: "Feature complete for full self driving this year"

I don't fault Tesla for using map data. But it is funny how Tesla fans say that HD maps are a crutch. Yet, Tesla uses map data.

I wouldn't call roads and intersections HD.

And I noted in another post that all these limitations would probably be solved with extra sensors and/or HD mapping. These limitations admit that there are situations where cameras alone are not sufficient. So really, the need for HD mapping should be obvious, since there are cases where cameras alone won't be enough, like when a traffic light or stop sign is obstructed. You need HD mapping to tell the car about these objects that your sensors can't detect.

Yeah... you said there is some sensor that would fix that, but gave no details on how these unspecified sensors would see through solid objects. Nor how HD maps can tell you what state a light is in.

The limitations do not mean cameras are insufficient (else people could not handle it either), they mean that the processing is not currently proven reliable in those edge cases.

And they don't just use it, it's vitally critical. No map? It won't stop at the intersection.
There is a map but no light? It will still stop at the intersection.

Which is why they said it won't work with temporary lights or intersections.

But don't worry, the Tesla people will revise history to reshape their narrative and make this totally okay.

Example:

Not sure how you came to your conclusions about how the system operates. If there is a known intersection: it stops. If there is a traffic control device detected: it stops. Stopping where there isn't a light protects against the cases of pedestrians and improper cross traffic.

I think maybe you were referring to this warning:
"Yield signs or temporary traffic lights or stop signs (such as at construction areas)"

As in, atypical arrangements NOT in terms of unmapped intersections. Big difference, huge.

Yes, they are using maps to help and train. However, the system is being designed to not require them to function. In other words, it will be able to drive on a street the first time.
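To make the behavior being argued about concrete, here is the stopping rule as this post describes it, as a toy Python sketch. It's an OR of map knowledge and camera detection, paraphrased from the discussion, and obviously not Tesla's actual code:

```python
# Toy sketch of the stopping rule described above: stop if EITHER the
# map knows about the intersection OR the cameras detect a control
# device. Paraphrased from this discussion, not Tesla's actual logic.

def should_stop(mapped_intersection: bool, device_detected: bool) -> bool:
    return mapped_intersection or device_detected

# Unmapped but visible temporary light: still stops (vision alone).
print(should_stop(mapped_intersection=False, device_detected=True))   # True
# Mapped intersection with the light occluded: still stops (map alone).
print(should_stop(mapped_intersection=True, device_detected=False))   # True
# Unmapped and nothing detected: this is the gap being argued about.
print(should_stop(mapped_intersection=False, device_detected=False))  # False
```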
 
Why do you think it’s mutually exclusive? Right now you can press the pedal while on Autosteer to temporarily override the speed and make it go faster. With light recognition you will also temporarily make the car go faster, at the same time confirming the pass through the light.

I don't WANT it to 'go faster', I want it to CONTINUE ONWARD.

big diff, don't you think?

tapping accel for anything other than a bit more torque is just weird. I don't like the idea and I'm not sure I want to train my leg to do any 'taps' that are not directly related to going faster. to maintain, you maintain. it's been that way for decades and decades. you don't get to redefine that, now.
 
just to be clear, I'm basically ok with the stalk as a confirm (although, having only regular AP and not anything like NOA, I don't know if I ever had to use the stalk for confirm; regular AP has no confirm modes, does it?).

I think Tesla should make the accel pedal nothing other than 'add or lower torque'. i.e., a gas pedal, pure and simple.

the car should stop where it's supposed to stop and go where it's not. the stalk should be for *mode changes* but not frequent 'are you sure?' stuff.

too many confirms is bad, and we're going to get them at every turn!
 
tapping accel for anything other than a bit more torque is just weird. I don't like the idea and I'm not sure I want to train my leg to do any 'taps' that are not directly related to going faster. to maintain, you maintain. it's been that way for decades and decades. you don't get to redefine that, now.
Not when using cruise control. When using cruise control pressing the accelerator does not give more torque if the cruise control is already applying torque.
 
don't you have to exceed the point, on the pedal, where you would have had to press down to get it going at the current rate? i.e., overcome that before it has any effect?
Of course. I bet this feature is exactly the same way.
I think they should also add that dead zone when going downhill so that you can press the accelerator without causing acceleration.
 
I never liked dead spots in steering wheels and I don't like them in accel pedals.

I understand that we rest our feet on the pedal. perhaps that's part of the behavior that should change - and there's no extra room to move your foot over, and it's hard to avoid being near the accel pedal. why haven't we taken the revolution a bit further? I think we still can, if we show 'courage' (sorry apple, lol)
 
I never liked dead spots in steering wheels and I don't like them in accel pedals.
There isn't a dead spot for the accelerator pedal in regular driving, but the unpressed "zero" behavior is different especially with Hold stopping mode. If we say the unpressed behavior is -10% torque, pressing the accelerator just slightly maybe increases to -5% torque, i.e., still slowing down the car but the green regenerative braking bar is not as large.

And if Autopilot is already maintaining a speed with some positive torque, pressing the accelerator could seem like there's "dead spots" because you need to get through the negative torque portion then some more to match what Autopilot is already applying.
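To illustrate, here's a toy pedal-to-torque map using the -10% baseline mentioned above. The coast point and all breakpoints are made up for illustration, not Tesla's actual calibration:

```python
# Minimal sketch of the pedal-to-torque idea described above.
# The -10% "Hold mode" baseline and the 25% coast point are
# illustrative numbers, not Tesla's actual calibration.

def pedal_to_torque(pedal: float, hold_baseline: float = -0.10) -> float:
    """Map pedal position (0.0 = released, 1.0 = floored) to torque
    demand as a fraction of max (+1.0 = full drive, negative = regen)."""
    if pedal <= 0.0:
        return hold_baseline                      # regen/hold when released
    if pedal < 0.25:
        # early travel only reduces regen toward zero torque ("coast point")
        return hold_baseline * (1.0 - pedal / 0.25)
    # beyond the coast point, remaining travel maps to positive torque
    return (pedal - 0.25) / 0.75

# If Autopilot is already applying, say, +20% torque to hold speed,
# the driver must press past the point where this map exceeds 0.20
# before the pedal has any visible effect -- the perceived "dead spot".
for p in (0.0, 0.1, 0.25, 0.4, 0.6, 1.0):
    print(f"pedal {p:.2f} -> torque {pedal_to_torque(p):+.2f}")
```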
 
Yeah... you said there is some sensor that would fix that, but gave no details on how these unspecified sensors would see through solid objects. Nor how HD maps can tell you what state a light is in.

HD maps would tell you where the object is. So HD maps would tell you where a stop sign is that your vision can't see, so you would know where to stop even if the vision can't see the stop sign. For traffic lights, HD maps would only tell you where the light is, not whether it is green. But knowing where the traffic light is, is half the battle. That's why Tesla is relying on map data for traffic lights: with map data, you at least know where the traffic light is, and the camera vision can then focus on the color of the light. Also, if the car knows where a traffic light should be, it helps avoid the problem of your camera vision thinking it sees a traffic light that is not a traffic light.
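As a rough sketch of how that map-as-prior idea could gate the camera, something like the following. `project_to_image`, `classify_color`, and the map record format are hypothetical stand-ins (stubbed here so the sketch runs), not any real stack's API:

```python
from typing import Optional, Tuple

# Hypothetical stand-ins for a real perception stack, stubbed so the
# control flow below actually runs.
def project_to_image(xyz, ego_pose) -> Optional[tuple]:
    """Project a mapped 3D light position into an expected pixel ROI."""
    return (100, 40, 140, 90)              # (x0, y0, x1, y1), stubbed

def classify_color(image, roi) -> Optional[str]:
    """Run the color classifier only inside the ROI; None if occluded."""
    return image.get(roi)                  # stub: image is a dict roi -> color

def traffic_light_state(map_lights, ego_pose, image) -> Tuple[bool, Optional[str]]:
    """Use the HD map as a prior: look for a light only where the map
    says one exists, and report 'occluded' if it should be visible but
    the camera can't confirm it (so the car can slow instead of guess)."""
    for light in map_lights:
        roi = project_to_image(light["xyz"], ego_pose)
        if roi is None:                    # mapped light is out of frame
            continue
        color = classify_color(image, roi)
        return True, color if color else "occluded"
    return False, None                     # no mapped light: don't hallucinate one

print(traffic_light_state([{"xyz": (10.0, 0.0, 5.0)}], None,
                          {(100, 40, 140, 90): "red"}))  # (True, 'red')
```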
 
HD maps would tell you where the object is. So HD maps would tell you where a stop sign is that your vision can't see, so you would know where to stop even if the vision can't see the stop sign. For traffic lights, HD maps would only tell you where the light is, not whether it is green. But knowing where the traffic light is, is half the battle. That's why Tesla is relying on map data for traffic lights: with map data, you at least know where the traffic light is, and the camera vision can then focus on the color of the light. Also, if the car knows where a traffic light should be, it helps avoid the problem of your camera vision thinking it sees a traffic light that is not a traffic light.
Yes, but you said:
In fact, extra sensors and/or HD maps would solve all these limitations:

- Visibility is poor (heavy rain, snow, etc) or weather conditions are interfering with camera or sensor operation.
- Bright light (such as direct sunlight) is interfering with the view of camera(s)
- A camera is obstructed, covered, damaged or not properly calibrated.
- Driving on a hill or on a road that has sharp curves on which the cameras are unable to see upcoming traffic lights or stop signs.
- A traffic light, stop sign, or road marking is obstructed (for example, a tree, a large vehicle etc)
- Model 3/Y is being driven very close to a vehicle in front of it, which is blocking the view of the camera
Which you then self-referenced a few posts later.
Agree. And Tesla itself puts in the manual that traffic light response may not be reliable in these conditions:

And I noted in another post that all these limitations would probably be solved with extra sensors and/or HD mapping. These limitations admit that there are situations where cameras alone are not sufficient.

So, what sensor technologies or level of HD map allows you to know the color of a traffic light you cannot see, so that you do not need to slow before getting to the point where it is not occluded?
Further, what level of sensor or HD map allows you to see any unexpected obstacle around a corner or over a hill?

Or are you saying cameras could do it, but the current software is not 100%? Your second quote seems to lean that way versus other sensors being required.

I've seen oncoming headlights and traffic lights around cars and corners with my eyes due to reflections, but I don't know any non-camera sensor that can do that.


I'm drifting on a tangent here:
I've mentioned previously that humans do not practice safe driving, specifically because we drive assuming the road is clear. FSD with high accident avoidance would not operate like that and thus would be less popular.

It is akin to the issue of:
What speed is needed to approach a green-light four-way intersection to 100% ensure one would not get T-boned by someone on the cross street?
Same deal for longitudinal separation on a two-lane road to prevent getting hit head-on. Safe and practical live on a spectrum.
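That green-light question has a simple back-of-envelope answer that shows the problem. Assuming a 1 s reaction time and 7 m/s² of hard braking (both illustrative numbers), the fastest speed that still guarantees a stop before the conflict zone falls out of d = v·t_react + v²/(2a):

```python
import math

# Solve v*t + v^2/(2a) = d for v (positive root): the fastest approach
# speed that still lets you stop before the conflict zone if cross
# traffic first becomes visible at sight distance d. Illustrative
# numbers: 1.0 s reaction, 7 m/s^2 hard braking.

def max_safe_speed(sight_m: float, react_s: float = 1.0,
                   decel: float = 7.0) -> float:
    return decel * (-react_s + math.sqrt(react_s**2 + 2 * sight_m / decel))

for d in (10, 20, 40):
    v = max_safe_speed(d)
    print(f"sight {d:2d} m -> max {v:4.1f} m/s ({v * 3.6:4.1f} km/h)")
# sight 10 m -> ~6.7 m/s (~24 km/h): well below typical urban speeds,
# which is the point: safe and practical live on a spectrum.
```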
 
So, what sensor technologies or level of HD map allows you to know the color of a traffic light you cannot see, so that you do not need to slow before getting to the point where it is not occluded?
Further, what level of sensor or HD map allows you to see any unexpected obstacle around a corner or over a hill?

Seeing the color of traffic lights would require I2V, like smart traffic lights that send the info to the car. HD maps only help with the location of traffic lights, not the color of the lights. But HD maps would give the location, so you could at least prepare for a traffic light or stop sign.

For example, in these limitations:
- Driving on a hill or on a road that has sharp curves on which the cameras are unable to see upcoming traffic lights or stop signs.
- A traffic light, stop sign, or road marking is obstructed (for example, a tree, a large vehicle etc)
- Model 3/Y is being driven very close to a vehicle in front of it, which is blocking the view of the camera

These are all cases where, if the car waits a bit, the cameras will be able to see the traffic light or stop sign. So if the HD map can tell the car in advance to expect a temporarily obstructed traffic light or stop sign, the car can prepare, maybe slow down, until the cameras can see the traffic light or stop sign. But again, I2V would be helpful in sending the color of the traffic light to the car in advance.
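For illustration, that "prepare and slow down" policy could be as simple as capping speed to what the car can comfortably stop from within the remaining distance to the mapped signal. The 3 m/s² comfort-braking figure and the function shape are assumptions, not any real system's parameters:

```python
import math

COMFORT_DECEL = 3.0   # m/s^2, assumed gentle braking rate

def speed_cap(dist_to_signal_m: float, signal_visible: bool,
              cruise_mps: float) -> float:
    """If a mapped signal ahead is not yet confirmed by the camera,
    cap speed so the car can still stop comfortably before it
    (v = sqrt(2*a*d)); once the camera sees it, drive normally."""
    if signal_visible:
        return cruise_mps
    stoppable = math.sqrt(2 * COMFORT_DECEL * max(dist_to_signal_m, 0.0))
    return min(cruise_mps, stoppable)

# Mapped-but-occluded light 50 m ahead at 15 m/s: no slowdown needed yet.
print(round(speed_cap(50, False, 15.0), 1))   # 15.0
# Same light at 20 m and still not visible: cap to ~11 m/s.
print(round(speed_cap(20, False, 15.0), 1))   # 11.0
```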

In terms of seeing around corners, Waymo cars have perimeter sensors with cameras, radar and lidar on the sides of the car, pointed perpendicular to the car, to help see "around corners", since these sensors will detect objects before the entire car is fully in the path.

[Images: Waymo next-generation sensor suite and Jaguar I-Pace sensor callout diagrams]


Or are you saying cameras could do it, but the current software is not 100%? Your second quote seems to lean that way versus other sensors being required.

With the first two limitations:
- Visibility is poor (heavy rain, snow, etc) or weather conditions are interfering with camera or sensor operation.
- Bright light (such as direct sunlight) is interfering with the view of camera(s)

Better camera vision software could help. With a good enough camera vision NN, cameras could solve these.

But we know that there are other sensors that can handle these limitations very well. Radar is not affected by rain or snow. Lidar can work in some rain conditions. Lidar and radar are not affected by bright or direct sunlight. So I think having lidar and radar to supplement camera vision is helpful to mitigate these limitations.
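As a toy illustration of that supplementing idea, per-sensor confidence weights by weather could keep detection usable when cameras degrade. All the weights here are invented for illustration, not measured sensor performance:

```python
# Per-sensor confidence weights by weather: cameras degrade in rain
# and glare while radar mostly doesn't. All numbers invented for
# illustration; real stacks learn or calibrate this.

WEATHER_WEIGHT = {
    "clear":      {"camera": 1.0, "radar": 1.0, "lidar": 1.0},
    "heavy_rain": {"camera": 0.3, "radar": 1.0, "lidar": 0.6},
    "sun_glare":  {"camera": 0.2, "radar": 1.0, "lidar": 1.0},
}

def fused_confidence(detections: dict, weather: str) -> float:
    """Combine raw per-sensor detection scores, discounting sensors
    the current weather degrades; trust the best remaining sensor."""
    w = WEATHER_WEIGHT[weather]
    return max(score * w[sensor] for sensor, score in detections.items())

obj = {"camera": 0.9, "radar": 0.8, "lidar": 0.85}
for wx in WEATHER_WEIGHT:
    print(wx, round(fused_confidence(obj, wx), 2))
# clear 0.9 / heavy_rain 0.8 / sun_glare 0.85: the object stays
# detectable even when the camera score collapses.
```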

It is akin to the issue of:
What speed is needed to approach a green-light four-way intersection to 100% ensure one would not get T-boned by someone on the cross street?
Same deal for longitudinal separation on a two-lane road to prevent getting hit head-on. Safe and practical live on a spectrum.

Well, this is all driving policy stuff. Yes, there are definitely a lot of rules and exceptions that you need to program into an autonomous car in order to get it to handle different driving cases safely. There is no one-size-fits-all driving rule.
 
With the first two limitations:
- Visibility is poor (heavy rain, snow, etc) or weather conditions are interfering with camera or sensor operation.
- Bright light (such as direct sunlight) is interfering with the view of camera(s)

Better camera vision software could help. With a good enough camera vision NN, cameras could solve these.

But we know that there are other sensors that can handle these limitations very well. Radar is not affected by rain or snow. Lidar can work in some rain conditions. Lidar and radar are not affected by bright or direct sunlight. So I think having lidar and radar to supplement camera vision is helpful to mitigate these limitations.
You're right that radar and lidar can provide functionality especially when visibility is poor, but how do they help in determining the color of the traffic light? Determining the color of a traffic light is a visible-spectrum problem, so addressing these limitations needs camera-side improvements: potentially different camera sensors, camera placement, or vision NNs.
 
But we know that there are other sensors that can handle these limitations very well. Radar is not affected by rain or snow. Lidar can work in some rain conditions. Lidar and radar are not affected by bright or direct sunlight. So I think having lidar and radar to supplement camera vision is helpful to mitigate these limitations.
Neither lidar nor radar can determine traffic light colors though, which is the topic those limitations come from.

These are all cases where, if the car waits a bit, the cameras will be able to see the traffic light or stop sign. So if the HD map can tell the car in advance to expect a temporarily obstructed traffic light or stop sign, the car can prepare, maybe slow down, until the cameras can see the traffic light or stop sign. But again, I2V would be helpful in sending the color of the traffic light to the car in advance.

Exactly, the car needs to slow down on a curve if the traffic device is too close, or if the curve is such that the car can't stop within the visibility distance.

In terms of seeing around corners, Waymo cars have perimeter sensors with cameras, radar and lidar on the sides of the car, pointed perpendicular to the car, to help see "around corners", since these sensors will detect objects before the entire car is fully in the path.

That is seeing around an intersection-type corner. Unless the sensor mounted on the outside of a curve looks across the car, it doesn't help with seeing around a curve. Even then, mounting the sensors 4 feet forward of the B pillar only provides 91 ms of earlier warning at 30 MPH. It does help when creeping past an obstruction where the cross traffic is tight to the obstruction.
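For anyone checking that 91 ms figure:

```python
# A sensor 4 ft further forward sees cross traffic 4 ft sooner; at
# 30 MPH that distance is covered in about 91 ms.
speed_ftps = 30 * 5280 / 3600              # 30 MPH = 44 ft/s
print(f"{4 / speed_ftps * 1000:.0f} ms")   # -> 91 ms
```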

I2V would help with colors and with prepping to stop on red, but doesn't help with people not reacting to the green light. Red is a stop, but green is not a go. If the situation requires I2V (which is more than a sensor), then it can only be L5 (and it also means the road/intersection is currently unsafe).
 
You're right that radar and lidar can provide functionality especially when visibility is poor, but how do they help in determining the color of the traffic light? Determining the color of a traffic light is a visible-spectrum problem, so addressing these limitations needs camera-side improvements: potentially different camera sensors, camera placement, or vision NNs.

You are right that lidar and radar won't tell you the color of a traffic light. Like I said, I think you would need smart traffic lights to tell the car the color. That's pretty much the only way to 100% reliably get the color of the traffic light if the cameras can't see it. Alternatively, differently placed cameras could help, depending on their position. For example, a camera placed high on the roof might see over a truck and see a traffic light that a camera placed lower in the front windshield, like on Teslas, couldn't.
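The roof-camera point is just sight-line geometry. A quick similar-triangles check (all heights and distances here are illustrative, not real vehicle specs):

```python
# Sight-line check for the roof-camera point above. All heights and
# distances are illustrative, not real vehicle specs.

def can_see_over(cam_h, occluder_h, occluder_d, light_h, light_d):
    """Does the line from a camera at height cam_h to a light at
    (light_d, light_h) clear an occluder of height occluder_h at
    distance occluder_d?"""
    sight_h = cam_h + (light_h - cam_h) * occluder_d / light_d
    return sight_h >= occluder_h

# 2.5 m-tall van 10 m ahead, 5 m-high signal head 40 m ahead:
print(can_see_over(1.4, 2.5, 10, 5.0, 40))  # windshield-height cam: False
print(can_see_over(2.1, 2.5, 10, 5.0, 40))  # roof-mounted cam: True
```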
 
Not sure how you came to your conclusions about how the system operates. If there is a known intersection: it stops. If there is a traffic control device detected: it stops. Stopping where there isn't a light protects against the cases of pedestrians and improper cross traffic.

I think maybe you were referring to this warning:
"Yield signs or temporary traffic lights or stop signs (such as at construction areas)"

As in, atypical arrangements NOT in terms of unmapped intersections. Big difference, huge.

Yes, they are using maps to help and train. However, the system is being designed to not require them to function. In other words, it will be able to drive on a street the first time.

"I did test 2020.12.1 with new map, never panic with AP on while trying to hit new traffic light recently developed. Tried it 10 times and the car can see the traffic light but never gave me alert about it, while it always gave me alerts about old stop sign and traffic lights"
Mike Alani on Twitter
 
Yes, they are using maps to help and train. However, the system is being designed to not require them to function. In other words, it will be able to drive on a street the first time.

No, they are using maps PERIOD, and in your own words, if you use maps for ANYTHING other than routes it's not general.

Now they use them for Smart Summon, potholes, intersections, stop sign & traffic light control, etc. Map usage will only increase as AP's operating domain increases.

In fact I can probably find you saying worse quotes than that if I do some digging, such as: if it's not general then it's "worthless", "useless", etc.

That ship has sailed for the coulda-woulda-shouldas.
What you are saying is simply another myth and fairytale among the other myths and fairytales from 2016.
It sounds like the constant excuse of "it's just being trained now, soon it's going to be so good!"

If any of those myths were actually true then we would have had a Level 5 car in 2018. Radar wouldn't be used anymore; as Elon said in 2016, they would have radar that's better than lidar; and they wouldn't be adding a radar heater to the Model Y.
 
Seeing the color of traffic lights would require I2V, like smart traffic lights that send the info to the car.

Which will work for exactly one week before somebody figures out that they can send out a stronger signal saying the light is green in random directions, and make all the autonomous vehicles get T-boned.

I2V is a dead end. It cannot be feasibly made safe and secure, for approximately exactly the same reason that DVD and Blu-Ray copy protection is hopeless.

Still, I don't see why people are worried about not being able to tell the color of a traffic light. All you need is multiple cameras, to ensure that at least one of them is pointing in a direction where flare doesn't block the view of the light. Also, modern anti-flare coatings should make that problem mostly moot anyway, assuming the cameras are designed well, and assuming you don't let your window get too greasy. :)
 
"I did test 2020.12.1 with new map, never panic with AP on while trying to hit new traffic light recently developed. Tried it 10 times and the car can see the traffic light but never gave me alert about it, while it always gave me alerts about old stop sign and traffic lights"
Mike Alani on Twitter

I can confirm. I did a similar test today. A certain intersection with a very obvious stop sign, that is detected and visualized... however, an alert will never go off at that spot... At other intersections I can get the alert to go off every time.
 
Imgur

What do you guys think of this intersection?

Scroll down to see all the images. I think the upcoming feature update would drive right into the intersection and then stop in the intersection... what do you think?

With Smart Summon we have to update parking lots in OpenStreetMap.

With AP we'll simply have to carry a white spray can with us to fix the lines. :p