Welcome to Tesla Motors Club

Elon Confirms that the FSD sneak peek is for cars with HW3 only

Why don’t they just geocode the stop signs?

90% of my driving is to work and back. If I could manually enter stop signs, lights, and which lane to prioritize, it could be a practically hands off experience already.
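Just to illustrate the idea, a hand-entered stop-sign list plus a GPS proximity check is only a few lines. This is a toy sketch; the coordinates and the 100 m radius are made-up placeholders, not anything from Tesla's actual stack:

```python
import math

# Hypothetical hand-entered stop signs along a commute (lat, lon).
MY_STOP_SIGNS = [
    (37.4275, -122.1697),
    (37.4301, -122.1652),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def upcoming_stop(lat, lon, radius_m=100.0):
    """Return the nearest mapped stop sign within radius_m of the car, or None."""
    hits = [(haversine_m(lat, lon, s_lat, s_lon), (s_lat, s_lon))
            for s_lat, s_lon in MY_STOP_SIGNS]
    near = [h for h in hits if h[0] <= radius_m]
    return min(near)[1] if near else None
```

Of course, as the replies below point out, this only works as long as the real world matches the hand-entered list.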

Solving computer vision means that if situations change (construction, someone runs over a stop sign), the car will do the right thing. If it relies on pre-programmed information, it's going to fail when the real world doesn't match.
 
They may already be using mapping data for traffic lights and stop signs. Earlier this year, hackers found that the prototype firmware only fully detected traffic lights and stop signs that were mapped. That may or may not have changed since.

Lofty goals about recognizing and understanding the world purely by vision are all well and good, but at the end of the day it has to work ...
 

Yeah, pretty sure speed limits are currently based on a database. Where I live, there's lots of construction on an interstate that has reconfigured a lot of exit ramps, and the car gets quite confused in these parts, including sudden braking on the highway when it thinks it's on an old local road with a much lower speed limit.

So.... today both techniques may or may not work "at the end of the day", but I think their long-term strategy focusing on AI vision is the right one.
 
1 - It seems that it's only seeing the stop signs and stop lights from a relatively short distance. I'm curious as to how far out the car is going to react to these signs and signals. It's fine if you're going 35mph but what about when going 45/50+? It seems like the stops are going to be extremely abrupt.
The obvious question is: why are you assuming they do not see the stoplight much earlier than the point at which the UI decides to display it to the user? In other words, just because it is not on the UI does NOT mean it has not been detected long before that.
 
Indeed, the fact that we've had red light warnings for a long time demonstrates that the car is aware of more than what is being exposed on the MFD.
OTOH, the red light warning (the few times it worked at all) always came very late for me. Also, when approaching a red light with stopped cars the car starts slowing down way too late for my taste when driving faster than 35 mph, which again seems to indicate that the car currently cannot "see" very far ahead. Let's hope that changes.
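For a rough sense of how far ahead the car would need to "see": using the textbook stopping-distance formula d = v²/(2a) with a comfortable ~2.5 m/s² deceleration (my assumption, not a Tesla spec), the required distance grows quickly with speed:

```python
def stopping_distance_m(speed_mph, decel_ms2=2.5):
    """Distance to brake to a stop at constant deceleration: d = v^2 / (2a)."""
    v = speed_mph * 0.44704  # mph -> m/s
    return v * v / (2 * decel_ms2)

for mph in (35, 45, 55):
    print(f"{mph} mph -> {stopping_distance_m(mph):.0f} m")
# 35 mph -> 49 m
# 45 mph -> 81 m
# 55 mph -> 121 m
```

So going from 35 to 55 mph more than doubles the distance at which the light has to be recognized for a smooth stop.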
 
It seems that the car, when stopping itself in a non-emergent situation, prefers to use regen braking where possible. And it's most efficient to maximize the energy you get back when doing so (in that you get more energy back into the battery with higher current over a short duration than low current over a long duration), which might explain the late-seeming deceleration on TACC. It took me a while to get used to that also, because I would generally tend toward a more gentle deceleration curve when left to my own devices.

As for the RLW being "late", keep in mind that it was not intended as a "you are approaching a red light" warning (that would be happening all the time and be supremely annoying), but a "you are about to run a red light" warning. I should hope you're not getting that a lot, as you don't want to be running reds in your $50,000 car.
 
It seems that the car, when stopping itself in a non-emergent situation, prefers to use regen braking where possible. And it's most efficient to maximize the energy you get back when doing so (in that you get more energy back into the battery with higher current over a short duration than low current over a long duration), which might explain the late-seeming deceleration on TACC. It took me a while to get used to that also, because I would generally tend toward a more gentle deceleration curve when left to my own devices.
That really makes no sense. Regen braking is more efficient when decelerating gently, not less (the higher the current, the more heat loss).
 
How would computer vision identify a stop sign that has been run over?
How do your human eyes tell it was run over? Can you see parts of it? The shape of the back of it? Part of the post still standing, perhaps, with the sign still attached? If at a 4-way stop, can you see the other three stop signs (i.e. the shape of the one directly across from you)? Did the car in front of you stop, or those to the left/right/across, as a context clue?

Could the NN/ML be fed thousands of knocked-over stop signs to help it recognize those situations?
 
The obvious question is: why are you assuming they do not see the stoplight much earlier than the point at which the UI decides to display it to the user? In other words, just because it is not on the UI does NOT mean it has not been detected long before that.
Why would you assume that it does actually see it before it displays it? Why not display it when the car first sees it?

Also - when it first displays the light, it's very vague and small and as you get closer it gets more accurate. I don't think it's too far fetched of a thought to assume that it's displaying it when it sees it.

But if you have a good logical reason to think otherwise I'd definitely be interested in hearing it.
 
How would computer vision identify a stop sign that has been run over?

I don't know if there is any evidence of Tesla doing this, but I think we need to have two systems that keep each other up to date the same way a human does it.

In this design, the map would have the location/information of every stop sign, and the vision system would be programmed to see stop signs.

If the two systems ever disagreed the map would take precedence, and it would alert the Tesla mothership that the vision system didn't see what was in the map.

Then a human would review the footage sent back to see what happened. In this case they would call the city road maintenance crew to fix the sign.

If road construction removed the sign on purpose, then Tesla would update the map. It wouldn't be a large navigation map update, just a map tile.
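There's no evidence Tesla structures it this way, but the precedence-plus-reporting logic described above could be sketched like this (the Observation fields and report strings are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Observation:
    sign_id: str
    in_map: bool          # the map tile says a stop sign is here
    seen_by_vision: bool  # the camera pipeline detected it

def reconcile(observations):
    """Map takes precedence; any disagreement gets flagged for human review."""
    actions = []
    for o in observations:
        if o.in_map and not o.seen_by_vision:
            # Stop anyway; sign may be knocked over or occluded.
            actions.append((o.sign_id, "stop", "report: sign missing or occluded"))
        elif o.seen_by_vision and not o.in_map:
            # Stop anyway; map tile may be stale.
            actions.append((o.sign_id, "stop", "report: unmapped sign, update tile?"))
        elif o.in_map and o.seen_by_vision:
            actions.append((o.sign_id, "stop", "agree"))
    return actions
```

Note that in either disagreement case the conservative action (stop) wins; only the follow-up reporting differs.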
 
Why would you assume that it does actually see it before it displays it? Why not display it when the car first sees it?

Also - when it first displays the light, it's very vague and small and as you get closer it gets more accurate. I don't think it's too far fetched of a thought to assume that it's displaying it when it sees it.

But if you have a good logical reason to think otherwise I'd definitely be interested in hearing it.

I don't have HW3 yet so I can't test it out, but I wonder why it shows the lights the way it does.

It is great that they show all the stop lights, but in the pics I've seen there are cases where it shows all 5 lights. Why do I care about seeing all 5? Why not just the one for the lane I'm going to be in (assuming navigation is on)? Or a more simplified standard light UI?

I think that would be easier, and they could show it as soon as the light is detected. I don't think any of us knows for a fact whether the light is seen as it's shown in the visualization or before. With the exception, of course, of any EAP person who has braking on stop signs or stop lights; they would know, as they can simply see when the car starts to slow down.
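A lane-aware UI like that could be as simple as filtering the perception output. A toy sketch (the tuple format and lane labels here are made up, not anything from Tesla's stack):

```python
def light_for_my_lane(detected, my_lane):
    """Pick the detected signal whose lane assignment matches the planned lane.

    detected: list of (lane, state) tuples from a hypothetical perception output.
    Returns the state for my_lane, or None if no matching signal was detected.
    """
    for lane, state in detected:
        if lane == my_lane:
            return state
    return None

# With navigation on, only the relevant head would be rendered:
light_for_my_lane([("left", "red_arrow"), ("through", "green")], "through")  # -> "green"
```

The hard part, of course, is the lane assignment itself, not the filtering.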
 
That really makes no sense. Regen braking is more efficient when decelerating gently, not less (the higher the current, the more heat loss).
It makes sense in the context of the current flows expected to be sustained in a Tesla battery as opposed to, say, the one in a Prius. Yes, at higher current levels there are losses to heat, but as people who watch their charging closely well know, there are also intrinsic losses to charging from the wall which remain generally flat rather than proportional to the rate of charge. As such, given the relatively low amount of current generated by regen braking, I would assert that more is in this case objectively better than less. It would, though, be interesting to see some Science showing where the break-point is in terms of rate of charge between flat charging overhead and losses to Entropy concurrent with current flow rate.
 
Parasitic losses while charging from an outlet occur mostly in the AC-DC converter, which is not used for regen braking. If the car is cold, some shore energy may also be used for battery heating, which is not applicable while the car is driving. Also, recuperated power from regen braking can easily exceed the power that home AC outlets can deliver even at moderate deceleration (can be easily observed e.g. on the power meter in a Model S).

Bottom line: don't decelerate strongly thinking it's more efficient. It's not.
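A back-of-the-envelope check supports this: at constant regen power, I = E/(V·t), so the resistive heat I²R·t = E²R/(V²·t) actually shrinks as braking time grows. With made-up but plausible pack numbers (400 V nominal, 50 mΩ internal resistance):

```python
def regen_heat_loss_j(energy_j, brake_time_s, pack_v=400.0, pack_r=0.05):
    """Resistive (I^2*R) heat when returning energy_j over brake_time_s.

    Assumes constant regen power, so current I = (E / t) / V, and
    loss = I^2 * R * t = E^2 * R / (V^2 * t).
    """
    current = energy_j / brake_time_s / pack_v
    return current ** 2 * pack_r * brake_time_s

e = 300_000.0  # roughly the kinetic energy of a 2 t car at ~17.5 m/s, in joules
hard = regen_heat_loss_j(e, brake_time_s=5)     # 150 A -> 5625 J lost as heat
gentle = regen_heat_loss_j(e, brake_time_s=20)  # 37.5 A -> ~1406 J lost as heat
```

Same energy recovered either way, but the gentle stop wastes about a quarter as much in the pack's internal resistance.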
 
I'd be curious to see more around what Tesla understands about the lights. For example, I was making a left turn and waiting with the red arrow. Tesla saw both the regular light and the turn light (but there were two left turn arrows - the one straight ahead, and one to the left corner of the street). Tesla also saw the light left of the last left turn arrow on the corner of the street for the cross street direction.

I wonder if it can accurately tell right now which of those far-left lights it detected were for turning left and which one was for the cross traffic. The visualization doesn't show arrows (unless my eyes are just bad). But I would ASSUME/HOPE Tesla is actually seeing the arrow vs. regular circular light (but just doesn't display that in the visualization), so that it would have known the far-left light it detected wasn't an arrow light but the cross-traffic light.
 
Based on how simplistic the 3D models of the traffic signals are, I'd be not at all surprised if the car's computronics were able to distinguish arrow lights, even if the actual rendering of them is at a rather low level of detail. I have observed it, for example, showing a green light on the MFD when the arrow light on that signal was green, but zooming in showed that the actual UI rendering did not have an arrow.