Welcome to Tesla Motors Club

FSD Beta Videos (and questions for FSD Beta drivers)

In those situations, the car should go to the junction and stop - and then proceed if nobody is at the crossing. That is the correct "defensive driving" thing to do - especially at that junction with so many obstructions. In fact, I'd say that should be the driving policy.

Yeah. I don't think coming to a full stop is actually advisable (probably not in this situation, and in some situations that non-human action could increase the possibility of being hit from behind), especially if it results in a continued lack of visibility - but it's a complicated situation, and the exact sequence of actions will depend on the circumstances. There should definitely be a lot of slowing, and the car should be looking to the pedestrian for a signal (explicit or implicit) about intent, and be strongly biased towards yielding in this situation. Driving is hard!

The speed and rate of approach to such an intersection, for me at least, would be determined by whether or not someone was following me. If no one is following, I could fly up to the turn pretty aggressively and make abrupt decisions based on what I find once I can see. If someone is following, I need to slow down well in advance to get the person behind me to slow down, since there is a pretty high probability I will not be able to clear the traffic lane.

A lot of driving behavior is based on understanding what you can't see! It's interesting. Situations like this happen all the time when driving but I guess it's easy to forget.
 
... The Tesla did not need to see the pedestrian at every point in time to continue to place him on the screen and make reasonable path predictions. Just plot out that pedestrian's path and know that he's in the vicinity of the crosswalk.

I think (correct me if I'm wrong) that Tesla is currently doing more of a snapshot model of its environment. It seems to forget about objects that it's detected as soon as they are blocked from view. Object persistence doesn't appear to be part of its current strategy.

...guy was in a strip mall parking lot, apparently carrying a drink, and based on his attire and his trajectory in the parking lot, was probably not going to be getting into a car in the parking lot
In any case, it's doubtful it's predicting what the pedestrian is likely to do in human terms. Are any autonomous vehicles able to make those kind of predictions?

The colours do indicate it's making decisions about direction and path-crossing of other vehicles, so it is doing path prediction (just not object persistence?)
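To make "just plot out that pedestrian's path" concrete, here's a toy constant-velocity sketch - purely illustrative, not how Tesla (or anyone) actually does it, and every number in it is made up:

```python
# Toy constant-velocity path prediction for a tracked pedestrian.
# Just an illustration of the idea being discussed, not a real AV stack.

def predict_positions(observations, dt, horizon_steps):
    """Extrapolate future (x, y) positions from the last two observations."""
    (x0, y0), (x1, y1) = observations[-2], observations[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * k * dt, y1 + vy * k * dt) for k in range(1, horizon_steps + 1)]

def crosses_region(path, x_min, x_max, y_min, y_max):
    """True if any predicted point falls inside a rectangular crosswalk region."""
    return any(x_min <= x <= x_max and y_min <= y <= y_max for (x, y) in path)

# Pedestrian walking toward a crosswalk spanning x in [10, 12], y in [0, 3]
seen = [(4.0, 1.5), (5.0, 1.5)]            # about 1 m/s in +x, sampled at 1 s
path = predict_positions(seen, dt=1.0, horizon_steps=10)
print(crosses_region(path, 10, 12, 0, 3))  # prints True: plan to yield
```

Even something this crude says "he's heading for the crosswalk" several seconds before he gets there - the whole point of the discussion above.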
 
Successive Turns & Busy Residential Street - 2020.48.10.1 - 15 Dec 2020 - 2:03 - James Locke
Somewhat similar to the Brandon Tweet earlier. Note the very slight swerve at 1:16 to avoid an open truck door.
[Screenshot from Locke's video: the open truck door at 1:16]
 
I don't think coming to a full stop is actually advisable (probably not in this situation, but in some situations that non-human action could increase the possibility of being hit from behind),
Absolutely stop and go in this situation. If you can't see the junction properly but the pedestrian light is green, you should absolutely stop in this kind of situation. In fact, that electrical box is directly in front of the crosswalk, and anyone standing there waiting to cross can't be seen.

Nobody should follow you so closely as to come and hit from behind. They should expect you to stop for pedestrians at cross walk anyway. Also, I'd absolutely prefer getting hit from behind than running over people.
 
Absolutely stop and go in this situation.

I think it's easy to misinterpret meaning in a forum context like this. I think we're saying essentially the same thing. You proceed, possibly extremely slowly, so as to aid visibility. You can't just sit there before the crosswalk on the street you're coming from hoping that you'll be able to see around the massive metal box.

So, it may involve coming to a stop. But since the light is green, there's no reason to stop, unless you determine there is someone in the crosswalk you must yield to. And it would be legal to proceed into the intersection (since you must do so in order to be able to see) as you turn, and then stop, to allow the pedestrian to cross.

I certainly would not stop in this situation. But I'd probably be going 3-5 mph, or slower, possibly going down to 0 at some point during the turn, not 14 mph. And I'd be going as fast as I can, up until the point when I should be going much slower. The speed would be whatever is appropriate, between 15 mph and 0 mph, and would vary continuously between those limits based on circumstances.
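To put rough numbers on "whatever is appropriate between 15 mph and 0 mph": the underlying physics is just stopping distance. A back-of-envelope calculation (the deceleration and reaction time here are illustrative guesses, not anything from Tesla or any standard):

```python
import math

# Back-of-envelope relation between clear sight distance and safe speed.
# With reaction time t and braking deceleration a, stopping distance is
# d = v*t + v**2 / (2*a); solving the quadratic for v gives the speed cap.
# All numbers are made-up illustrative values.

def safe_speed_mph(sight_distance_m, decel=3.0, reaction_s=0.5):
    """Max speed (mph) that still lets you stop within sight_distance_m."""
    a, t = decel, reaction_s
    v = -a * t + math.sqrt((a * t) ** 2 + 2 * a * sight_distance_m)  # m/s
    return v * 2.237  # m/s -> mph

for d in (1, 3, 10, 30):
    print(f"sight {d:>2} m -> about {safe_speed_mph(d):.1f} mph")
```

With these made-up numbers, a metre or two of clear sight distance puts you right in that 3-5 mph range, and roughly 10 m of visibility corresponds to about 14 mph - which is why 14 mph past an obstructed crosswalk feels wrong.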

Nobody should follow you so closely as to come and hit from behind.

No one should, but defensive driving requires that you drive in such a way so as to minimize the chance of them doing so, no matter how closely they are following. That behavior is all I was describing. There's not any formula for it. You just make sure (as much as possible) they don't hit you.

They should expect you to stop for pedestrians at cross walk anyway.

They should, but often you can identify in advance the driver who is not going to expect you to stop.

Are any autonomous vehicles able to make those kind of predictions?

I sure hope so. Is any AV company attempting to develop an "understanding" of human behavior? Perhaps not. But there are probably AVs looking at behavior, orientation with respect to the crosswalk, current speed, and hand signals from pedestrians, and making decisions accordingly. It's essentially required of an AV to be able to understand and interpret human signals. I don't know what the current state of the art is, but all of this is certainly going to be required - there is no way to avoid it.

It seems to forget about objects that it's detected as soon as they are blocked from view. Object persistence doesn't appear to be part of its current strategy.

That garbage truck video somewhere still showed the garbage cans on the opposite side of the truck after it passed it. Not sure if that is because it was able to see under the truck (I doubt it...it was a garbage truck!). Object persistence is really important, of course. The idea is to build as good a model of the entire environment as possible. For that you have to have object persistence.

For example, imagine FSD identifying a child running towards the street, but as the child enters the street, she passes in front of a parked car, which blocks your view (and FSD's view), and the parked car is low enough that it is not possible to see the child's feet. In that case, you have to have object persistence. The child is not gone simply because she is no longer in view. The car (just like a human) must react accordingly. This sort of situation happens all the time, many times a day. It's a super easy problem for a human, and if the car is high enough, the child is not lost from view and can be tracked via the feet. I'm not sure if FSD has this capability yet - I am surprised no one with the alpha/beta has set up situations specifically to test for it, but it will need it before wide release. (I apologize if I've missed the video of this!)

In the end, as was mentioned, driving behavior is often determined by what you can't see currently but have seen in the past. So persistence is very important.
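The "child behind the parked car" case is basically the textbook motivation for track coasting. A toy sketch of the idea - not Tesla's implementation (real trackers use Kalman filters and proper data association), and every name and number here is invented:

```python
# Minimal sketch of "object persistence": when a tracked object (the child
# behind the parked car) stops being detected, coast the track forward on
# its last known velocity for a while instead of deleting it immediately.

class Track:
    def __init__(self, x, y, vx, vy, max_coast=15):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy
        self.missed = 0          # consecutive frames with no detection
        self.max_coast = max_coast

    def update(self, detection, dt=0.1):
        """detection is (x, y), or None when the object is occluded."""
        if detection is not None:
            nx, ny = detection
            self.vx, self.vy = (nx - self.x) / dt, (ny - self.y) / dt
            self.x, self.y = nx, ny
            self.missed = 0
        else:
            # No detection: predict position from last velocity ("coast")
            self.x += self.vx * dt
            self.y += self.vy * dt
            self.missed += 1

    @property
    def alive(self):
        return self.missed <= self.max_coast

# Child seen twice, then occluded by a parked car for 5 frames
t = Track(0.0, 0.0, 1.0, 0.0)
t.update((0.1, 0.0))
for _ in range(5):
    t.update(None)
print(round(t.x, 2), t.alive)  # prints "0.6 True": still tracked, position extrapolated
```

The key design point is simply that occlusion increments a "missed" counter instead of killing the track, so the planner still has a (predicted) child in the model of the world.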

we need a release of FSD to the public soon

you're making my point.

Release to the public soon? No, I don't think I'm making that point for you. Quite the opposite!
 
Since this is a "question for beta FSD drivers" thread, I guess I will ask a question:

Can someone who has the beta test (not with random pedestrians) the ability of FSD to track people who are obscured from view by parked vehicles, except for their legs? Find a large parked van, have a friend stay hidden, then start walking in front of the parked vehicle, perpendicular to the traffic lane and towards it, as the Tesla approaches, with the pedestrian's body completely obscured from view. Obviously don't do any testing where the friend actually walks into the path of the Tesla (we have to assume the Tesla will not stop) - just enough to see whether the Tesla reacts to the trajectory of the (mostly) hidden pedestrian.
 
Since this is a "question for beta FSD drivers" thread, I guess I will ask a question:

Can someone who has the beta test (not with random pedestrians) the ability of FSD to track people who are obscured from view by parked vehicles, except for their legs? Find a large parked van, have a friend stay hidden, then start walking in front of the parked vehicle, perpendicular to the traffic lane and towards it, as the Tesla approaches, with the pedestrian's body completely obscured from view. Obviously don't do any testing where the friend actually walks into the path of the Tesla (we have to assume the Tesla will not stop) - just enough to see whether the Tesla reacts to the trajectory of the (mostly) hidden pedestrian.

Push a life-sized inflatable Santa out. Or Rudolph would be better as Tesla swerves for deer.
(I'm not being facetious, it's a valid way to test)
 
Push a life-sized inflatable Santa out. Or Rudolph would be better as Tesla swerves for deer.
(I'm not being facetious, it's a valid way to test)

Yeah, I guess you could. But remember, I'm assuming the Tesla would slow down if an object comes out (may or may not be a valid assumption, and not too interested in testing it). I'm more interested in if the Tesla slows down BEFORE the object becomes visible in its entirety (will the presence of moving feet, which most drivers will notice and react to, be enough to have the Tesla identify movement & respond?).

The idea is to avoid an emergency stop, or a situation where there may be insufficient warning to be able to stop in time, even if the Tesla immediately reacts to the pedestrian when he appears from behind the parked vehicle.

It's probably a little more difficult for the Tesla than it is for most drivers, since the cameras are slightly higher than a driver's eyes, which makes it slightly harder to see under vehicles.
 
The car should have seen him, and yielded. There's no excuse for this failure.

For myself, and others like me, that's the entire point of FSD: so that we don't have to torture ourselves over some mistake we made while driving. I wouldn't have seen this pedestrian because the box was blocking the view, and he was too far from the crosswalk for me to see him before turning.

In this particular case I agree with you, but I also agree with Mike.

Lots of mistakes led to this situation:

Putting a box like that so close to the crosswalk.
The pedestrian not realizing the big box was blocking anyone from seeing him.
The pedestrian choosing where to cross.
The car not properly tracking the pedestrian from when it did see him.
The car not properly handling a crosswalk signal with due diligence.
The car failing to emergency stop.

This is exactly how most true accidents happen. When multiple people/things make mistakes.

If I were the car, the lesson I would take away is to always pause to check ANYTIME the crosswalk signal is on, since someone likely pushed the button. Where I live, most if not all of the crosswalks are activated by pushing a button, so it's rare for the crosswalk symbol to be on without someone in it somewhere - probably hiding behind something, just waiting to step out.
 
The bush isn't blocking the view of the crosswalk. I was just commenting on the intersection design where a pedestrian crossing at the crosswalk could be behind the giant box. It would be a good test for FSD because it would be able to see the pedestrian approaching the intersection but then the view of the pedestrian would be blocked by the box. I was wondering if the path prediction would still assume the pedestrian was going to cross the crosswalk even when they were out of view.

It does not appear that at any point the electrical box blocks the sidewalk.

The bush is blocking the view of a potential pedestrian crossing, though. That bush takes up a significant amount of the view. The tan-ish sign columns are not that wide, and I think most humans would be noticed if that bush were not there; i.e., a human would not easily (or for very long) stay hidden behind the tan-ish columns.

Hope this helps explain my comment.

[Screenshot: the intersection, showing the bush, electrical box, and sign columns]
 
Since this is a "question for beta FSD drivers" thread, I guess I will ask a question:

Can someone who has the beta test (not with random pedestrians) the ability of FSD to track people who are obscured from view by parked vehicles, except for their legs? Find a large parked van, have a friend stay hidden, then start walking in front of the parked vehicle, perpendicular to the traffic lane and towards it, as the Tesla approaches, with the pedestrian's body completely obscured from view. Obviously don't do any testing where the friend actually walks into the path of the Tesla (we have to assume the Tesla will not stop) - just enough to see whether the Tesla reacts to the trajectory of the (mostly) hidden pedestrian.
Excellent idea. The beta testers who have run the most scenarios like that are AIDRIVR and his wife. I have not seen any Twitter account for him and have no indication he follows this thread. You might want to post a comment with this proposed test on his most recent YouTube video.
 
Push a life-sized inflatable Santa out. Or Rudolph would be better as Tesla swerves for deer.
(I'm not being facetious, it's a valid way to test)
Oddly in a way this sorta exposes how inferior AI actually is and will be for years. Even a 4 year old would instantly identify it as an inflatable object and NOT a human or deer. So we would know we could actually hit this with almost NO consequences and would only laugh.
 
An alert good human driver, who would have definitely seen this pedestrian about 5 seconds before making the turn, and made predictive inferences about them being on a collision course at the intersection (guy was in a strip mall parking lot, apparently carrying a drink, and based on his attire and his trajectory in the parking lot, was probably not going to be getting into a car in the parking lot), would have taken appropriate action and would have already slowed down and been looking to ascertain intent before beginning to turn! This is all easy stuff to do.

Brandon was there, and, if anything, I would say he is being hyper-alert. He was surprised (scared) to see the pedestrian step off the curb as he rounded the corner.

Once we slow the video down and we know a human is there it's easy to extract him from the scene. While driving at full speed, he is only visible for a brief time between the sign posts. If you weren't looking there at that exact time, you would miss him. Our focus is constantly scanning the scene from left to right to see if other vehicles might be coming from the left, for pedestrians, etc.

Not everything is being displayed on the UI. We don't know what parameters determine what they display. Maybe an object has to be visible for a certain amount of time if it's not an immediate threat. It's possible FSD missed him between the posts, but we can't know that for sure.
 
Oddly in a way this sorta exposes how inferior AI actually is and will be for years. Even a 4 year old would instantly identify it as an inflatable object and NOT a human or deer. So we would know we could actually hit this with almost NO consequences and would only laugh.
...until it learns the difference using visual clues just like humans do. How long will that take? That is the question we are all speculating on here. Frankly, it is sort of a moot point in that I think we all agree it will happen at some point. I believe within a year. You do not. We can argue all day long (which the last 50 pages or so can attest), but it will still just be conjecture and speculation. It happens when it happens and the driving world will be a MUCH safer place when it does. Heck, it already is as plain old Autopilot has proven statistically. It sure has saved my butt more than once. Can't wait to see FSD when it is released to the masses.

Dan
 
Brandon was there, and, if anything, I would say he is being hyper-alert. He was surprised (scared) to see the pedestrian step off the curb as he rounded the corner.

Yes Brandon seems like a pretty responsible and alert driver. I suspect he easily would have noticed and dealt with the pedestrian if he had not been both driving the vehicle and monitoring FSD. Monitoring FSD is substantial extra load on top of the driving task right now! So it is easier to miss things when that additional responsibility is added, and you have to do that AND drive the vehicle. I think most people who have used AP have probably had a similar experience (just not on city streets, so probably not involving a pedestrian) - I know I have!
 
Yes Brandon seems like a pretty responsible and alert driver. I suspect he easily would have noticed and dealt with the pedestrian if he had not been both driving the vehicle and monitoring FSD. Monitoring FSD is substantial extra load on top of the driving task right now! So it is easier to miss things when that additional responsibility is added, and you have to do that AND drive the vehicle. I think most people who have used AP have probably had a similar experience (just not on city streets, so probably not involving a pedestrian) - I know I have!

The extra load is in concentrating on the environment more intently, looking for mistakes in perception the car might make. The car is handling most of the driving tasks. He would be more likely to see something that's amiss than someone who was driving normally. Especially during turns, all of the testers are taking extra effort to make sure the car is performing properly.

A lot of people feel that using AP leaves them feeling less drained on long drives.