Welcome to Tesla Motors Club

FSD Beta 10.13

You may be counting 20 seconds, but Chuck says 3 seconds in that video.
Wanted to go back and clarify this. Chuck is not saying the cameras have 3 seconds of visibility (due to obstructions or whatever). He's saying that if they have 80m range, that's how long a window you'd have to make the crossing. He says this at 2:45 in the video you linked.

That's completely different than a discussion about visibility. If this limitation actually applied, even with absolutely zero obstructions, and no creeping required, you'd be forced to always cross the road in less than 3 seconds. (It currently takes 6-8 seconds to cross three lanes, last I checked in the new 10.13 release candidate from Chuck's spy videos.)
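Chuck's 3-second figure is just range divided by speed. A minimal sketch of that arithmetic (the 60 mph oncoming-traffic speed is my assumption for illustration, not something from the video):

```python
# Sanity check of the "3 second window" claim: with a fixed detection
# range, the window to start and finish the crossing is range / speed.
# The 80 m range comes from the discussion; 60 mph is an assumed
# oncoming-traffic speed, not a measured one.

MPH_TO_MPS = 0.44704  # miles per hour -> metres per second

def crossing_window_s(detection_range_m: float, oncoming_speed_mph: float) -> float:
    """Seconds until a car first detected at the range limit arrives."""
    return detection_range_m / (oncoming_speed_mph * MPH_TO_MPS)

print(round(crossing_window_s(80, 60), 1))  # ~3.0 s, matching Chuck's figure
```

Note this is the window to complete the whole crossing, which is why a 6-8 second maneuver would not fit inside it.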
 
Wanted to go back and clarify this. Chuck is not saying the cameras have 3 seconds of visibility (due to obstructions or whatever). He's saying that if they have 80m range, that's how long a window you'd have to make the crossing. He says this at 2:45 in the video you linked.

That's completely different than a discussion about visibility. If this limitation actually applied, even with absolutely zero obstructions, and no creeping required, you'd be forced to always cross the road in less than 3 seconds. (It currently takes 6-8 seconds to cross three lanes, last I checked in the new 10.13 release candidate from Chuck's spy videos.)
I stand by my comments and your posting history regarding FSD beta speaks for itself.
 
  • Disagree
Reactions: Daniel in SD


 
Just leave it up to the neural nets. I bet I could do the velocity estimation at 200m with the B-pillar camera video feed alone.


These are no problem for me when looking at Chuck's top-mounted video feed.

The only obstruction issue I saw was at 10:07 in Chuck's video, where the vehicle stopped in obviously the wrong place (1-2 feet short of where adequate vision starts, with at least 8 feet of room still left to advance) and missed a vehicle mixed into the branches (maybe - it was tough for me to see with high confidence). Either way, it wasn't even close to being in the correct spot.

Spend some time watching the oncoming traffic in the video. It's amazing how good the human brain is.
Of course humans can do the velocity estimation, but there are no human eyes or brains doing the calculations.
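For what it's worth, the single-camera velocity estimate is mostly geometry: if you assume a typical car width, the apparent pixel width gives range via the pinhole model, and the range change between frames gives closing speed. The focal length and widths below are illustrative assumptions, not Tesla camera specs:

```python
# Hedged sketch of monocular closing-speed estimation: apparent width
# -> range via the pinhole model, range change over time -> speed.
# FOCAL_PX and CAR_WIDTH_M are illustrative assumptions.

FOCAL_PX = 1400.0   # assumed focal length, in pixels
CAR_WIDTH_M = 1.8   # typical sedan width

def range_m(pixel_width: float) -> float:
    """Pinhole-camera range estimate from apparent width in pixels."""
    return FOCAL_PX * CAR_WIDTH_M / pixel_width

def closing_speed_mps(px_w0: float, px_w1: float, dt_s: float) -> float:
    """Closing speed from the range change between two frames dt_s apart."""
    return (range_m(px_w0) - range_m(px_w1)) / dt_s

# A car that grows from 12.6 px to 14.0 px wide over 1 s has closed
# from about 200 m to about 180 m:
print(round(closing_speed_mps(12.6, 14.0, 1.0)))  # ~20 m/s
```

In practice a network would presumably fuse many frames and features rather than two width samples, but the example shows the 200 m case is not geometrically hopeless.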
 
I stand by my comments and your posting history regarding FSD beta speaks for itself.
I’m not sure what that means. I’m most interested in what I say being credible. I think it is pretty clear from my post history that:

1) I want FSD Beta to work well, since I bought it.
2) I don’t believe current deliveries do work very well.
3) I don’t believe FSD Beta is smooth or generally usable (meaning: with passengers).
4) I don’t think FSD Beta is all that important to Tesla; it presents significant liability and regulatory risks, and since they have very little real competition in this area, the cost/benefit is not clear. So as a stockholder, I think safety is key.
5) I think people underestimate what is possible with the current sensor suite. I am not so sure about the compute though.
6) I don’t think it is helpful when people artificially limit FSD Beta, by claiming limitations that are not substantiated by observations or what seems reasonable based on what we know.
7) I don’t want to give Tesla a pass on stuff that is actually a failure by any reasonable interpretation, otherwise we won’t ever have a good product to enjoy.
8) Out of a sample of at least 10 attempts, I don’t think they’ll complete Chuck’s turn successfully more than 90% of the time, within the next few weeks. And I hope I am wrong.
9) I think the progression of FSD improvement over the next few years will be a lot slower than people expect.

I guess we’ll see. Maybe I’ll be wrong and we’ll have Tesla robotaxis in 2024.

Of course humans can do the velocity estimation, but there are no human eyes or brains doing the calculations.
Right, but that wasn’t how this discussion started. Remember, people thought the Tesla could not see the oncoming traffic. They perhaps do not believe in Tesla.
 
I still think the camera hardware is the main issue holding back more rapid progression, but now they have to solve FSD with their 5+ year old FSD chip plus an inferior camera setup, because they have cornered themselves: soon there will be 3+ million "FSD capable" cars on the road with this hardware.

Imagine the logistics nightmare of releasing HW4 + a new camera setup that WILL solve FSD, when you then have to deal with upgrading millions of cars as well.
But how many of those cars have actually purchased FSD (and thus need your hypothetical HW4 upgrade)?
 
But how many of those cars have actually purchased FSD (and thus need your hypothetical HW4 upgrade)?
The point is not how many - probably only 10% of the cars Tesla has sold. The point is that an update to the hardware will most likely be necessary, which is perhaps not that difficult or expensive. But a redesign of where the front/side cameras are located - if needed - will be all but impossible.
 
You should go back to this intersection and look again. Detection RANGE may be an issue (I have no idea), given vehicle speeds, but obscuration is not an issue, even with the current limited sensor placements.
I don't need to. Chuck uses that corner case because it is right around the corner from his home. But there are probably 10-100 million occluded intersections, more or less blinded, that Tesla will need to solve besides Chuck's corner. You can play games with your micrometer and calculator on a video frame grab, but the reality is that a video camera can't see through objects blocking its view, nor do photons bounce off objects to be detected around a corner.
If the camera is blocked, Tesla FSD goes into a creep mode, and as Chuck has demonstrated many times, that creep has put him in danger of being hit, forcing him to take over quickly. Sometimes the creep works, sometimes not. That is not good enough for autonomous driving.
FSD should never need to creep to where it can see and then decide to go or just stop and give up.

Chuck's video alerted Tesla to several problems with ULTs. I'm only suggesting that any intersection occlusion that forces the car to creep to see is dangerous. If you don't like mounting a camera near the front of the car, how about a mirror. :)

Let's see what they do in 10.69 where Musk claims it works 9 out of 10 times.
 
I don't need to. Chuck uses that corner case because it is right around the corner from his home. But there are probably 10-100 million occluded intersections, more or less blinded, that Tesla will need to solve besides Chuck's corner. You can play games with your micrometer and calculator on a video frame grab, but the reality is that a video camera can't see through objects blocking its view, nor do photons bounce off objects to be detected around a corner.
If the camera is blocked, Tesla FSD goes into a creep mode, and as Chuck has demonstrated many times, that creep has put him in danger of being hit, forcing him to take over quickly. Sometimes the creep works, sometimes not. That is not good enough for autonomous driving.
FSD should never need to creep to where it can see and then decide to go or just stop and give up.

Chuck's video alerted Tesla to several problems with ULTs. I'm only suggesting that any intersection occlusion that forces the car to creep to see is dangerous. If you don't like mounting a camera near the front of the car, how about a mirror. :)

Let's see what they do in 10.69 where Musk claims it works 9 out of 10 times.
It doesn't matter where you mount the cameras: if you stop at the stop line (as legally required!), you need to creep for visibility. It's perfectly safe to creep.
 
I still think the camera hardware is the main issue holding back more rapid progression, but now they have to solve FSD with their 5+ year old FSD chip plus an inferior camera setup, because they have cornered themselves: soon there will be 3+ million "FSD capable" cars on the road with this hardware.

Imagine the logistics nightmare of releasing HW4 + a new camera setup that WILL solve FSD, when you then have to deal with upgrading millions of cars as well.
Only the cars with FSD purchased would get the free hardware upgrade, if Tesla determines that it is needed to fulfill the listed features. I already got one chipset upgrade on my Model S for free, as HW3 came out right when my Model S was being built with the remaining HW2.5 stock. That was a much easier retrofit than cameras, which is what I understand HW4 involves. I don't even know if we will get HW4 for free.
Anyway, I did suggest an easy way for Tesla to retrofit front cameras some time ago. They could redesign the headlights with a camera built in against the lens, then add an extension cable from the B-pillar camera on each side to the new one in the headlight. The retrofit would be simple and easy, maybe even doable with the Mobile Service truck. The software might not even need changing, as the only difference would be the camera's new view. It would replace the B-pillar camera, since that view would still be covered by the headlight camera aimed perpendicular to the car's line of travel. The lower mounting height of the headlight camera is not critical. With this mod, the nose of the car would never stick out in front of a camera's field of view.
 
It's perfectly safe to creep.
That's where we disagree. I've seen it creep into a T-bone collision too many times, unless I abort and back up quickly, just as Chuck had to do. It's what humans do because we don't have eyes at the front of the car. But unlike FSD Beta, humans can put it in reverse and back out of the lane, or step on it to get out of the way. I don't call that "perfectly safe".
In a perfect world every corner would have the line you reference, and in a perfect world there would be no occluded intersections. Wishful thinking is not a strategy to solve this problem, which some deny exists.

My guess is the Tesla team has resolved the ability to use the median 9 out of 10 times, but the car will still creep into danger and then abort. Or, as Chuck once observed, abort into a right turn and go around the block over and over in an endless loop until he had enough of that nonsense. I duplicated that experiment in my neighborhood.
 
Is it just me, or does FSD fail pretty badly on UPLs even with no trees or other obstructions? The attention that Chuck has brought to UPLs is great, but it’s not as if dead-simple UPLs work well either. My Tesla pulls right in front of oncoming traffic and blocks traffic in both directions if I allow it to attempt a UPL. It seems to think there is a median / suicide lane when there isn’t one. I am 0 for 20 on UPLs.
 
That's where we disagree. I've seen it creep into a T-bone collision too many times, unless I abort and back up quickly, just as Chuck had to do. It's what humans do because we don't have eyes at the front of the car. But unlike FSD Beta, humans can put it in reverse and back out of the lane, or step on it to get out of the way. I don't call that "perfectly safe"
In a perfect world every corner would have the line you reference, and in a perfect world there would be no occluded intersections. Wishful thinking is not a strategy to solve this problem, which some deny exists.

My guess is the Tesla team has resolved the ability to use the median 9 out of 10 times, but the car will still creep into danger and then abort. Or, as Chuck once observed, abort into a right turn and go around the block over and over in an endless loop until he had enough of that nonsense. I duplicated that experiment in my neighborhood.
I was taught to never back up at intersections because there could be a pedestrian behind you. Nobody is suggesting that the car should creep into a cross traffic lane.
Even if it had cameras on the front of the vehicle it would still need to creep on Chuck's ULT!
One problem with having cameras that can see more than the driver is that the driver has no way of knowing whether the path is clear; from the driver's seat, the car would seem to stop too far back.
Personally I think there should be a forward and side facing camera on the A pillar above the driver's head, this would allow the car to better look around stopped traffic too.
Remember when Teslas were supposed to creep and then back up to avoid danger? I even called Pepperidge Farm to see if they remembered.
That would be crazy dangerous to have the car shift into reverse without warning the driver. How would the driver know whether to be looking forward or backward? What if it backs into someone?
 
That would be crazy dangerous to have the car shift into reverse without warning the driver. How would the driver know whether to be looking forward or backward? What if it backs into someone?
 
Yeah, I remember reports of people witnessing this. I said it was a bad idea at the time too!
That video only seems to show it going back and forth a few inches for no reason at all. Seems safe enough but also very stupid.
 
but the reality is that a video camera can't see through objects blocking its view, nor do photons bounce off objects to be detected around a corner.
No it can’t. But no such object exists here, other than some light poles, and of course the hedge and trees if you don’t creep forward to the right spot.

It’s extremely common and completely normal to have to stop at a stop line, then creep forward to the right spot. Numerous stop signs in my residential neighborhood work this way, otherwise you just don’t have the visibility. And there are no hedges or anything.
If the camera is blocked, Tesla FSD goes into a creep mode, and as Chuck has demonstrated many times, that creep has put him in danger of being hit, forcing him to take over quickly. Sometimes the creep works, sometimes not. That is not good enough for autonomous driving
It needs to go to the right spot. What it should not do is creep all the way into the road, which is unnecessary. It should leave a decent margin, and there is a large window of adequate positions.
FSD should never need to creep to where it can see and then decide to go or just stop and give up.
I’m not sure what you mean. I think after the initial stop, it should quickly proceed to the correct spot like a human. And then go when clear.

Isn’t this what everyone does when they drive? It seems reasonable that it should drive the corner the same way a human would.
 
It's what humans do because we don't have eyes at the front of the car.
I don’t do this backing up thing. I only proceed as far as I need to, so I can see.
I've seen it creep into a T-bone collision too many times, unless I abort and back up quickly, just as Chuck had to do.
Yes, this seems terrible. The question is why it does this.

Anyway, it seems very clear to me that FSD Beta needs to understand when and how its vision is obscured. And I think short of hard-coding this intersection to stop in the right spot, that problem will need to be solved to generally solve turning (left or right). This turn has great visibility, so definitely there are plenty of other cases which are much worse!

There are so many turns where it is important to know how far you have to proceed to get visibility (and for most of these, while you may go into the road, there is no issue with being too close to traffic, even after creeping). We do this all the time automatically as humans. Just do some driving in any residential neighborhood with cars parked by the side of the road around intersections.
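How far you have to proceed falls out of simple sight-line geometry. As a toy model (flat ground, a single hedge corner, illustrative distances that are not measurements of Chuck's intersection), similar triangles give the maximum setback from the cross lane at which the needed down-road range is visible:

```python
# Toy sight-line geometry for creeping at an occluded junction.
# Model: the cross road runs along the x-axis; the camera approaches
# along the y-axis; a hedge corner sits hedge_lateral_m to the side of
# the camera's path and hedge_setback_m before the lane. All numbers
# are illustrative assumptions.

def max_camera_setback_m(hedge_lateral_m: float, hedge_setback_m: float,
                         needed_range_m: float) -> float:
    """Largest camera distance from the cross lane at which the sight
    line past the hedge corner still reaches needed_range_m down-road
    (similar triangles on the camera-corner-road sight line)."""
    return needed_range_m * hedge_setback_m / (needed_range_m - hedge_lateral_m)

# Corner 2 m to the side, 4 m before the lane, 80 m of range needed:
print(round(max_camera_setback_m(2.0, 4.0, 80.0), 2))  # ~4.1 m
```

In this toy case the camera must be within about 4.1 m of the lane, i.e. essentially level with the 4 m hedge line. A camera mounted a couple of metres behind the nose would then push the nose itself out toward the lane, which is the B-pillar complaint in a nutshell.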

I don’t know how/whether FSD Beta currently does this. Anyway, knowing what it can see, and knowing how to creep, seem to be critical capabilities.

I can do this from the existing video feeds for everything but special cases (which would also be a challenge for humans), so sensor placement is a non-issue for what is being discussed here.