FSD Beta Videos (and questions for FSD Beta drivers)

Huawei's Autopilot, which will be released in Q4 on the 2021 Arcfox Alpha S, works anywhere in China.
Mobileye's SuperVision, which will be released on the 2021 Zeekr 001, works anywhere in the world.

They are all doing that without 100k or even 1k cars roaming around. You're simply wrong from every angle. The industry is doing more simulation and less real-world data collection, not the other way around.

Will be, will be, will be... Elon says December.
 
Regarding brake lights, there are certain lighting conditions and brake light designs where it is easy to mistake a brake light for being on. Hopefully the car only uses the brake light as a hint that something may be happening, so a false positive on a 'brake light' does not cause the car to react adversely.

That got me wondering whether the visualisation might provide better feedback if it showed brake lights whenever it detects a vehicle slowing or stopped, regardless of whether the brake light is actually on. That would give a true indication of what the car thinks rather than just a mimic of what it sees.
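As a rough illustration of the "hint, not trigger" idea (the weights, threshold, and field names below are invented, not anything from Tesla's stack), you could fuse the lamp detection with the tracked deceleration and drive the visualisation off the fused belief rather than the raw detection:

```python
# Illustrative sketch only: treat a detected brake light as soft evidence,
# combine it with tracked deceleration, and visualize the fused belief.
from dataclasses import dataclass

@dataclass
class LeadVehicle:
    measured_decel_mps2: float  # deceleration estimated from tracking
    brake_light_score: float    # 0..1 classifier confidence the lamp is lit

DECEL_WEIGHT = 0.7        # actual motion carries most of the weight
BRAKE_LIGHT_WEIGHT = 0.3  # lamp detection is only a hint (glare, odd designs)
SLOWING_THRESHOLD = 0.5

def believes_slowing(v: LeadVehicle) -> bool:
    decel_evidence = min(max(v.measured_decel_mps2 / 3.0, 0.0), 1.0)
    belief = DECEL_WEIGHT * decel_evidence + BRAKE_LIGHT_WEIGHT * v.brake_light_score
    return belief >= SLOWING_THRESHOLD

def render_brake_light(v: LeadVehicle) -> bool:
    # Show what the car believes, not just what the camera saw.
    return believes_slowing(v)
```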
 
#FSDBeta 9.0 - 2021.4.18.12 - Unprotected Left Turns Stuck in a Navigation Loop

I need some more eyes on this to help me understand what is going on. Check it out.
Interesting. Maybe Tesla is starting to code in measures to avoid difficult unprotected lefts, and just isn't doing it well so far? The ability to re-route on the fly to account for missed turns and unexpected traffic and road conditions is something that will be necessary eventually.
 
There's no confusion. The problem is that you're unable to grasp simple simulation concepts. Just weeks ago you were saying data augmentation is the same as simulation.



This is something that you seemingly are unable to grasp. The industry is not going the other way; they are not trying to get more real-world data.
The opposite is completely true. They are improving the simulation (B) to the point that they need less real-world data (A). The more realistic and complex B is, the less A is required, and the more they can improve their software exponentially using more B.

You simply can't grasp that, which is why you run around saying "Simulations are not creating weird cases that may only happen in Fargo, South Dakota because Waymo engineers haven't come across those weird cases before" or "simulation doesn't replace real world edge cases".

Sim does exactly that. Not only is the AV industry moving towards more complex and realistic simulation, as seen with Waymo's Simulation City; the AI industry at large is doing so as well.

Your sim can get so good that you only need a tiny amount of real-world data as a bootstrap, or none at all.
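For what it's worth, here's a minimal sketch of the domain-randomization idea behind that claim (the parameter names and ranges are made up, not from any company's actual pipeline): every simulated episode perturbs the world so the policy can't overfit to one idealized sim.

```python
# Minimal domain-randomization sketch (illustrative only). Each simulated
# episode perturbs physics and sensor properties so the trained policy has
# to generalize instead of memorizing one idealized world.
import random

def sample_sim_params():
    return {
        "road_friction": random.uniform(0.4, 1.0),       # wet vs. dry pavement
        "camera_noise_std": random.uniform(0.0, 0.05),    # sensor noise
        "actuator_latency_s": random.uniform(0.0, 0.15),  # control lag
        "lead_car_aggression": random.uniform(0.0, 1.0),  # scripted agent style
    }

def train(policy_update, num_episodes=100_000):
    """Run episodes in randomized worlds; policy_update is any learner."""
    for _ in range(num_episodes):
        params = sample_sim_params()
        policy_update(params)  # roll out one episode under these conditions
```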

"Today, a team of researchers from Facebook AI, UC Berkeley, and Carnegie Mellon University’s School of Computer Science are announcing Rapid Motor Adaptation (RMA), a breakthrough in artificial intelligence that enables legged robots to adapt intelligently in real time to challenging, unfamiliar new terrain and circumstances. RMA uses a novel combination of two policies, both learned entirely in simulation"








Huawei's Autopilot, which will be released in Q4 on the 2021 Arcfox Alpha S, works anywhere in China.
Mobileye's SuperVision, which will be released on the 2021 Zeekr 001, works anywhere in the world.

They are all doing that without 100k or even 1k cars roaming around. You're simply wrong from every angle. The industry is doing more simulation and less real-world data collection, not the other way around.


I am not bashing simulation's usefulness. In fact, I said years ago on this forum how useful reinforcement learning in a simulator could be for helping develop the end-to-end architecture of driving policy models, because of the extreme benefit of iteration via self-play. What still matters in that regard is captured in a quote taken directly from the Facebook AI article you linked to:

However, a number of challenges emerge when these skills are first learned in simulation and then deployed in the real world. The physical robot and its model in the simulator are often different in small but important ways. There might be a slight latency between a control signal being sent and the actuator moving, for example, or a scuff on a foot that makes it less slippery than before, or the angle of a joint might be off by a hundredth of a degree.

The physical world itself also presents intricacies that a simulator, which is modeled on rigid bodies moving in free space, cannot accurately capture. Surfaces like a mattress or a mud puddle can deform on contact. An environment that’s fairly standardized in simulation becomes much more varied and complex in the real world, moreso when one factors the multitude of terrains that can exist in both indoor and outdoor spaces. And of course, factors in the real world are never static, so one real-world environment that a legged robot is able to master can be completely different from another.

So, first of all, before getting into their solution to this issue - I hope you can acknowledge the issues they bring up here. Even with advanced simulations, there are differences from "reality", and the trained model will fail on deployment without any additional consideration of how the model differs from the real world.

Do you notice that's what I've been saying? Even with intricate simulations, they know the model will fail at points in the real world because those simulations can't cover everything.

Now, their solution, from what I gather, is creating a secondary model that handles the errors between expected and actual joint movement. This essentially acts as an adaptive model that adjusts the robot joints' output torques when the expected output joint angles are off (because, for instance, the robot steps on something softer than expected).
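To make that two-policy structure concrete, here's a rough sketch of how I read it (my own illustration, not Facebook's RMA code; the dimensions and layer sizes are arbitrary):

```python
# Sketch of the two-policy idea: a base policy conditioned on a latent
# "extrinsics" vector, plus an adaptation module that infers that vector
# purely from the recent state/action history at deployment time.
import torch
import torch.nn as nn

class BasePolicy(nn.Module):
    """Maps (observation, extrinsics estimate) -> joint torques."""
    def __init__(self, obs_dim: int, ext_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + ext_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs: torch.Tensor, ext: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, ext], dim=-1))

class AdaptationModule(nn.Module):
    """Estimates the extrinsics from the last k (state, action) pairs,
    standing in for sim parameters that aren't observable on the real
    robot (surface softness, payload, wear, ...)."""
    def __init__(self, obs_dim: int, act_dim: int, k: int, ext_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k * (obs_dim + act_dim), 128), nn.ReLU(),
            nn.Linear(128, ext_dim),
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, k * (obs_dim + act_dim)), flattened recent window
        return self.net(history)
```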

This is a pretty cool solution, but I'm not sure what the analogy is for self-driving. This is like saying they've developed a second model to make sure the car maintains the proper driving velocity and path when running into mud or ice.

Of course all these companies are going to pursue more simulation - that's all they can do! They can't get more data anyway...

You are confident that simulations will make the cars robust to whatever edge cases they haven't seen before. Maybe if you consider stopping and parking in the middle of the road a success, then possibly.

But realistically, just like Facebook AI says, the simulation-trained self-driving car will still encounter real-world issues it doesn't know how to properly handle.
 
#FSDBeta 9.0 - 2021.4.18.12 - Unprotected Left Turns Stuck in a Navigation Loop

I need some more eyes on this to help me understand what is going on. Check it out.

Thanks for sharing. It looks to me like FSD Beta was not confident enough to do the unprotected left turn across lanes of traffic, so it tried to take a detour by turning right instead. But perhaps the rerouting is not quite good enough, which is why it gets stuck in the loop?

Honestly, this should not be too surprising. It makes sense for autonomous vehicles to simply reroute to avoid situations that are not deemed safe enough. Like you said, even humans sometimes do this. Why take an unnecessary risk if you can take a safer route?
 
#FSDBeta 9.0 - 2021.4.18.12 - Unprotected Left Turns Stuck in a Navigation Loop

I need some more eyes on this to help me understand what is going on. Check it out.
Looks like Tesla is spending more time on perception tasks than planning tasks right now. Which makes sense, given the removal of radar and reliance on a camera-only solution. Definitely interesting to watch, though!
 
#FSDBeta 9.0 - 2021.4.18.12 - Unprotected Left Turns Stuck in a Navigation Loop

I need some more eyes on this to help me understand what is going on. Check it out.
Must be confusing for drivers behind you… you’re signaling left but the car keeps going right. A more aggressive driver might go to the right of you for a right turn since your car is signaling its intention to wait for a left turn.
 
Literally an "infinite loop". ;)

I wonder if they added some logic that weighs waiting for the left versus finding "another" route and got caught in a loop trying to minimize time. Perhaps the car should go for that traffic light even though it'd be a longer distance and time.

Just think of the case of looking for a parking spot downtown, or a robotaxi:
would it be acceptable if the car just waited there (even if no one was behind you), or would you want it to loop around hoping a parking spot opens up or (for a robotaxi) traffic lightens/changes?
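Purely as a thought experiment (nothing from Tesla's actual planner), here's the kind of cost model that could produce that behaviour: if unprotected lefts carry a big penalty, the "cheapest" route keeps detouring, and a cost model that underestimates the detour can send the car back to the same decision point. The penalty value and graph shape below are invented.

```python
# Plain Dijkstra over a road graph with a per-maneuver penalty (illustrative).
import heapq

UNPROTECTED_LEFT_PENALTY_S = 120.0  # invented seconds-equivalent penalty

def route_cost(graph, start, goal):
    # graph: {node: [(next_node, travel_time_s, is_unprotected_left), ...]}
    best = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue
        for nxt, travel_s, is_upl in graph.get(node, []):
            c = cost + travel_s + (UNPROTECTED_LEFT_PENALTY_S if is_upl else 0.0)
            if c < best.get(nxt, float("inf")):
                best[nxt] = c
                heapq.heappush(heap, (c, nxt))
    return float("inf")
```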

#FSDBeta 9.0 - 2021.4.18.12 - Unprotected Left Turns Stuck in a Navigation Loop

I need some more eyes on this to help me understand what is going on. Check it out.
 
Must be confusing for drivers behind you… you’re signaling left but the car keeps going right. A more aggressive driver might go to the right of you for a right turn since your car is signaling its intention to wait for a left turn.
It's confusing overall. Why is the signaling module doing something different from the steering module?
 
Some of these drives are approaching Waymo performance in areas more complicated than Chandler (IMO):

What... have you watched this? What a joke... A simple unprotected left turn, and it fails consistently and tries to cause a dozen accidents and head-on collisions.
You do realize that performance includes safety, right?
Here are 122 unprotected lefts from one rider (@JJRicks wondering when you are going to post the compilation)

versus v9

Mobileye is done for, has been since Tesla broke up with them. V9 is so, so impressive. Tesla Vision is a software/technical/engineering marvel. I'd even hazard to say that Waymo is done for as well (just watched a Waymo video to balance out my perspective). It's sad that everyone else is pursuing FSD the wrong way, but I totally understand how immensely difficult it'd be to change their approaches.

If this is the performance of your so-called leader in AVs that is supposedly 5-10 years ahead, then we will never solve AV.
 
However, I think a limitation is the number and protocol of sensor connectors in the HW3 computer. IDK, but it seems unlikely that the radar data port can be repurposed in situ for additional camera(s). Possibly a video pre-processor module could blend, say, the left & right windshield wide-view camera feeds with the new left & right corner camera feeds, so that the stream into the existing camera ports could carry them without a complete redo of the FSD hardware configuration.
It looks like their RADAR hardware uses CAN bus (1 Mbps, or *maybe* 10 Mbps). I was assuming it used something much faster, like automotive Ethernet. That design also means Tesla can't ever retrofit LIDAR or 4D RADAR over that connection. *sigh*

If those CAN buses are driven by a PLC that can be reprogrammed to treat the input as an LVDS camera feed, then it's trivial, but failing that, you'd have to do something a bit more clever for now. It is generally believed that the wide-field camera is useless. And when you're deciding whether it is safe to make a turn, the main rear camera is also basically useless. So:
  • Repurpose the CAN bus from the SDC to drive a small PLC that acts like a 4x2 routing switcher, in which each of the two outputs into the SDC gets video data from either the LVDS feed of one of the existing cameras above or one of the newly added cameras.
  • Repurpose the wiring from the RADAR unit to the SDC so that it becomes two LVDS feeds from the new cameras.
  • Add short LVDS cables from the routing switcher to the SDC.
  • Add a power source somewhere.
  • Mount the routing switcher somewhere.
And you're done except for software. Then, in the next SDC revision, add more LVDS ports and drop the switcher.
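For the software piece, here is a toy sketch of what the switcher's selection logic might look like (the feed names, maneuver labels, and policy are all invented for illustration; it only shows the kind of logic the repurposed control channel would have to carry):

```python
# Toy 4x2 routing-switcher selection sketch (illustrative only).
FEEDS = ("wide_forward", "main_rear", "new_left_corner", "new_right_corner")

def select_sdc_inputs(maneuver: str) -> tuple[str, str]:
    """Pick which two of the four feeds get routed into the SDC camera ports."""
    if maneuver in ("unprotected_left", "right_turn", "creep_for_visibility"):
        # Cross traffic matters most; swap in the corner cameras for the
        # low-value wide/rear feeds.
        return ("new_left_corner", "new_right_corner")
    # Default: pass the existing cameras straight through.
    return ("wide_forward", "main_rear")
```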
 
What... have you watched this? What a joke... A simple unprotected left turn, and it fails consistently and tries to cause a dozen accidents and head-on collisions.

versus v9
I feel it's very important that we get a proper Tesla response to this.

Have some backbone, take responsibility and issue a real analysis of what's going on here.

If they KNOW that FSD Beta cannot handle this situation, they should tell us. These testers are risking their lives and oncoming drivers' lives to attempt to test the system. Why? Because they have been given the privilege to test it but given no clear guidelines on how (except that it may do the wrong thing; how callous).

FSD Beta appears to be attempting to drive right out into oncoming traffic time and again!

C'mon Tesla, respond to this!
 
What... have you watched this? What a joke... A simple unprotected left turn, and it fails consistently and tries to cause a dozen accidents and head-on collisions.
You do realize that performance includes safety, right?
Here are 122 unprotected lefts from one rider (@JJRicks wondering when you are going to post the compilation)

versus v9



If this is the performance of your so-called leader in AVs that is supposedly 5-10 years ahead, then we will never solve AV.

Clearly not ready for primetime. I notice the visualizations don't start showing vehicles until they are within a certain distance. But for fast-moving vehicles, this doesn't inspire confidence that the car is detecting them farther out. And maybe it isn't. Tesla needs to show vehicles not just by distance but also by expected arrival time.
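Conceptually, something like this is all it would take to gate the visualization on arrival time rather than raw distance (the thresholds are invented for illustration):

```python
# Sketch: surface a detected vehicle if it is close OR arriving soon.
def should_show_vehicle(distance_m: float, closing_speed_mps: float) -> bool:
    MAX_DISTANCE_M = 80.0  # always show anything this close
    MAX_TTA_S = 6.0        # also show fast movers that will arrive soon
    if distance_m <= MAX_DISTANCE_M:
        return True
    if closing_speed_mps > 0.0:
        return (distance_m / closing_speed_mps) <= MAX_TTA_S
    return False
```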

One other note: this guy also likes to re-engage FSD right when he is about to make a turn. Is the FSD algorithm running when not engaged? Otherwise, I would think it's unsafe to assume FSD can make the proper decision instantaneously, with probably no memory of the last 6 seconds or whatever window they are using.
 
In the future, and perhaps even now on some occasions, people will be disengaging FSD thinking that it's not seeing something when it's possible that FSD is seeing it. For example, the wide-angle forward camera is placed much further forward than the human driver's perspective.