FSD Beta Videos (and questions for FSD Beta drivers)

So anyway, joking aside, I know an intersection in Pinole, in the east SF Bay Area, that is almost certainly going to cause the beta to fail miserably. Anyone wanna go try it and get video? It's the sort of thing you don't see *super* commonly, but it will definitely need to be addressed if they ever want to do a real release.

I'd be shocked if they could give multiple seconds' notice at this intersection. Honestly, just the entertainment value of seeing how badly it would screw it up would be fun.
New thread perhaps. Street names or GPS coordinates? Google maps link?
 
Yes, it is a big change to how the software works "under the hood".
VS
it's more refinements of the existing behavior like making behavior smoother and more confident.

Probably worth at least rereading what you wrote before posting.


To be clear, on top of the refinements, they've changed the underlying stack!

That's not a "minor" thing.
 
Elon's comment on observations made by Earl
Leading the autonomy software team for the Tesla Autopilot.

My team's main focus areas are:
- Creation of large scale automatic ground truth pipelines to train neural networks with massive amounts of diverse, high-quality data. Use this fleet-learning approach to replace potentially brittle run-time algorithms with robust learned models.
- Developing an accurate and detailed geometric and semantic understanding of the world using the best of both machine-learned and engineered models.
- Building robust, causal, predictive models for other agents in both geometry and semantic state spaces.
- Decision making, motion planning, and control modules using state-of-the-art AI techniques including methods for high-dimensional search, trajectory optimization, reinforcement learning, model-predictive control, etc.
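
As a purely illustrative aside (not code from Tesla or from the bio above): here is a minimal sketch of what a model-predictive-control loop can look like, in Python, assuming a toy unicycle model, random-shooting optimization, and made-up cost weights and horizon.

```python
# Minimal model-predictive-control sketch (illustrative only, not Tesla code).
# A kinematic unicycle model is rolled out over a short horizon; the
# lowest-cost control sequence is picked by random shooting.
import numpy as np

DT, HORIZON, N_SAMPLES = 0.1, 10, 256   # assumed timestep, steps, candidates

def step(state, control):
    """Advance (x, y, heading, speed) one tick under (accel, yaw_rate)."""
    x, y, th, v = state
    a, w = control
    return np.array([x + v * np.cos(th) * DT,
                     y + v * np.sin(th) * DT,
                     th + w * DT,
                     v + a * DT])

def cost(traj, goal):
    """Penalize distance to the goal plus a mild terminal-speed penalty."""
    pos_err = np.linalg.norm(traj[:, :2] - goal, axis=1)
    return pos_err.sum() + 0.1 * np.abs(traj[-1, 3])

def mpc_action(state, goal, rng):
    """Sample control sequences, roll each out, return the best first action."""
    best_cost, best_u0 = np.inf, np.zeros(2)
    for _ in range(N_SAMPLES):
        u_seq = rng.uniform([-2.0, -0.5], [2.0, 0.5], size=(HORIZON, 2))
        s, traj = state, []
        for u in u_seq:
            s = step(s, u)
            traj.append(s)
        c = cost(np.array(traj), goal)
        if c < best_cost:
            best_cost, best_u0 = c, u_seq[0]
    return best_u0  # execute only the first control, then re-plan next tick

rng = np.random.default_rng(0)
state = np.array([0.0, 0.0, 0.0, 5.0])      # x, y, heading, speed
print(mpc_action(state, goal=np.array([20.0, 5.0]), rng=rng))
```

The defining trait of MPC is that only the first control is executed and the whole optimization is re-run at the next tick; real planners would use far better models and solvers than this.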



As both a manager and a technical contributor, I've brought up Tesla's successive Autopilot HW2 and HW3 generations of computers and software stacks, integrated them across all Tesla vehicle types, and pushed them all the way to mass production. I've been regularly shipping software updates to hundreds of thousands of vehicles across the world, including compute optimizations, new features, and system stability fixes.

I've been scaling Tesla Autopilot's software- and hardware-in-the-loop continuous integration infrastructure, and I've developed productivity tools to accelerate our R&D team's development cycles toward full autonomy.

I currently report directly to Tesla's CEO.

In particular, I currently lead:
- Overall System Software & middleware (C/C++ middleware, IPC, process scheduling, Logging, Watchdog, ...)
- Computer Vision system software (GPU kernels for post-processing, Neural Network integration, C++ Compute Graph Framework for efficient compute scheduling across multiple devices)
- Camera software stack (across all Tesla vehicle types)
- Platform Software (Linux kernel/drivers, security, power, board bring-up)
- Continuous Integration infrastructure (automated & on-demand regression tests, performance tests, and simulation tests, scheduled on either x86 emulation or true hardware-in-the-loop setups)
- Build System (including remote-caching)
- Performance & optimization (responsible for the Autopilot framerate across all platforms)
- Telemetry (on-vehicle data capture software, and back-end ingestion services)
- Machine Learning infrastructure (training stabilization & scalability, workflow automation)
- Tools (sensor clips visualization, data plotting for logs analysis or live debugging)
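
The "Compute Graph Framework" item in that list is only named, not explained. As a rough, hypothetical illustration of the general idea (running dependent compute nodes in topological order), here is a toy Python sketch; the node names and pipeline are made up and are not Tesla's actual C++ framework.

```python
# Toy compute-graph scheduler (illustrative only): nodes declare their inputs,
# and the graph runs them in dependency (topological) order.
from graphlib import TopologicalSorter

class Graph:
    def __init__(self):
        self.fns, self.deps = {}, {}

    def node(self, name, *inputs):
        """Register a function as a named node consuming other nodes' outputs."""
        def register(fn):
            self.fns[name], self.deps[name] = fn, inputs
            return fn
        return register

    def run(self, **feeds):
        results = dict(feeds)
        for name in TopologicalSorter(self.deps).static_order():
            if name not in results:                      # skip fed-in values
                args = [results[d] for d in self.deps[name]]
                results[name] = self.fns[name](*args)
        return results

# Hypothetical pipeline: decode a camera frame, run a detector, fuse detections.
g = Graph()

@g.node("decoded", "raw_frame")
def decode(raw_frame):
    return [px / 255.0 for px in raw_frame]

@g.node("detections", "decoded")
def detect(decoded):
    return [i for i, px in enumerate(decoded) if px > 0.5]

@g.node("fused", "detections")
def fuse(detections):
    return {"count": len(detections)}

print(g.run(raw_frame=[10, 200, 30, 240])["fused"])   # {'count': 2}
```

The point of structuring compute this way is that the framework, not the node code, decides ordering (and, in a real system, which device each node runs on).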



Both guys seem pretty busy. I guess developing the FSD software stack is a pretty massive task…

So basically:
Andrej: Train neural networks
Ashok: Generate dataset for neural networks
Milan: Software 1.0 surrounding software 2.0 to deploy on HW3 and DOJO
 
Leading the autonomy software team for the Tesla Autopilot. […]
Yeah, that's them!
 

Cliffs: 30-40% better, not perfect
Looks like little or no change to the reliance on map data, so I would expect the same kinds of failures that others have noticed, especially in downtown areas. Notably, it ends up in a left-turn-only lane at 10:30, where OSM believes there are 3 lanes with no turn-lane attributes, and it changes lanes in the middle of an intersection at 12:25 because the map data says 4 lanes without indicating which direction has how many, so Autopilot probably assumed 1 and cut off / "merged" in front of the other car.
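
For anyone unfamiliar with what those OSM attributes look like: a minimal sketch, assuming a way tagged only with lanes=3 and no turn:lanes (the tag values here are hypothetical, not the actual OSM data for that intersection), showing why a consumer that defaults every lane to "through" would be surprised by a real-world left-turn-only lane.

```python
# Illustrative only: how OSM lane tags might (or might not) describe turn lanes.
# Tag values are hypothetical, not the actual data for the intersection above.
way_tags = {"highway": "primary", "lanes": "3"}      # no turn:lanes attribute

def lane_directions(tags):
    """Expand turn:lanes into per-lane directions, defaulting to 'through'."""
    n = int(tags.get("lanes", 1))
    turn = tags.get("turn:lanes")                    # e.g. "left|through|through"
    if turn is None:
        return ["through"] * n                       # the risky assumption
    return [d or "through" for d in turn.split("|")]

print(lane_directions(way_tags))
# ['through', 'through', 'through'] -- but the leftmost lane is really left-only,
# so a planner trusting this map data would happily end up in the turn lane.
```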

[Attached image: seattle 1st stewart.jpg]
 
Looks like it makes driving much more relaxing!
This is why it's in beta with a limited user base!

I know you don't like this and don't agree with their approach, but is it really that hard to actually remove the snark and focus on the functionality?

What's the path forward?
Does this mean the whole thing should be scrapped?
Can they solve this within their system?
 

Cliffs: 30-40% better, not perfect
On the other hand, he said that the intervention frequency was about the same as before.

Starting at 16:26, FSD doesn't see the monorail pillars and is about to collide with them.

At 18:00, FSD doesn't see the planters and is about to collide with them.

I don’t see any huge improvement, just incremental.
 
This is why it's in beta with a limited user base!

I know you don't like this and don't agree with their approach, but is it really that hard to actually remove the snark and focus on the functionality?

What's the path forward?
Does this mean the whole thing should be scrapped?
Can they solve this within their system?
Those are good questions. I guess I don't understand what the goal is. I assume they'll be able to do way better than they're doing now for that maneuver. I'm not sure I understand what the point of real-world testing is when I'm sure this also fails in simulation. I think they should scrap the idea of doing unprotected lefts like that until the safety is greater than a human's. Monitoring the system while it's doing that maneuver looks very difficult. Obviously fearing for your life helps keep focus, but what happens when it gets 100x better? Will people still have the fear of death, or will they lose focus for a split second with disastrous consequences?