My 2023 Model S LR with Tesla Vision thinks the weather is poor on cloudy or sunny days, and can't see well enough to allow FSD to keep working when it is raining.
Tesla has implemented FSD Beta as a driver assistance feature, so the current behavior of showing a warning even in fair weather prepares the driver for the increased likelihood of FSD Beta suddenly cancelling or generally acting less predictably. Tesla will indeed need to do something better for more automation, but just because they haven't yet doesn't mean some future software won't be able to on the current hardware.

End-to-end has the potential to drive better than 11.x, since controls don't need to strictly follow uncertain, low-confidence perception in heavy rain. Oftentimes it may be okay to drive based on what was previously seen, so end-to-end could learn to rely on its memory of previous frames, somewhat like object persistence, when vision is occluded. The obvious danger then is if expectations from memory don't match real-world behavior, such as the lead vehicle suddenly braking, but potentially even a blur of bright red, without a clearly identifiable vehicle, could still result in appropriate braking.
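
As a rough illustration of that kind of memory-based persistence (purely a sketch; the class names, thresholds, and decay constants below are invented and have nothing to do with Tesla's actual networks), a planner could dead-reckon the last confident lead-vehicle state forward and bleed off speed as the memory goes stale:

```python
import math
from dataclasses import dataclass

@dataclass
class TrackedLead:
    """Last confidently perceived state of the lead vehicle (hypothetical fields)."""
    gap_m: float          # longitudinal gap to the lead vehicle
    closing_mps: float    # closing speed (ego speed minus lead speed)
    confidence: float     # perception confidence at the last good frame
    age_s: float = 0.0    # time since that frame

def predict_gap(track: TrackedLead, dt: float) -> TrackedLead:
    """Dead-reckon the gap forward one control step while vision is occluded."""
    return TrackedLead(
        gap_m=track.gap_m - track.closing_mps * dt,
        closing_mps=track.closing_mps,
        confidence=track.confidence * math.exp(-dt / 0.5),  # let the memory decay quickly
        age_s=track.age_s + dt,
    )

def brake_command(track: TrackedLead, ego_speed_mps: float) -> float:
    """Deceleration request (m/s^2): brake hard if the remembered gap implies a short
    time to collision, and bleed off speed gently once the memory is too stale to trust."""
    if track.age_s > 1.0 or track.confidence < 0.2:
        return 2.0  # stale memory: slow down rather than assume the road ahead is clear
    time_to_collision = track.gap_m / max(track.closing_mps, 0.1)
    if time_to_collision < 2.0:
        return min(8.0, 2.0 * ego_speed_mps / time_to_collision)
    return 0.0

# Example: lead vehicle last seen 30 m ahead, closing at 5 m/s, then spray blinds the camera.
track = TrackedLead(gap_m=30.0, closing_mps=5.0, confidence=0.9)
for _ in range(20):                  # 2 s of occlusion at 10 Hz
    track = predict_gap(track, dt=0.1)
print(round(track.gap_m, 1), round(brake_command(track, ego_speed_mps=25.0), 1))
```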
 
If all of the other cameras were simultaneously blinded, it could clean the forward-facing cameras in order to find a safe location to achieve a minimal risk condition.
How would this work? MRC is for L4/L5. Can you explain how this would work as you proposed? Does the system just YOLO the lane changes? I just don’t understand how this would work as proposed.
I’m thinking L3 is earliest at 2026 based on what I wrote before.
With current hardware I don’t see any L3 happening. I’m surprised to hear you say that they could surprise you. With future hardware, anything is possible, but that was not the reference here.

We should be realistic about what we have with v11. It is likely the best, most capable L2 system available (regardless of what Consumer Reports (lol) may say), and it has very serious flaws (all quite easily dealt with by a competent user). City Streets is amazing, though quite limited in utility, with frequent safety interventions required. I haven't seen any L2 system that seems comparable. But L3 onwards is another level of capability entirely. No one is close (in consumer applications of course, with exceptions for limited L3). v12 has not been released, so it is currently inferior to v11. No one knows how big an incremental improvement it will bring, or whether it will ever reach v11 capability.
 
How would this work? MRC is for L4/L5. Can you explain how this would work as you proposed? Does the system just YOLO the lane changes? I just don’t understand how this would work as proposed.
MRC does not require a lane change. An L4 vehicle is allowed to just come to a gradual stop in its lane (in fact, that is frequently what Cruise/Waymo vehicles do, to the public's complaint). That's obviously even more acceptable if it knows the sensors to the sides are blinded.

I think an argument can be made that, to qualify as L4, a vehicle should at least attempt to pull to the side when conditions allow, but there is no such requirement in any of the definitions.
 
detect emergency vehicles (flashing lights trigger a UI warning message and a slow-down).
I just had a test of this on a long night trip when using AP. First I experienced a consistent false positive when there was a car ahead of me that had an unusual taillight configuration (seems to be aftermarket) where it had super bright white reverse lights that were permanently on while it was going forward. The car kept detecting it as an emergency vehicle and automatically slowing down.

Then there was a successful example, where it detected the emergency lights of a cruiser on the side of the road well before I got anywhere close (and I didn't even notice it yet) and slowed down automatically.
 
MRC does not require a lane change. An L4 vehicle is allowed to just come to a gradual stop in its lane (in fact, that is frequently what Cruise/Waymo vehicles do, to the public's complaint). That's obviously even more acceptable if it knows the sensors to the sides are blinded.
Yeah I know that is not a requirement per the definition.

I would argue (as you do) that this would not be achieving a minimal risk condition. Also, you might have to change lanes before you can come to a stop!

Anyway, hopefully @willow_hiller can explain to me how this would work!

The “stop in the middle of the road because the car cannot see” approach (while all the humans in the area can see, to some extent, all around their cars!) I think we can set aside for now.

I guess to me the only solution is to just keep driving in the current lane, which seems incredibly unsafe if you can’t see behind you (it’s routine to react to events behind you to minimize risk). And of course, it may not be possible to do so (merges, etc.).

I just can’t think of any solution. Hopefully there is some solution.
 
Do you believe Tesla is actually trying to reach robotaxi capabilities? Outside of Autopilot, Tesla has probably already spent a significant amount on vehicle design and engineering for a potential robotaxi future, so it would seem reasonable to expect Tesla to focus its efforts on what they believe will get to robotaxi sooner.
My opinion is that there is no way Tesla is getting to Robotaxi. There is zero chance of it on HW3/HW4. Of course they could get to Robotaxi on some other future hardware. I think there is a close-to-zero chance of them getting to ANY form of meaningful autonomy on current hardware. Removing the driver is hard.
Even if you disagree with their timelines, understanding their goals helps provide context for why Tesla may not waste resources on potential distractions such as conditional driving automation. Training end-to-end to specially handle behaviors not needed for robotaxi could make things worse for the long-term goal, but then again Tesla can change direction when they need to, as with this v12 rewrite.
I think their goal in the near term is to make as much money as possible selling a capable driver assistance system with dreams of autonomy.
 
Anyway, hopefully @willow_hiller can explain to me how this would work!

It would depend on the extent of the cameras blinded. The backup camera could watch for rear traffic in the event the repeaters are blinded, or vice versa. The wide-angle camera gives some peripheral vision of vehicles to the side. And worst case scenario, in the event of all cameras being blinded, it could use a stored internal memory of the last visible scene, plus physics projections of the trajectories of the last known surrounding vehicles, to make it to a breakdown lane.

It's not perfect, but would still probably perform better than a human driver if they suddenly found themselves blind while driving.
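
A minimal sketch of that worst-case idea, assuming a made-up ego-centric coordinate convention and invented margins (not how any production system is known to do it): project the last remembered neighbor states forward at constant velocity and check whether a slow pullover into the target lane stays clear for the whole maneuver.

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    """Last known state of a nearby vehicle, in ego-centric road coordinates (hypothetical)."""
    s_m: float        # longitudinal position relative to ego (positive = ahead)
    lane: int         # 0 = ego lane, +1 = one lane to the right, ...
    speed_mps: float  # speed measured at the last visible frame

def project(neighbors, ego_speed_mps, t_s):
    """Constant-velocity projection of every remembered vehicle t_s seconds ahead,
    assuming ego holds its current speed."""
    return [Neighbor(n.s_m + (n.speed_mps - ego_speed_mps) * t_s, n.lane, n.speed_mps)
            for n in neighbors]

def shoulder_pullover_is_clear(neighbors, ego_speed_mps,
                               maneuver_s=6.0, margin_m=15.0, shoulder_lane=1):
    """Check whether the remembered traffic leaves the target lane clear for the whole
    maneuver, sampled every half second."""
    steps = int(maneuver_s / 0.5)
    for i in range(steps + 1):
        t = i * 0.5
        for n in project(neighbors, ego_speed_mps, t):
            if n.lane == shoulder_lane and abs(n.s_m) < margin_m:
                return False
    return True

# Example: one car ahead in the ego lane, one behind in the right lane, then cameras are lost.
memory = [Neighbor(s_m=40.0, lane=0, speed_mps=28.0),
          Neighbor(s_m=-30.0, lane=1, speed_mps=30.0)]
print(shoulder_pullover_is_clear(memory, ego_speed_mps=25.0))  # False: the overtaking car blocks it
```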
 
I just had a test of this on a long night trip when using AP. First I experienced a consistent false positive when there was a car ahead of me that had an unusual taillight configuration (seems to be aftermarket) where it had super bright white reverse lights that were permanently on while it was going forward. The car kept detecting it as an emergency vehicle and automatically slowing down.
Yeah... that lighting configuration is not legal in the US...
Specifically because it can be confused with emergency vehicles and/or oncoming traffic.
 
I think their goal in the near term is to make as much money as possible selling a capable driver assistance system with dreams of autonomy
If that were the case, Tesla seems to have misallocated resources for nearly a whole year by directing the Autopilot team to focus on end-to-end instead of polishing 11.x to make more money. Even if you believe HW3/HW4 will be incapable of robotaxi, Tesla can leverage the existing fleet to learn what works with v12 and what requires future hardware. Perhaps if Tesla concludes the hardware is insufficient, as you already have, Tesla could pivot to make money from what is actually capable, but even then Tesla would probably still focus on the bigger goal of robotaxi.

Early 12.x will have limitations, but there will probably be many issues that get resolved with "just" neural network training, architectural adjustments, software workarounds, etc. before needing to consider a different approach.
 
If that were the case, Tesla seems to have misallocated resources for nearly a whole year by directing the Autopilot team to focus on end-to-end instead of polishing 11.x to make more money. Even if you believe HW3/HW4 will be incapable of robotaxi, Tesla can leverage the existing fleet to learn what works with v12 and what requires future hardware. Perhaps if Tesla concludes the hardware is insufficient, as you already have, Tesla could pivot to make money from what is actually capable, but even then Tesla would probably still focus on the bigger goal of robotaxi.

Early 12.x will have limitations, but there will probably be many issues that get resolved with "just" neural network training, architectural adjustments, software workarounds, etc. before needing to consider a different approach.
I'm pretty sure they all understand that autonomy is 5-10 years away at least with their hardware configuration. They are maximising profits by having budget hardware and pushing it as far as it can go. Progress is important if they're going to charge 2x the value for the dreamers.
 
And worst case scenario, in the event of all cameras being blinded, it could use a stored internal memory of the last visible scene, plus physics projections of the trajectories of the last known surrounding vehicles, to make it to a breakdown lane.
You proposed this scenario. These physics projections seem like a very unlikely solution given how things evolve and change in a matter of seconds, and seem unlikely to work in some cases.

I feel like the solution would be to turn on hazards and make very slow lane changes and maneuvers so that any competent driver can avoid the vehicle.

There are definitely still risks, and extremely common corner-case situations that would rule out slow movements. But I guess it would have to work.

It seems very unlikely to be sufficient for L4/L5 to exceed human safety, though. I'm surprised that on the Cybertruck they did not at least plan to clean the rear camera, which is the camera most likely to need it, even for L2.
 
These physics projections seem like a very unlikely solution given how things evolve and change in a matter of seconds, and seem unlikely to work in some cases.

At least as of the 2021 AI Day presentation, Ashok explained that they use physics-based models to simulate the future trajectories of other cars to plan lane changes.

It's not a single physics sim, but thousands in a couple of milliseconds, so the output is likely a probability distribution of the future locations of other cars. Again, not perfect, but I still think it's possible to quickly and safely pull over using the memory of the last visible frames.
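
As a toy illustration of what "thousands of rollouts yielding a distribution" might look like (the kinematic model, noise level, and horizon here are invented, and the real planner is surely far more elaborate), one could sample acceleration hypotheses for a remembered vehicle and summarize the spread of its future position:

```python
import random

def rollout_positions(s0_m, v0_mps, horizon_s=3.0, dt=0.25, n_samples=1000,
                      accel_sigma=1.0):
    """Sample many constant-acceleration rollouts for one vehicle and return the
    5th/95th percentile of its final longitudinal position."""
    finals = []
    for _ in range(n_samples):
        a = random.gauss(0.0, accel_sigma)   # per-rollout acceleration hypothesis
        s, v, t = s0_m, v0_mps, 0.0
        while t < horizon_s:
            v = max(0.0, v + a * dt)         # vehicles don't reverse in this toy model
            s += v * dt
            t += dt
        finals.append(s)
    finals.sort()
    return finals[int(0.05 * n_samples)], finals[int(0.95 * n_samples)]

# Example: a car 20 m behind travelling at 30 m/s; where might it be in 3 s?
lo, hi = rollout_positions(s0_m=-20.0, v0_mps=30.0)
print(f"90% of rollouts end between {lo:.1f} m and {hi:.1f} m ahead of ego")
```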
 
It's not a single physics sim, but thousands in a couple of milliseconds, so the output is likely a probability distribution of the future locations of other cars. Again, not perfect, but I still think it's possible to quickly and safely pull over using the memory of the last visible frames.
Certainly. It is pretty clear they do this now - you can tell when requesting lane changes. Anyway, it is a question of how quickly things can be done (it might not be possible immediately) before information goes stale. But to me it just seems hazardous to require a quick lane change to maximize certainty, immediately after camera visibility is lost.

I'm just still not convinced that your original hypothetical claim (that the car can reliably perform a safe pullover maneuver with only the front cameras) is correct.

I think sometimes it could be successful and it would know with high probability it would be (no-traffic situations). But I doubt that it would be safe in many situations. And there is no driver to fall back on of course.

Anyway, maybe v12 will just intuit it. All input is error.
 
I still don't know why people think V12 uses any of the V11 perceptual assets.

If V12 were simply an NN planner on top of V11 assets, Tesla would have built V12 gradually, in a software 2.0 style, where the NN would slowly replace more and more heuristic code.

There would have been no need to go all nets all at once.
 