Welcome to Tesla Motors Club

Autonomous Car Progress

I have no doubt that if Tesla had spent 10 years working on just Chandler and SF, they could be running robotaxis now. Heck, in every head-to-head comparison they have come out ahead in both SF and Chandler with no disengagements.
For a single drive with a safety driver, yes, sure. You still don't seem to understand the difference between doing it once and doing 1,000 drives in a row in different weather conditions.
 
You don't seem to understand how to use search. Stop BSing and search what I've written before.
It doesn't seem likely that I would search for anything you've written voluntarily given your general tone.

I don't get why you keep arguing with me when we both are mostly in agreement on the general timeline and viability of Tesla's FSD becoming L4 in a wide ODD. Enlighten me?
 
Every single day I drive 11.4.2, my conviction is stronger. It's incredible. I'd say it's 5x better than 11.3.6, however you want to interpret that.

I never thought HW3 would be able to predict parking lots like 4.2 does. It's like magic.

Tesla is getting close to generating HD maps on a first pass.
 
Yes (and you said next year), and as I've said, given the current state of computer vision, it likely cannot in a 2023 Waymo equivalent ODD. If it will happen in 24 months? I give that a 1% chance. Will it happen before 2030? I'm thinking a 50% chance.

At this point, whatever you say doesn't have much weight because you don't point out specifics as to why CV is deficient vs LIDAR.

I pointed out that you can have a remote intervention every 5-10 miles and still be L4. The bar for L4 isn't as high as you thought. You said something about a 2000-4000x improvement in reliability for FSD Beta; it's nonsense.
 
Heck, in every head-to-head comparison they have come out ahead in both SF and Chandler with no disengagements.

That's misleading: the comparisons were cherry-picked to make FSD Beta look better. Someone could easily have picked a different route at a different time of day where FSD Beta would have had a disengagement. A single cherry-picked route is not a valid comparison, and you know it; a proper comparison requires looking at safety-intervention rates over millions of miles. By that measure, Tesla FSD Beta would be shown to be far less reliable than Waymo and Cruise.

The fact that Waymo and Cruise are doing tens of thousands of completely driverless rides every week (over 2M driverless miles so far, with no severe accidents) while Tesla FSD Beta still requires safety interventions and constant supervision supports my point. But if you really believe FSD Beta could run better robotaxis than Waymo or Cruise if Tesla geofenced, then why doesn't Tesla do it and prove they have reliable driverless operation? It should be super easy if FSD Beta really is ahead of Waymo and Cruise as you claim.
 
At this point, whatever you say doesn't have much weight because you don't point out specifics as to why CV is deficient vs LIDAR.

I pointed out that you can have a remote intervention every 5-10 miles and still be L4. The bar for L4 isn't as high as you thought. You said something about a 2000-4000x improvement in reliability for FSD Beta; it's nonsense.
OK let's summarise why HW3/4 isn't good enough.

1. Guessing range from a 2D image with only semantic cues is not safe enough, especially when there are few reference objects and at night. Tesla's been running into things (first responders, motorcyclists) at night. It is safer to physically measure the distance, so why wouldn't you?
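To make the "semantic cues" point concrete: with a single camera, range is typically inferred from an object's apparent size using the pinhole relation d = f * H / h, where the real-world height H has to be assumed from what the object is recognized to be. This is a hypothetical sketch with made-up numbers, not Tesla's actual pipeline:

```python
# Hypothetical sketch of monocular range estimation via the pinhole model.
# Focal length and assumed object heights below are illustrative only.

def estimate_range(focal_px: float, assumed_height_m: float, pixel_height: float) -> float:
    """Estimate distance to an object from its apparent size in the image.

    Uses the pinhole relation d = f * H / h. The real-world height H
    is *assumed* from classifying the object (a semantic cue), not measured.
    """
    return focal_px * assumed_height_m / pixel_height

# A car assumed to be 1.5 m tall, spanning 50 px with a 1000 px focal length:
d = estimate_range(1000.0, 1.5, 50.0)        # -> 30.0 m

# If the classifier is wrong and the object is really 0.5 m tall,
# the same 50 px imply 10 m -- a 3x range error from one bad assumption.
d_wrong = estimate_range(1000.0, 0.5, 50.0)  # -> 10.0 m
```

A lidar or radar return gives the distance directly, with no dependence on recognizing the object first, which is the crux of the argument above.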

2. If you travel at 60 mph and there is smoke or fog on the road, cameras cannot see through it. If the cameras are blinded by oncoming traffic or the sun, they cannot see either. There are too many failure modes for cameras alone to be safe enough to trust with your life.

3. Even if Tesla added more cameras (in the A-pillars and the front, for example), why wouldn't you want more sensor modalities to make a 10x safer product at almost zero cost? Tesla won't even bother to add the HW4 radar to the 3/Y. Tesla picks lowering cost over improving FSD safety every time.
 
ODD, OEDR, DMS, CV, ML, AI
ODD: Operational Design Domain
The domain in which the vehicle is designed and verified to operate. One example of an ODD would be "highway only, dry roads, no low sun, no precipitation, up to 60 mph, no tunnels".
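An ODD check is essentially a conjunction of conditions that must all hold before the system may engage. Here is a minimal, hypothetical sketch matching the example ODD above; the field names and the low-sun threshold are invented for illustration:

```python
# Toy ODD gate for the example ODD: "highway only, dry roads, no low sun,
# no precipitation, up to 60 mph, no tunnels". All values are illustrative.

from dataclasses import dataclass

@dataclass
class Conditions:
    road_type: str          # e.g. "highway", "urban"
    road_wet: bool
    sun_elevation_deg: float
    precipitation: bool
    speed_mph: float
    in_tunnel: bool

def within_odd(c: Conditions) -> bool:
    """Return True only when every ODD condition is satisfied."""
    return (
        c.road_type == "highway"
        and not c.road_wet
        and c.sun_elevation_deg > 15.0   # "no low sun"; threshold assumed
        and not c.precipitation
        and c.speed_mph <= 60.0
        and not c.in_tunnel
    )

ok = within_odd(Conditions("highway", False, 45.0, False, 55.0, False))   # True
bad = within_odd(Conditions("highway", True, 45.0, False, 55.0, False))   # False: wet road
```

The point of a narrow ODD is exactly this: the system refuses to operate whenever any single condition falls outside what it was verified for.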

OEDR: Object and Event Detection and Response
In a driver assist system (SAE J3016 Level 2) the system performs only partial OEDR, which means that the human needs to "be in the loop" and keep eyes on the road at all times. In an autonomous system (SAE J3016 Level 3 and above) the car performs the full OEDR and the person in the driver's seat is not driving (and can hence do other things).

DMS: Driver monitoring system
A good DMS has an IR illuminator plus a camera in the dash pointed at the driver's face, so it can function at night and with sunglasses or caps. It detects drowsiness and irresponsible behavior.

CV, ML, AI
Computer vision, machine learning, and artificial intelligence. CV is an application of ML to process images. AI is largely a marketing term for ML.
 
Can someone put together a list of what these acronyms mean so the posts are readable?

ODD, OEDR, DMS, CV, ML, AI

ODD = Operational Design Domain.
It is the when and where an automated driving system can be engaged. It can include road types, geofences, time of day, weather, speed, traffic conditions, etc.

OEDR = Object and Event Detection and Response
It refers to the ability to detect objects or events on the road and respond appropriately. For example, stopping at a red light, going around a double-parked car, changing lanes due to construction, avoiding a deer on the road, etc.

DMS = Driver Monitoring System
This is a system that monitors the driver's attentiveness to make sure they are paying attention to the road. It usually involves a driver facing camera mounted on the dashboard. The system will usually give visual and/or auditory alerts when it detects that the driver is not attentive.

CV = Computer Vision
This refers to the software that processes data from the cameras on a self-driving car, allowing the self-driving system to understand the world around it.

ML = Machine Learning
This is a technique that uses data to train a computer to perform a desired task.

AI = Artificial Intelligence
This is a very broad term that refers to a computer's "intelligence", where the computer is able to perform complex tasks on its own, without direct human input.
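To make the ML definition above concrete, here is a toy, invented example of "using data to train a computer": instead of hand-coding a rule, the program searches the data for a brightness threshold that separates "night" from "day" samples. The data and task are made up for illustration:

```python
# Toy machine learning: learn a brightness threshold from labeled samples
# rather than hard-coding one. Data below are invented for illustration.

def fit_threshold(values, labels):
    """Pick the threshold that best separates label 0 from label 1."""
    best_t, best_acc = None, -1.0
    for t in sorted(values):                       # candidate thresholds
        preds = [1 if v >= t else 0 for v in values]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

brightness = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]   # made-up sensor readings
is_day     = [0,   0,   0,   1,   1,   1]     # 0 = night, 1 = day
t, acc = fit_threshold(brightness, is_day)    # t = 0.7, acc = 1.0
```

Real ML systems (like the neural networks in FSD) learn millions of parameters instead of one threshold, but the principle is the same: the behavior comes from the data, not from hand-written rules.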
 
Do you have proof of that ?

Yes. The fact that the comparison videos only show one drive and don't show the drives where FSD Beta required a safety disengagement. And if you pay attention to the route, you will notice that the conditions are always ideal for FSD Beta: short trips, little traffic, a clear day, and few if any unprotected turns. Also, FSD Beta is not designed to handle pick-ups and drop-offs, but Waymo is. So the videos cut out the pick-up and drop-off parts that Waymo can do and FSD Beta can't; they are not actually comparing the full drive, only the "middle" part that FSD Beta can do.

The real comparison would be to let FSD Beta and Waymo run without intervention in the same ODD and see how long each goes before a collision.
 
I think it's important when making comparisons to point out that these two are essentially operating in a petri dish in the lab (their ODD). Tesla is an animal in the wild.

Really any comparison is dumb because Tesla is L2 and Waymo is L4. FSD Beta requires a human driver to perform some driving tasks. Waymo does not need a human driver to perform any tasks. So yes, one is geofenced while the other isn't, but they are very different driving systems.
 
I think it's important when making comparisons to point out that these two are essentially operating in a petri dish in the lab (their ODD). Tesla is an animal in the wild.
Cruise's and Waymo's solutions generalize better than you think. They can add a new city extremely fast, so that's not a scaling bottleneck anymore; other things, like permits and building up operations and service locations, are. Tesla isn't even playing the same game. Robotaxi is a completely different use case than a driver assistance system.