Why do you think that? Living in the Bay Area, I see more Waymo and Cruise vehicles than any other AV in the downtown areas. Waymo has the deepest pockets, while Cruise leans on lidar. Everyone and their mom has a Tesla in the bay, but I can't imagine Tesla will be first, given the noisy data you get with human drivers behind the wheel.
Tesla is in a fantastic position here: they're running Autopilot on all cars with the 3.0 (and possibly 2.5/2.0) hardware in "shadow" mode. This means the car is mumbling to itself "I would have hit the brakes," "I would have sped up," or "I would have changed lanes" the entire time a human is driving. Any time the car strongly disagrees, or any time some noteworthy event happens (the human slams on the brakes or saws away at the steering wheel above some preset speed), the computer uploads all the telemetry to the mothership. It likewise uploads telemetry for anything that terrifies or confuses it.
Tesla can then review all these weird exceptions and add them to the training set for the next iteration of Autopilot.
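To make the idea concrete, here's a minimal sketch of what a shadow-mode upload trigger could look like. Everything here (the `Frame` fields, the thresholds, the function name) is invented for illustration; it's not Tesla's actual code, just the shape of the "flag frames where the robot and human disagree, or the human does something violent at speed" logic described above.

```python
# Hypothetical shadow-mode trigger. All names and thresholds are
# illustrative assumptions, not anything from Tesla's real stack.
from dataclasses import dataclass

@dataclass
class Frame:
    speed_mph: float          # current vehicle speed
    human_brake: float        # human brake pedal position, 0..1
    human_steer_rate: float   # steering wheel rate, deg/s
    shadow_brake: float       # brake the shadow policy would apply, 0..1

# Assumed thresholds, purely for illustration.
DISAGREE_THRESHOLD = 0.5      # brake-command disagreement worth logging
HARD_BRAKE = 0.8              # "slams on the brakes"
SAW_STEER = 90.0              # "saws away at the wheel", deg/s
MIN_SPEED = 25.0              # only care above some preset speed

def should_upload(f: Frame) -> bool:
    """Flag frames where the shadow policy strongly disagrees with the
    human, or the human makes a violent input at speed."""
    disagreement = abs(f.shadow_brake - f.human_brake) > DISAGREE_THRESHOLD
    violent_input = f.speed_mph > MIN_SPEED and (
        f.human_brake > HARD_BRAKE or abs(f.human_steer_rate) > SAW_STEER
    )
    return disagreement or violent_input
```

So a frame where the human brakes hard while the shadow policy would have coasted gets queued for upload, while ordinary agreement between human and robot generates nothing.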
For instance, I'm sure the first Tesla to see something like this
had questions. But after training, *all* fleet Teslas now recognize that trucks can sometimes tow trucks that tow trucks that tow trucks, and that it's not something to radically slow down for (i.e. *not* an accident up ahead).