I've gotten a couple of Aggressive Turning dings, even though I've been driving very slowly, taking a wider arc, and slowing for curves.
I got dinged for an aggressive turn yesterday, which I believe was when I had to do a U-turn to correct for Navigate on Autopilot taking the wrong exit. My overall score was unaffected, though.
The one thing you will most likely get "dinged" for is close following distance, which is only tracked above 50 mph.
I've seen several reports of people getting "dinged" for close following on trips that never exceeded 50 mph, so I suspect there's either a bug in the software on this point or it was misreported. Maybe the speed trigger is 5 mph or 10 mph, for instance, not 50 mph.
I requested the Full Self-Driving beta a few days ago. I have the latest version of the app (4.1.0-663...) and my car's software is up to date (2021.32.22). I've rebooted my phone, deleted and reinstalled the app, and rebooted the car. Still no Safety Score.
Is your phone an Apple iOS device or an Android device? With Android, at least, there are options to delete cached data and all data associated with an app. The latter would require you to log in again. Either might help. There may be something similar with iOS. In Android, go into Settings->Apps & Notifications->See All {x} Apps->Tesla->Storage; the options are called Clear Cache and Clear Storage. I'd try clearing the cache first, and if that doesn't help, clear all the storage.
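If you'd rather do the same thing from a computer with adb (USB debugging enabled), the "Clear Storage" step can be run from the command line. This is just a sketch, and the package name below is my assumption about what the Tesla app uses, so confirm it with the first command before clearing anything:

    # list installed packages and find the Tesla app's exact package name
    adb shell pm list packages | grep -i tesla

    # wipe the app's data and cache (equivalent to Clear Storage; you'll have to log in again)
    adb shell pm clear com.teslamotors.tesla

Either way, clearing storage signs you out of the app, so have your Tesla account credentials handy.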
Speaking more generally, I don't think that Tesla was looking for a perfect way to judge driver safety; they just wanted a way to control the trickle of new beta testers into a broader beta program using some sort of safe-driver-ish metric as the criterion. One might cynically say this was a CYA ploy, in case there are accidents and the company comes under criticism (or even is sued); or maybe they are genuinely trying to limit risk and acting responsibly by doing it this way. In either case, this isn't about fairly allocating the feature to paying customers. It is still beta software, after all -- and "more beta," as it were, than the current Autopilot features. Tesla needs more data before it will be safe to release FSD on city streets to everybody who can afford it. When (not if) people start using this feature the way some irresponsible people use Autopilot on highways (taking naps, driving drunk, etc.), innocent people who don't even own Teslas will be injured or killed. By limiting the beta-test pool, both in numbers and in terms of how responsible their testers are, Tesla is limiting the risk as they improve the software. It's important to remember that the version of the software that some of us will get in a week or so is NOT the finished product, but it will still be controlling machines that weigh 3,500 to 5,500 pounds (plus passengers and cargo). Beta-testing software to control such a device is an awesome responsibility.
For those who are concerned about an "unfair" ding because another driver cut you off, or you braked hard to avoid running over a kid who dashed into the street, or whatever, my advice is not to sweat it. Unless you drive just a few miles in the initial evaluation period, most dings are unlikely to do more than pull your score down a point or so. (As I understand it, Autopilot disengagements count for more, though; and of course, if you regularly drive through areas where you encounter frequent FCW false alarms, that will be a problem.) So if your score suffers slightly in a way you consider unfair, it likely won't affect when you get the FSD-on-city-streets feature by more than a few days -- and as I said, the point of the evaluation isn't to be fair to us in the first place; it's to minimize population risks (to pedestrians, bicyclists, etc.). The measures are obviously imperfect, but almost certainly better than a random lottery, if the goal is to pick beta testers who are likely to be responsible in their beta testing. Could Tesla have created a better metric? Probably; however, it's also unclear what metric they should be using if the goal is to minimize negative externalities (that is, property damage, injuries, and deaths) with a brand-new technology. They could spend years figuring out what metric to use, during which time the software would go untested. That's part of the conundrum of developing self-driving technology generally -- real-world testing involves risks to people who aren't in the vehicles under test, but in the long run fewer people will be killed if the technology is developed quickly (assuming its promises pan out), so quicker testing is beneficial. Tesla's admittedly imperfect beta-tester selection criteria are simply one manifestation of this paradox.