Def #1 .. nothing will happen until 10.2, which will arrive "late" (only in "Elon time"; in reality it will arrive when it's ready). Even then they won't send 10.2 out to new testers until there is some form of signup (even if it's not a full NDA).
I'm also dubious about the entire methodology here. I'm fine with Tesla selecting beta testers carefully, but I think the way they have done so is borderline absurd. For a car that is about to try to drive itself, we get a "test" that penalizes drivers for braking at a yellow light??? Seriously? All those smart guys working on new AI and all they can come up with is a half dozen dumb metrics?
The problem is, Tesla *think* they are selecting cautious drivers who will make careful testers (and cause no crashes that would create headlines and prompt the government to step in). However, since they published the "game" rules, they are going to get (a) people who fake being good drivers so they can get onto the beta, and (b) people who hardly ever drive the car (since they apparently don't care whether you log 5 or 500 miles of driving).
Also, it's not clear to me that a "granny" driver is a good beta tester anyway .. someone who genuinely drives in the manner needed to hold a 100% score is very likely slow and cautious precisely because they are a weak driver - just the kind of person who will not be alert enough to step in when the car does something wrong.
To start, let me mention that I almost always meet the ETAs in the nav, which are based on average traffic-flow speed. I probably pass about as many cars (those going below the speed limit) as pass me (often speeding - but just as often I'm following a car going close enough to the speed limit and don't care to change lanes). I've had to change very little about my typical driving - mostly just disengaging AP (which I usually use) to "count" following-distance positives that would be lost if AP were on.
I've been at 99 the whole time - 451 miles so far - mostly commuting in medium-heavy SF Bay Area traffic.
FCWs - incredibly bad to get - because one often means you weren't paying enough attention to avoid what it predicts. Note "often": I haven't had a single FCW counted against my score, though today I did get one while on Autopilot (thus masked/not counted) when an aggressive lane-switcher slammed into the lane next to me - he didn't come into my lane. You can avoid FCWs by always watching the road and, more importantly, by predicting and reacting to what's happening around you. I've seen many "would-be FCWs if I hadn't reacted" where I just let off the accelerator and the FCW never fired.
Hard braking - honestly, avoiding this saves tires, and it also shows you're reacting early. At a sudden yellow light, just slap AP on (with traffic-light detection enabled, as it should be the whole time during this "game") and it'll absorb the hard braking without a ding. I measured this today with an iOS app called "G-Force" ($3, worth it): at a glance it showed at least -0.35 G during that brake. AP had been paying attention the whole time and had already decided whether to run or stop for the light - you just have to hand it the controls in time. Remember:
AP masks everything. So use it wherever you think you can't avoid braking or following dings - I've never seen this not be the case. With this metric, the window to stay in is 0.1-0.3 g, and again it shows that you can train the appropriate reaction to a given situation: not too soft, not too hard. Admittedly, it's near impossible to stay inside this window (on the low end) when following a car in traffic to a stop, so... hey.
Aggressive turning - avoiding this just saves tires. It's also a great way for Tesla to gather stats on which speed maps to which curve, because there's a window of "good" turning: 0.2-0.4 G. If you take a turn inside that window, it knows that's a comfortable speed meant for that turn. Good training data.
Following distance - in dynamic traffic, it takes constant concentration to choose the throttle input that keeps your gap in the 1-3 second window scored as "good". Too far behind is bad training data; too close is also bad (as is current Autopilot logic itself, I might add). Again, more proof that you're better than the current system and can provide good training data.
AP strikeouts / forced disengagement - this is basically "game over", as you have shown yourself completely incompetent at observing the on-screen warnings about your AP usage. There are so many warnings that there's no valid excuse for ever getting one. There are two ways to strike out: ignoring the warnings, or exceeding AP's allowable speed by manually holding the accelerator. Neither should ever happen to anyone who wants into the FSD Beta.
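To make the windows above concrete, here's a minimal sketch in Python. The thresholds are just the ones quoted above (0.1-0.3 g braking, 0.2-0.4 G turning, 1-3 s following); the function names and labels are mine, and this is emphatically not Tesla's actual (unpublished) scoring formula - just a way to see the "windows" framing:

```python
# Hypothetical per-event graders using the windows quoted above.
# NOT Tesla's real Safety Score math; purely illustrative.

def grade_braking(g: float) -> str:
    """Hard-braking window quoted above: ~0.1-0.3 g is the zone to stay in."""
    if g < 0.1:
        return "soft"        # below the window: barely registers
    if g <= 0.3:
        return "good"        # inside the window: no ding
    return "hard"            # above ~0.3 g: counted as hard braking

def grade_turning(g: float) -> str:
    """Aggressive-turning window quoted above: ~0.2-0.4 G lateral."""
    if g < 0.2:
        return "soft"
    if g <= 0.4:
        return "good"        # comfortable speed for that curve
    return "aggressive"      # counted as aggressive turning

def grade_following(headway_s: float) -> str:
    """Following-distance window quoted above: 1-3 seconds of headway."""
    if headway_s < 1.0:
        return "too close"
    if headway_s <= 3.0:
        return "good"
    return "too far"
```

For example, the -0.35 G yellow-light brake I measured would land in `grade_braking(0.35)` → `"hard"`, which is exactly why handing it to AP (which masks the event) avoids the ding.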
If you take "Safety Scoreᴮᵉᵗᵃ" as just a name of the game (remember: it's not actually used for Tesla Insurance!), but deconstruct its real purpose as an FSD Beta proving ground mini-game, it makes a lot of sense.
(btw - what's in a name? You think I want to go slow - and more, that I actually do? lol. I'd go as fast as I could if I weren't constrained by physics: going well over the speed limit costs disproportionately more energy - for gas and EV alike - for little or no gain in minutes. I take my speed out elsewhere, off the road. Some day I ought to play around on a track, but I'm always daunted by the rules/regs/customs there. I just work hard on balancing speed with safety/courtesy on the road, and Tesla's scoring seems to agree with most of my approach!)
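That speed/energy aside can be put in rough numbers. A back-of-the-envelope sketch, assuming aerodynamic drag dominates at highway speed so per-mile energy grows roughly with the square of speed (quadratic, not literally exponential; rolling resistance and drivetrain losses ignored) - the 65/80 mph figures and function names are mine, purely for illustration:

```python
# Rough speed-vs-energy tradeoff, assuming aero-drag-dominated cruising:
# drag force ~ v^2, so energy per mile ~ v^2. Illustrative only.

def trip_minutes(miles: float, mph: float) -> float:
    """Steady-state travel time in minutes."""
    return miles / mph * 60.0

def relative_aero_energy_per_mile(mph: float, baseline_mph: float = 65.0) -> float:
    """Aero energy per mile relative to the baseline speed."""
    return (mph / baseline_mph) ** 2

# A 20-mile commute at 80 mph instead of 65 mph:
time_saved = trip_minutes(20, 65) - trip_minutes(20, 80)    # ~3.5 minutes
extra_energy = relative_aero_energy_per_mile(80) - 1.0      # ~51% more per mile
```

Under these assumptions, 15 mph over a 65 mph baseline buys about three and a half minutes on a 20-mile commute while spending roughly half again as much aero energy per mile - the "little or no gain in minutes" tradeoff in a nutshell.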