
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

No, the NTSB definitely has it out for Tesla, but they really can't do anything except write nasty reports. They can't enact new regulations or laws, and they can't force Tesla to do anything.
Neither can all the FUD the media pumps out as if Elon Musk were Trump.
I have my theories about why so many in the media and government are against Tesla, but they are just theories.
What I believe is a fact is that the bias is real.
 
Great, so that basically means you need to be the slowest driver on the road on NJ highways or avoid them altogether...

So true! Except the official Tesla page on driver scoring doesn’t say anything about speed limits. So you’ll have to be the least-tailgating driver in New Jersey instead. :)
 
It wasn't too subtle.. but the thing is, these things go way deeper into management.. 😔
All the engineers knew it was wrong and let management sign it off to cover their asses, but they did it anyway so as not to lose their jobs.

Edit:
A bit of explanation for my American friends here:
After WW II everyone said they "just followed orders". So this is never a valid excuse in court here. Even in the military you have to disobey orders if they go against human rights, etc.
As an engineer cheating on emissions, you can be held personally(!) accountable for it, not just the company. And to go to your boss and tell him "I want it in writing that I have to do this and that you will take the blame", and then actually get such things signed.. that is a whole other level.
I don't know how those bosses covered their asses.. maybe changed companies and destroyed records? I don't know. But there surely were legal loopholes..
In Germany you always have to find the one person responsible. If you can't, they get away with it.
A famous lawyer once told a small story: in a big company, one person in accounting managed the "not so legal" accounts, and made sure everyone on the floor knew the password. In court everyone testified that they COULD have done it, and no one said anything more. The individual could not be identified. The company got a small fine, but no one went to prison.

In the States you got way more money out of VW than we did.. although I think we have more polluters here... -.-
I greatly appreciate your posts. Very honest and not easy to do sometimes.

Yes, the assignment of responsibility is critical. It's also why the entire Dieselgate scandal was such a farce: everyone at Bosch, the regulators at the EU, and all the top managers at MB, BMW, and VW knew. They were all actively gaming the system. No country or organization is perfect, but when they do become abusive they should be held accountable. FYI, the pain is not over for VW in the US; they still face huge state cases that were not covered in the settlement with the federal regulators. Frankly, the EU regulators and the politicians who wrote the air-pollution regs should have been in jail.
 
Strange. After the update and opting in to the FSD beta, my Supercharging session doesn't increment the price. I don't have free Supercharging.
 
So I just went on a short drive -- first since updating the car software and phone app and clicking the Beta Button. I couldn't tell any difference. It didn't show the FSD visualization despite that option being selected on the Autopilot screen, it didn't give me real-time feedback on how to be a better driver, and I didn't see any driving score or information anywhere in the car or in the phone app. It definitely didn't show any special information when the drive ended, though I have plenty of signal. It's like it didn't take? I'm confused.

Edit: I see there's supposed to be a new safety score entry in the app -- it doesn't show for me though I do have the app version 4.1.0 for iOS.
 
So I just went on a short drive -- first since updating the car software and phone app and clicking the Beta Button. I couldn't tell any difference. It didn't show the FSD visualization despite that option being selected on the Autopilot screen, it didn't give me real-time feedback on how to be a better driver, and I didn't see any driving score or information anywhere in the car or in the phone app. It definitely didn't show any special information when the drive ended, though I have plenty of signal. It's like it didn't take? I'm confused.

Edit: I see there's supposed to be a new safety score entry in the app -- it doesn't show for me though I do have the app version 4.1.0 for iOS.
I read the software does not arrive until tomorrow.
 
So I just went on a short drive -- first since updating the car software and phone app and clicking the Beta Button. I couldn't tell any difference. It didn't show the FSD visualization despite that option being selected on the Autopilot screen, it didn't give me real-time feedback on how to be a better driver, and I didn't see any driving score or information anywhere in the car or in the phone app. It definitely didn't show any special information when the drive ended, though I have plenty of signal. It's like it didn't take? I'm confused.

Edit: I see there's supposed to be a new safety score entry in the app -- it doesn't show for me though I do have the app version 4.1.0 for iOS.

Tesla's safety score webpage states that it will provide your safety score daily.

 
1) Different jurisdictions or nations will likely require testing to be done within their own road systems. They may even require separate NN fitting and hard coding. So if 400M to 800M miles are required per jurisdiction, you could easily need on the order of 6B miles across some 10 or so different jurisdictions. I doubt that Musk would be construing this as one giant test in which Tesla must demonstrate zero fatalities over the course of 6B miles. Indeed, as your own calculation has shown, at 0.12 deaths per 100M miles (implying 7.2 expected fatalities), Tesla would have less than a 0.1% chance of going 6B miles with 0 fatalities. In practical terms, this is a nigh-impossible test, doomed to failure. Maybe it would help to follow the implications of setting 7.2 fatalities as your proposed null hypothesis. The test statistic, the actual number of fatalities over 6B miles, is Poisson distributed with mean 7.2 under the null hypothesis. It also has a standard deviation of 2.68 = sqrt(7.2). With probability of about 95%, the test statistic will be between 3 and 13 fatalities. So I think the test you are really trying to set up rejects this null hypothesis if there are 2 or fewer deaths, or 14 or more. In this case, we are talking about a two-sided test. So it is not really clear why Tesla would ever need to show that its fatality rate is so much lower than 0.12 per 100M miles that they'd be able to reject this 0.12 rate. If Tesla merely wants to demonstrate they are safer than a human driver, it would be better just to estimate the rate and provide a confidence interval. The hypothesis testing framework is not really the best framing for that public communications objective. Rather, hypothesis testing is relevant when getting approval from a governing body; moreover, that body is likely to state what rate to test against. That is, the regulator sets the null hypothesis, while Tesla needs to supply sufficient information to clear that regulatory hurdle.
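To make that arithmetic concrete, here is a quick stdlib-Python sketch of the Poisson calculation above (the 0.12 rate, the 6B miles, and the 3-to-13 interval are the numbers from the post, nothing more):

```python
from math import exp, factorial

def poisson_pmf(k, mu):
    """P(X = k) for a Poisson variable with mean mu."""
    return exp(-mu) * mu**k / factorial(k)

def poisson_cdf(k, mu):
    """P(X <= k)."""
    return sum(poisson_pmf(i, mu) for i in range(k + 1))

# 0.12 fatalities per 100M miles over 6B miles => mean of 7.2
mu = 0.12 * 6e9 / 100e6

print(poisson_pmf(0, mu))       # chance of zero fatalities, ~0.00075, i.e. < 0.1%
print(poisson_cdf(2, mu))       # P(2 or fewer), ~0.025
print(1 - poisson_cdf(13, mu))  # P(14 or more), ~0.016
```

The last two lines confirm the roughly 95% chance of the count landing between 3 and 13 fatalities.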

2) The weighting and stratification I was writing about was in reference to testing for regulatory approval. Collecting data for training FSD is actually a much more complex task to do well. Certainly weighting and stratification can play a role in training a model, but that was not my point. For training, you want to be sure that you have broad data on the full scope of driving conditions, but you also want to oversample certain data where critical and rare events happen. For example, you will likely want to oversample collisions, especially collisions involving injuries and fatalities. Tesla is even using simulations of critical collisions to augment the data and revisit the scenario under varying conditions. Basically, Tesla wants to learn as much as possible from each critical event so that FSD will never make the same mistake again in such scenarios. These are the sorts of considerations that go into curating training data.
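As a toy sketch of the oversampling idea (everything here — the clip structure, the 100x weight, the 0.1% critical rate — is made up for illustration, not anything Tesla has disclosed):

```python
import random

# Toy oversampling of rare critical events when drawing a training batch:
# each clip gets a sampling weight, and "critical" clips (collisions etc.)
# are weighted far above ordinary driving so the model sees them much more
# often than their raw frequency would allow.
clips = [{"id": i, "critical": i % 1000 == 0} for i in range(10_000)]  # 0.1% critical
weights = [100.0 if c["critical"] else 1.0 for c in clips]

random.seed(0)
batch = random.choices(clips, weights=weights, k=256)
share = sum(c["critical"] for c in batch) / len(batch)
print(f"critical share in batch: {share:.2%}")  # roughly 9% vs 0.1% raw frequency
```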

Regulators will likely be interested in how Tesla curates training data and can analyze how representative the coverage is, the quality of the data, and many other issues. This is data review, but it is not road testing. For road testing, the regulator will likely want data on where and when the miles were driven and much more detailed information on critical events: collisions, injuries, fatalities, etc. They will analyze the data for representativeness and consider any weighting methodology deployed. The test data may well be segmented. For example, Tesla may require a certain number of beta testers from each state. State or city segmentation helps assure representativeness. Time of day, day of year, and weather conditions are other factors that may call for weighting or segmentation. So the regulator will need to be persuaded that the test exposure miles are sufficiently representative and have adequate coverage. They will also want to analyze the critical event data with special attention to any factors that may reveal a weakness or flaw in the driving system.

But after all that work, the regulators are confronted with the final counts of each type of critical event. The regulators will want the data to show that the true frequencies of certain outcomes are far enough below a critical threshold that random error can be ruled out. This is where hypothesis testing comes in. The regulator may say to Tesla: you need to demonstrate that your fatality rate is below 1.2 per 100M miles. That's the null hypothesis. And Tesla might believe that FSD is actually operating at or below 0.12 per 100M. So 0.12 is the alternative hypothesis they want to optimize around. The regulator doesn't care what the alternative hypothesis is. But for Tesla it gives them the basis for planning how many test miles of exposure they will want to accumulate before they present their results to the regulators. The power of the test is extremely important to Tesla, as they want a high probability of being able to submit enough data to pass the test, assuming their alternative hypothesis holds. But it is the regulator who cares about the significance of the test, as they want a low likelihood of being fooled by statistical error. Type I error is the error the regulator wants to avoid, while Type II error is the error the regulated entity wants to avoid.
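The planning logic described here can be sketched as follows, assuming a simple one-sided Poisson test (the 1.2 and 0.12 rates are the hypothetical numbers from the post, and `plan_test` is my own illustrative helper, not any regulator's actual procedure):

```python
from math import exp, factorial

def poisson_cdf(k, mu):
    """P(X <= k) for a Poisson variable with mean mu."""
    return sum(exp(-mu) * mu**i / factorial(i) for i in range(k + 1))

def plan_test(miles, null_rate, alt_rate, alpha=0.05):
    """H0: true rate >= null_rate (per 100M miles).
    Reject H0 -- i.e. clear the regulatory hurdle -- when the observed
    fatality count is at or below the largest cutoff c that keeps the
    Type I error at or below alpha. Power is computed under alt_rate."""
    mu0 = null_rate * miles / 100e6   # expected count under the null
    mu1 = alt_rate * miles / 100e6    # expected count under the alternative
    c = -1
    while poisson_cdf(c + 1, mu0) <= alpha:
        c += 1
    power = poisson_cdf(c, mu1) if c >= 0 else 0.0
    return c, power

for miles in (4e8, 8e8):
    c, power = plan_test(miles, 1.2, 0.12)
    print(f"{miles/1e6:.0f}M miles: pass with <= {c} fatalities, power ~ {power:.2f}")
```

At 400M miles this sketch passes with at most 1 fatality at power around 0.92; at 800M miles the cutoff rises to 4 and the power is essentially 1 — which is exactly why the regulated party cares so much about the exposure target.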

Now suppose that Tesla makes significant advances in training its FSD neural nets, enough to convince them that their fatality rate is at or below 0.06. In this situation, Tesla could choose 0.06 as their alternative hypothesis. What happens here is that for the same amount of test exposure, the power of their test goes up: they have a higher chance of passing the regulator's test. Or put another way, they could proceed with less data and still have sufficient power. The implication is that Tesla's choice of alternative hypothesis drives how much exposure data they need. This is why I put the emphasis on Tesla engineering a better FSD system. The better it truly is, the less exposure data will be required to reject the null hypothesis posed by the regulator. This means Tesla can get through regulatory testing faster and cheaper. So how does Tesla engineer a safer FSD system? Primarily, it must do a damn careful job of curating training data. Any lack of vital experience in the training data exposes Tesla to incremental risk in beta testing (and full public release).
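Extending the same one-sided Poisson sketch, one can search for the smallest exposure that reaches a target power, which shows how a stronger alternative hypothesis (0.06 vs 0.12) shrinks the required miles. Again, all rates are the hypothetical numbers from the post, and `miles_needed` is my own helper:

```python
from math import exp, factorial

def poisson_cdf(k, mu):
    return sum(exp(-mu) * mu**i / factorial(i) for i in range(k + 1))

def power(miles, null_rate, alt_rate, alpha=0.05):
    """Power of a one-sided Poisson test of H0: rate >= null_rate (per 100M miles)."""
    mu0 = null_rate * miles / 100e6
    c = -1
    while poisson_cdf(c + 1, mu0) <= alpha:
        c += 1
    return poisson_cdf(c, alt_rate * miles / 100e6) if c >= 0 else 0.0

def miles_needed(null_rate, alt_rate, target, step=5e7):
    """Smallest exposure, in 50M-mile steps, whose power reaches the target."""
    miles = step
    while power(miles, null_rate, alt_rate) < target:
        miles += step
    return miles

print(miles_needed(1.2, 0.12, 0.95) / 1e6)  # 550.0 -- if the true rate is 0.12
print(miles_needed(1.2, 0.06, 0.95) / 1e6)  # 400.0 -- if the true rate is 0.06
```

So in this toy setup, halving the true fatality rate cuts the exposure needed for 95% power from 550M to 400M miles.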

One other issue: suppose the regulators will be testing multiple outcomes. Say they require demonstration that the fatality rate is significantly below 1.2 per 100M miles, bicycle collisions are significantly below 10 cases per 100M miles, and pedestrian collisions are significantly below 6 cases per 100M miles. Now the power calculations become much more difficult. You need enough miles of exposure to have a very good chance of passing all three tests. So this multi-test situation could push Tesla to do more test miles than the fatality test alone would call for. This could also help explain question 1. Some jurisdictions might require multiple outcomes to be tested, and that could drive up the required sample size. Indeed, it looked like the NHTSA was going on a fishing expedition, just looking for any outcome that might be higher than average. If a regulator aggressively pursues finding any fault, they will likely succeed, and no amount of data would have an adequate chance of passing all the tests. But that is veering off into a hostile political situation, not good statistical or regulatory practice. At any rate, the point is that if regulators will be testing many endpoints, that can drive up the amount of exposure miles needed for regulatory approval. But again, even if that is what the regulators demand, the best strategy for Tesla is simply to work on improving FSD along every conceivable test dimension, and to do that Tesla will need to curate substantial training data on every conceivable misstep a driver could make.
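If the endpoints are tested separately, the chance of passing all of them is (roughly, treating them as independent) the product of the individual powers. A sketch, using the thresholds from the post and made-up "true" rates for the two collision endpoints:

```python
from math import exp, factorial

def poisson_cdf(k, mu):
    return sum(exp(-mu) * mu**i / factorial(i) for i in range(k + 1))

def pass_prob(miles, null_rate, alt_rate, alpha=0.05):
    """Probability of passing one endpoint's one-sided Poisson test."""
    mu0 = null_rate * miles / 100e6
    mu1 = alt_rate * miles / 100e6
    c = -1
    while poisson_cdf(c + 1, mu0) <= alpha:
        c += 1
    return poisson_cdf(c, mu1) if c >= 0 else 0.0

# (regulatory threshold, assumed true rate), both per 100M miles;
# the true rates are invented for illustration.
endpoints = {
    "fatalities":            (1.2, 0.12),
    "bicycle collisions":    (10.0, 1.0),
    "pedestrian collisions": (6.0, 0.6),
}

miles = 4e8
probs = {name: pass_prob(miles, h0, h1) for name, (h0, h1) in endpoints.items()}
overall = 1.0
for name, p in probs.items():
    print(f"{name}: {p:.3f}")
    overall *= p  # treating endpoints as independent -- a simplification
print(f"chance of passing all three: {overall:.3f}")
```

In this toy case the collision endpoints are easy to pass and the fatality endpoint dominates; add a marginal endpoint, though, and the joint pass probability, and hence the required miles, moves against the applicant.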

Just to get through regulatory approval, to pass all the stated and unstated tests, Tesla may well need FSD to be 10 times better than human drivers.
Great post and lots of detail.

But unless I missed it, you didn’t mention the hardest part of receiving regulatory approval…

Intervention analysis. E.g., FSD Beta has had 2k drivers over a year with zero accidents. And yet from just 10 people posting on YouTube we have seen lots of examples of heading towards concrete pillars, heading into traffic, and pulling out from behind parked cars onto highways in spite of oncoming traffic… Very obviously, had the driver not intervened, there would have been lots of accidents.

Looking at accidents and fatalities can only show the safety of human + FSD. To see the safety of FSD alone, every single intervention has to be analyzed to understand whether it avoided a likely accident, was just frustration, or was just because the driver wanted to take over for fun…
 
The only bummer for me in that list is that I reasonably often get a forward collision warning when a car in front of me is turning off the road, and I can tell it will be out of my way by the time I get there (though prepared to give it some extra space if needed or brake if it suddenly stops) yet the Tesla flags a collision warning anyway. I keep hoping the vision stack will take over from radar and recognize that it’s not a generic obstacle but a car departing the roadway!

I would have thought speed relative to the limit or running red lights would be factors… curious that they are not, but I guess that's what the data analysis told them.
Yes. It is very frustrating.

I do wonder if there’s latency in the system exacerbating this problem. There’s at least latency in the visualization. There’s also latency in reaction time on autopilot when the vehicle in front slows down. It feels like 0.2-0.3s or so to me.
 
Acceleration is ok!

With this in place, I wish Tesla Insurance would expand quickly to the other states.
Waiting in VA to switch as soon as it becomes available.
 
With this in place, I wish Tesla Insurance would expand quickly to the other states.
Waiting in VA to switch as soon as it becomes available.
As a current P100D owner and very soon Plaid owner, I’m honestly thinking I might be better off with an insurance company that’s blissfully unaware of my driving habits beyond the conventional measures of age, accidents and tickets! 😬
 
This is an investor forum and people are different in both interests and means. I can only afford to invest in one company, and watch it like a hawk. [That is why I go non-linear when Lynch doesn't understand what slow boats from China do to inventory turns.]

For some, there may be an interest in other investments, so here is an old copy [1987 creation] of Elon's grocery [shopping] list. It is a list Porter's team put together of great industries and the countries where they are located before writing The Competitive Advantage of Nations book. If Tesla needs something that is the best in the world, they can scan this list as a first order search step. Then hire individuals or buy companies or components as makes sense.

I circled a few places that Tesla has been that are known:
  1. Automation in Germany
  2. Semiconductors in Korea
  3. Tiles in Italy [I would have guessed die casting would be there for Italy, but maybe it was grouped as factory automation?]
  4. Software in the USA
Someone in this forum may be able to anticipate needs, and possibly predict the future, by using what I conjecture to be the same starting point shopping list. [This conjecture is based on the Italian SpaceX tile engineer that spoke at the recent Italian Tech conference.]



By the way, I do not find arrogance in the title play on The Wealth of Nations, as I think in today's world, the Porter version, The Competitive Advantage of Nations, is as helpful as the older Adam Smith version.

I hope this list of who is good at what is helpful to someone here.


Edit: A while back, with the acquisition of Grohmann in 2017 (Tesla Grohmann Automation - Wikipedia), it was clear that Elon was combining the competitive advantages of several nations into one company, Tesla. That he may be using this list as a starting point to combine all the world's competitive advantages into a single company only dawned on me yesterday. It was also clear that he was going in that direction with Keke Rosberg's kid, Nico, being associated with the Model 3 driving dynamics. Nico Rosberg - Wikipedia
 
I don't know why people laugh and disagree here.
He has a fair point.

One can use Tesla's chip to generate the needed top-down dataset for one's own cameras.
Heck, if I had to do that, I would do it this way (note: I am an ML engineer!). In real life there are seldom consequences for cheating your way out of work. Quite the opposite: you get paid good money to do that.
Congratulations, you have discovered how to copy a black box! Now try taking that to a regulator and having to show your work, like @jhm described.

For many technologies, there are shortcuts. But I am becoming convinced that for machine learning that is used widely in safety critical applications, you have to perform every single calculation. And to do that, you have to have a monumental mountain of data. And to do it in a timely fashion, you have to have computer systems that are too expensive for all but the largest companies. And to make those systems within a few generations of the state-of-the-art, you have to be a tech company.

Just look at OpenAI. They had to sell their souls for compute.
 
A few items of note:

1. I went for a short drive and it took maybe an hour after for the safety score to show on my app. I also restarted the app a couple [hundred] times.

2. As previously reported everything is the same from a user perspective.

3. I’m VERY curious if putting my P3 into track mode makes it ignore the driving on that particular drive. Maybe on my next outing I’ll give it a test. While I was happy that it doesn’t ding acceleration, I was saddened to see it dings you for carving corners. That’s one of my many pleasures in life. It already dinged me for a “hard” turn while I was driving full-on DMV driver’s test style.
Track mode…engage.
 
I wonder what Forced Autopilot Disengagements refers to. Sometimes I have to yank the steering wheel and turn Autopilot off that way because it isn't doing what I want (like a lane change, or getting too close to a neighboring car on a curvy road). Also, I get forward collision warnings coming up my street on some left turns because it reacts to the parked cars on the right (the car doesn't seem to understand that I'm in a turn and won't hit them)...