Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Their approach seems endlessly flexible. That doesn't mean "ready soon", mind you. But it means that it's a practical approach.
What I'd like to find out is: how are they prioritizing which disengagement-causing scenarios to deal with? Is it all manual? Can they automatically find out which scenarios cause the most disengagements?

E.g., if they get 1M disengagements a day, obviously they can't deal with them manually. In fact they probably can't deal with more than a hundred a day.

Once they find the top problematic scenarios, do they have to manually select the training set? That wouldn't be as bad as manually finding the important scenarios, but it's still going to dramatically slow down the bug-fix rate. Then they have to figure out whether the heuristics need an update, which would be manual.

In other words, how much of this whole march of the 9s is automated vs. manual? The more manual it is, the less exponential the pace of solution.

Of course, currently they are just gathering scenarios / training data to be feature complete. I'm really curious what that set of features is … but that's a different story.
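To make the question concrete, here's a toy sketch of what automatic prioritization could look like — bucket the small disengagement reports by a scenario tag and surface only the top offenders for human triage. All field names here are invented for illustration, not anything from Tesla:

```python
from collections import Counter

# Hypothetical disengagement reports: each is a small dict of
# "situational" metadata. The "scenario" field is an assumption.
reports = [
    {"scenario": "unprotected_left"},
    {"scenario": "construction_zone"},
    {"scenario": "unprotected_left"},
    {"scenario": "cut_in"},
    {"scenario": "unprotected_left"},
]

def top_scenarios(reports, n=2):
    """Rank scenarios by disengagement count, so only the top
    offenders need manual attention (not all 1M/day)."""
    counts = Counter(r["scenario"] for r in reports)
    return counts.most_common(n)

print(top_scenarios(reports, n=1))
```

If something like this runs fleet-side or server-side, the "which scenarios matter most" step is fully automatic and the manual effort starts only at the top of the ranked list.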
 
Once again we have reporters making up, or at least implying, causality, to the detriment of reader understanding. It's the Butterfly Effect used to explain TSLA stock volatility. Someone sneezes in Seattle ten seconds before Timmy's pet gerbil dies in Okeechobee, Florida; therefore, the sneeze caused the gerbil to buy the farm. Tesla's Q1 suffers a loss of $702 million, and clearly that is caused, as far as the NYT is concerned, by the fact that "deliveries of cars and solar systems tumbled." No need to mention the $900+ million of debt Tesla repaid on time, as that would have changed the story, causing fewer clicks and less fear, uncertainty, and doubt. And we can't have that, can we?

In all fairness, the only effect the $920mm debt repayment had on the $702mm loss was a slight reduction from lower interest payments. It’s been pointed out a few times here, but paying off debt doesn’t impact GAAP profit/loss. The effect of the payment was on total cash, which decreased by ~$1.5 billion. Take out the debt payment and they effectively lost $580mm in cash.

The real story is just decreased S/X sales due to changing the lines and not producing/selling enough 3’s to make up the difference (and some bad logistics in getting international 3’s where they needed to go).
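A quick back-of-the-envelope check of the cash math quoted above (figures in $ millions, exactly as stated in the posts):

```python
# GAAP loss is unaffected by debt repayment, so the only number the
# $920mm payment moves is total cash, not the $702mm loss.
gaap_loss = 702            # Q1 GAAP loss
debt_repaid = 920          # convertible note repaid at maturity
total_cash_decline = 1500  # approximate quarter-over-quarter cash drop

# Strip out the one-time debt payment to see the underlying cash burn:
cash_burn_ex_debt = total_cash_decline - debt_repaid
print(cash_burn_ex_debt)  # 580 — matches the ~$580mm figure above
```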
 
What I'd like to find out is: how are they prioritizing which disengagement-causing scenarios to deal with? Is it all manual? Can they automatically find out which scenarios cause the most disengagements?

E.g., if they get 1M disengagements a day, obviously they can't deal with them manually. In fact they probably can't deal with more than a hundred a day.

Once they find the top problematic scenarios, do they have to manually select the training set? That wouldn't be as bad as manually finding the important scenarios, but it's still going to dramatically slow down the bug-fix rate. Then they have to figure out whether the heuristics need an update, which would be manual.

In other words, how much of this whole march of the 9s is automated vs. manual? The more manual it is, the less exponential the pace of solution.

Of course, currently they are just gathering scenarios / training data to be feature complete. I'm really curious what that set of features is … but that's a different story.

From the presentation, all object classification is manually labeled, and all path planning / behavior recognition / controls (note that controls are only in their dev environment, not yet in customer cars) are automatically labeled. Currently the automatic labels come from mimicking driver behavior (with some special weighting based on safety), but they’ll likely move to straight reinforcement learning at some point.
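For anyone wondering what "automatic labels from mimicking driver behavior" means mechanically, here's a minimal behavior-cloning sketch. All field names and the safety-weighting scheme are my own invention for illustration, not Tesla's actual pipeline:

```python
def make_training_pairs(log, safety_weight=5.0):
    """Turn a drive log into (features, label, weight) triples.
    No human labeler needed: the driver's own action in each frame
    is the training target (behavior cloning), with extra weight
    on safety-relevant frames."""
    pairs = []
    for frame in log:
        weight = safety_weight if frame["near_collision"] else 1.0
        pairs.append((frame["sensors"], frame["driver_action"], weight))
    return pairs

# Toy drive log with invented fields:
log = [
    {"sensors": [0.1, 0.9], "driver_action": "brake", "near_collision": True},
    {"sensors": [0.4, 0.2], "driver_action": "steady", "near_collision": False},
]
print(make_training_pairs(log))
```

The appeal is scale: every mile a human drives generates labels for free, which is why this part of the pipeline can be automatic while object classification still needs manual labeling.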
 
I'm in El Paso, finishing up a 5,500-mile road trip across 12 states. Would love to know how Texas traffic lights are different from others.

TSLA... hopefully we have seen the bottom of this recent fall. 376 to 239 is a big drop. Wasn't that far from the Spiegel Bottom!!!

Do you live in El Paso? We relocated recently. If you’re around when I fly back into town I’d be happy to provide a few Ludicrous launches for a fellow TMC member.
 
Long-time reader here, but first post from me, as I just received this email that the Model 3 has arrived in Ireland - but no test drives yet, apparently. Incredibly grateful to this forum for the valuable insights which helped inform my TSLA purchasing decisions. The depth of knowledge from the usual suspects here [contrary to popular belief, my assumption is that Fact Checking is actually a team of people rather than just the one AI :)] is astounding and much appreciated, and really helps counter the horrible floods of anti-TSLA everywhere you look.

so thanks all, from the Emerald Isle

 
I'm in El Paso, finishing up a 5,500-mile road trip across 12 states. Would love to know how Texas traffic lights are different from others.

TSLA... hopefully we have seen the bottom of this recent fall. 376 to 239 is a big drop. Wasn't that far from the Spiegel Bottom!!!

As mentioned in prior posts, sometimes 5-6 traffic lights are hung on a single wire/cable crisscrossing the intersection, facing different parts of different streets. I often get confused when driving towards such intersections because those lights are pointed at different angles. Add in the wind factor, and those lights are going to be shaking and turning during bad weather. In other areas of Houston, lights are hung sideways.
 
From the presentation, all object classification is manually labeled, and all path planning / behavior recognition / controls (note that controls are only in their dev environment, not yet in customer cars) are automatically labeled. Currently the automatic labels come from mimicking driver behavior (with some special weighting based on safety), but they’ll likely move to straight reinforcement learning at some point.
Yes - this is the second part, which is quite manual. Hopefully they are building tooling to make it fast.

I'm also curious about the first part, i.e. how you pick the scenarios to train on. That can take up a lot of time & effort - and they may also pick scenarios which are not causing the most disengagements.
 
What I'd like to find out is: how are they prioritizing which disengagement-causing scenarios to deal with? Is it all manual? Can they automatically find out which scenarios cause the most disengagements?

E.g., if they get 1M disengagements a day, obviously they can't deal with them manually. In fact they probably can't deal with more than a hundred a day.

Once they find the top problematic scenarios, do they have to manually select the training set? That wouldn't be as bad as manually finding the important scenarios, but it's still going to dramatically slow down the bug-fix rate. Then they have to figure out whether the heuristics need an update, which would be manual.

In other words, how much of this whole march of the 9s is automated vs. manual? The more manual it is, the less exponential the pace of solution.

Of course, currently they are just gathering scenarios / training data to be feature complete. I'm really curious what that set of features is … but that's a different story.

Disengagements create only a brief report - no usable training data (e.g. no imagery or whatnot), just a couple kilobytes of "situational" data. They're designed to help Tesla figure out what sort of situations are the most common and problematic reasons for disengagement. Then Tesla launches a campaign with "triggers" (which can be very complex) to gather potential training data for dealing with their most problematic situations. The training data is filtered down to examples that illustrate nuances - including things that should be both positive and negative detections - rather than being included as a flood, which can overreinforce "normal" situations and lead to the exclusion of edge cases. These are manually labeled and then added to the dataset.

Source: a combination of Green's analysis and the Tesla autonomy presentation.
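A toy sketch of that flow, with invented names throughout (the real triggers and filters are far more complex): tiny disengagement reports point at the problem, a trigger predicate requests full clips, and the collected clips are downsampled so common "normal" cases don't drown out the rare edge cases before manual labeling.

```python
import random

def trigger(report):
    # A campaign trigger can be arbitrarily complex; this one just
    # targets a single hypothetical scenario tag.
    return report["scenario"] == "unprotected_left"

def select_for_labeling(clips, keep_normal=0.1, seed=0):
    """Keep every edge-case clip, but only a small random sample of
    normal ones, to avoid over-reinforcing 'normal' situations."""
    rng = random.Random(seed)
    kept = []
    for clip in clips:
        if clip["edge_case"] or rng.random() < keep_normal:
            kept.append(clip)
    return kept

clips = [{"edge_case": True}, {"edge_case": False}, {"edge_case": False}]
print(select_for_labeling(clips, keep_normal=0.0))  # only the edge case survives
```

The point of the filtering step is exactly what Green's analysis suggests: the flood of ordinary examples is cheap to collect but actively harmful to the dataset balance, so only the illustrative ones reach the human labelers.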
 
Los Angeles sets dramatic new goals for electric cars and clean buildings | HeraldNet.com

LA Mayor Eric Garcetti is proposing requiring all autonomous vehicles to be pure BEVs.
Wow, what a nice idea and imagine if implemented how it would look on Waymo(especially if others follow it):
The plan also mentions self-driving cars, saying the city should “ensure all autonomous vehicles (AVs) used for sharing services will be electric by 2021.”

“Most people will be primarily getting into autonomous vehicles if we look 20, 30 years out. If we mandate that autonomous vehicles have to be electric, then we will move people into electric vehicles,” he said.
Pure gold :D
 
Should be good. Inventory M3's are very tight.
Just 1 within 200 miles of Atlanta, and it's an LR AWD.

How important is the "within 200 miles of Atlanta"? How many stores outside of Atlanta are in that 200-mile radius? ZERO.

So, really, there's one M3 available in Atlanta. Throwing in the "within 200 miles of Atlanta" makes it sound as if there are several other stores all around Atlanta that are sold out as well.
 
Maybe kick it. Mine works fine and not new.

Hmm I wonder if you’re talking about different things... mine shows cars pretty accurately whenever actually driving. Only when I’m completely stopped with cars(or anything else) surrounding me do I get cars/RVs/trucks dancing(this happens in my garage sometimes as well, before I leave it). Kinda weird but ultimately harmless as it only shows up when I’m not actively driving anyway.