
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Many of us have seen this show before. Spring of 2019. Spring of 2020. TSLA's share price was collapsing: in 2019 from 300+ to 180 (pre-split), in 2020 from 960 to 350 (pre-split). Many new investors were selling, often at a loss. The stock was toxic, but the believers were hodling and buying. And reaped the rewards.

During the 2019 drop I doubled down and went from 250 to 1000 shares. That was my best financial decision ever.

During the 2020 drop I sinned against the hodl-principles. I didn't trust the market after Covid started spreading in Europe and sold at 800. I got back in at 450 and 500.

During this 2021 drop I have sinned again. I got out two weeks ago at 762, securing most of my profits of the last two years and making sure I have what I need to pay for our new penthouse in cash (the build will start soon), plus a substantial financial buffer. I'll continue investing with 40% of the proceeds of the sale and have slowly started buying back in during this drop: at 700 and 650. I will continue to do so if we drop further; I expect an order to be executed soon. I'm convinced that one day the stock will go back up and reach new heights, if not within a few months then within a few years.

I know I'm a 'sinner' and I expect some scorn, but I needed to secure my financial future against a black swan event. Still, I fully expect to have a bigger investment account in a few years' time than when I sold last month. Of course, holding through this drop instead of taking money off the table would make it even bigger, but I felt I needed to be prudent. And I don't think the extra money would make a difference in my life.
 
More chips for the dip? I salute those of you who are buying it.

 
All I know is that when it turns around, the climb back to $901 will pump the account crazy fast.
Even if it takes a whole month, I'm patient.

A WHOLE month? That's outrageous.
The way I see it: more employment = more people with money to buy nice things. Tesla = a nice thing.

Side note: Employment gains today were mostly in the lower wage hospitality/leisure/entertainment group. That's where it starts!!
 
I thought I heard Douma say that they are at the self-supervised stage but not at the unsupervised stage. Interesting nevertheless. I'll re-watch.


To my knowledge the current (non-beta) system does not "learn" on-car in any way (see Greentheonly's deep dive into how much more limited "shadow" mode is than many think).

Changes in behavior or programming only happen when Tesla makes changes to the master code and pushes it out to the fleet.

AFAIK that's ALSO true of the beta program (though a few folks have thought otherwise because the car acted differently sometimes; there were external explanations for that, and the same happens with the current SW when people try under different conditions and get different results).


The biggest "improvement" from a less-manual-work-needed perspective in the 4D video approach AFAIK is drastically reducing the amount of manual human labeling...you don't have to manually label each thing in each frame if you can get the system to figure out the thing you manually labeled in frame 1 is the SAME thing it sees in frame 2 just moved slightly in time...and same for future frames.... and even better if you tie all the video streams together it understands an object is the same object even when it changes cameras.

Labeled training video is what they use to, well, train the NNs (and that's what Dojo is intended for: to let them train against massively more video, massively faster).
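For anyone wondering what "train against labeled video" means mechanically: a supervised training step just nudges the network toward agreeing with the labels, so more labeled video (and faster hardware to churn through it) directly means more of these steps. This toy PyTorch snippet is purely illustrative; the model, shapes, and data are placeholders with no relation to Tesla's actual networks:

```python
# Toy supervised training step: penalize disagreement between the model's
# predictions and the (human or auto-generated) labels, then update weights.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 8))  # toy classifier
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

frames = torch.randn(16, 3, 32, 32)   # a batch of (fake) video frames
labels = torch.randint(0, 8, (16,))   # the label attached to each frame

opt.zero_grad()
loss = loss_fn(model(frames), labels)  # how far off the labels are we?
loss.backward()
opt.step()                             # one small step toward the labels
```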

This should also get rid of that annoying thing where surrounding vehicles jump around on the screen, since the car would understand that the truck it saw in one camera is the same truck it now sees in a different camera.
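The cross-camera part boils down to putting every camera's estimate of an object's position into one shared vehicle frame and merging detections that land close together. Again a hedged sketch; the transforms, distance threshold, and structure are my assumptions, not Tesla's implementation:

```python
# Toy cross-camera identity: map detections into a shared vehicle frame and
# greedily merge points closer than max_dist, so the same truck seen by two
# cameras becomes one object instead of two jumping ones.
import numpy as np

def to_vehicle_frame(point_cam, rotation, translation):
    """Map a 3D point from a camera's coordinates into the vehicle frame,
    using that camera's known mounting rotation and translation."""
    return rotation @ np.asarray(point_cam) + translation

def merge_detections(detections, max_dist=1.5):
    """detections: list of (camera_name, vehicle-frame xyz). Points within
    max_dist meters of an existing object are treated as new views of it."""
    objects = []
    for cam, point in detections:
        point = np.asarray(point)
        for obj in objects:
            if np.linalg.norm(obj["pos"] - point) < max_dist:
                obj["cameras"].add(cam)  # same physical object, another view
                break
        else:
            objects.append({"pos": point, "cameras": {cam}})
    return objects
```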
 
This really shows how many members of this forum are completely ignoring the competition. It reminds me of the GME apes at WSB.

You should try to follow the conversation (which was about having enough competing cars on the road to make it worthwhile to join Tesla's Supercharger Network). Obviously, the competition doesn't have enough cars on the road. There is a difference between ignoring something that exists and recognizing when that thing doesn't exist.

In the future, eventually, things will be different. But it didn't happen in 2019 (as the bears claimed). It also didn't happen in 2020 (as the bears claimed). And it's obviously not happening in 2021 (as the bears claimed).
 
To my knowledge the current (non-beta) system does not "learn" on-car in any way (see Greentheonly's deep dive into how much more limited "shadow" mode is than many think).
That was also my thinking. But then I watched a video (AI Driver, I think) where the car attempted a right turn three times and improved from a fail to more and more success, with no edits, on a difficult maneuver. I swear it learned it to perfection within 10 minutes, anticipating the timing.
 
To my knowledge the current (non-beta) system does not "learn" on-car in any way (see Greentheonly's deep dive into how much more limited "shadow" mode is than many think) ...

Using multiple cameras and physics to check and learn in 4D is exciting. I AM SO PUMPED FOR THIS TECHNOLOGY I CAN'T STOP YELLLINGGG!!!