
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

A.k.a. gradient descent, currently the de facto standard (mostly) for training deep nets.

Indeed. Except that you're not doing gradient descent to train the neural net itself, but rather to choose an optimal neural net architecture. Meta-training.

Most of my work with optimization has been through the scipy.optimize libraries, which offer quite a selection of tools - although I've more recently been needing to do it in C++ to avoid having to implement Python bindings. The scipy libraries are also annoyingly deficient in (A) threading and (B) the ability to resume - although you can work around (A) by doing the threading inside your cost-evaluation function, and (B) by caching and saving the results of every cost evaluation, so that if you have to start over, it can just look up the answers up to the point where you left off.
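For what it's worth, that resume workaround (B) can be sketched in a few lines. This is only a minimal illustration, assuming a pickle file as the cache store and scipy's built-in Rosenbrock test function standing in for an expensive cost function; the file name and objective are made up:

```python
# Sketch of workaround (B): cache every cost evaluation so an
# interrupted optimization can be resumed by replaying cached results.
import pickle
from pathlib import Path

import numpy as np
from scipy.optimize import minimize, rosen

CACHE_FILE = Path("cost_cache.pkl")  # illustrative name
cache = pickle.loads(CACHE_FILE.read_bytes()) if CACHE_FILE.exists() else {}

def cached_cost(x):
    key = tuple(np.round(x, 12))  # hashable key for the parameter vector
    if key not in cache:
        cache[key] = rosen(x)  # the expensive evaluation happens here
        CACHE_FILE.write_bytes(pickle.dumps(cache))  # persist after each call
    return cache[key]

result = minimize(cached_cost, x0=np.zeros(3), method="Nelder-Mead")
```

On a restart, any parameter vector already in the pickle is answered from the cache instead of re-running the cost function, which is exactly the "look up the answers up to where you left off" behaviour described above.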

Most of my previous optimization projects have been CFD model optimization, although I've also used it for things like compression, and my most recent project is an attempt to use differential evolution of fluids with an Arrhenius equation database to try to evolve a hypercycle :)
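For anyone curious, differential evolution is available directly in scipy.optimize. A minimal sketch on a toy objective (nothing to do with the fluids/Arrhenius setup above, which is that poster's own project; the objective and bounds here are made up):

```python
# Minimal scipy differential evolution example on a toy objective.
from scipy.optimize import differential_evolution

def sphere(x):
    # Simple convex test objective with minimum 0 at the origin.
    return sum(xi * xi for xi in x)

bounds = [(-5.0, 5.0)] * 4  # one (low, high) pair per parameter
result = differential_evolution(sphere, bounds, seed=0, tol=1e-8)
```

`seed` makes the stochastic search reproducible, and the same call pattern works for any black-box cost function that takes a parameter vector and returns a scalar.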
 
OK, time to come clean...

I am an absolute idiot...in regard to some things... I heard about the company back when it was just the Roadster and the Model S. I liked what they said they were trying to do... Find a company you believe in. Check it out. Experience what they offer. Then, if it feels right and you have some discretionary funds...pull the trigger and then ride it out knowing you are supporting a company that makes sense for you. Yeah, you could lose everything, but if you know that is a possibility going in, it shouldn't hurt as bad if it all goes belly up...
(feeling really happy about my little investment though)

Dan
Well, Dan, that is the way I made my first securities investment, 53 years ago. Back then I knew absolutely zero, just that the company that made the first product I had ever adored had shares I could buy, so I bought some with my very meager savings at the time. I bought this:
(photo: Honda S600 coupe, side view)

I was in Thailand then; this was their first car and I loved it. I had heard of Honda because I had previously owned a Honda motorcycle. The GM of Honda Thailand (they were tiny, just started) told me to drive his spouse's yellow S600 because he was so thrilled that Honda actually had a car! It was really a quasi-motorcycle, but with four cylinders, dual overhead cams, and 12,500 RPM because they used needle bearings, maybe because they only knew about motorcycles then. Anyway, I ended up with that car and four months later found out they had publicly buyable things called shares, and I could buy some. A few years later their sale paid for most of my MBA, where I learned all about securities, Ben Graham, and what happens when Saudi Arabia decides it wants a bigger share of its oil revenue.

Had I been much younger, I like to think I would have chosen TSLA that way. That attitude has saved me from numerous disasters and helped me to make good decisions. Of course, I consider myself only a "value investor".
The problem is all about how to define value.

Sorry to be so long-winded and slightly OT. However, your story reminds me of why I refuse to try to manage market timing and why I do not react much to market fluctuations. Still, days like yesterday make it quite hard to avoid being euphoric. I cannot lie: my spouse and I shared a spectacularly good Bordeaux yesterday, and I did have a very old Calvados afterwards.
 

Santé.

And we're greenish.

Edit: That escalated quickly.
 
What I have learned with options is that you'll probably lose your initial investment, so that should be your expected outcome; anything more is a bonus. (1)
(...)
General rule, if they go into the money, sell them, immediately! (2)
(...)
Long term options seem a good bet when the SP has suffered a massive drop, as happened recently - the 2021s were just stupid cheap and we "got lucky". (3)

Some remarks:
1) Yes, in case of short term options.

2) Disagree, this would limit your upside considerably. IMO it's best to gradually take profits off the table and let the last ones ride after you've taken out at least the original investment.

3) Agree, LEAPS are better bought on a dip, not at ATH.
 
Last I heard SP500 weight is based on share price, not market cap...:confused:
So, better not to split to keep it higher, or even do a reverse split to hack the dysfunctional system.

The S&P 500 is float-adjusted market-cap weighted, so a split would not directly change anything. I think the S&P 500 Equal Weighted index would be impacted, since it holds 0.2% of each company.

What Does the S&P 500 Index Measure and How Is It Calculated?
What's Inside the Most Popular Stock Index?

Edit: Equal weighted has different weights, but is still market cap based. Brief discussion of the two:
There are two versions of the S&P 500 index — this is the better investment
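To illustrate why a split changes nothing in a float-adjusted market-cap-weighted index: each weight is price times float shares, normalized across the index, so halving the price while doubling the float leaves the weight untouched. The companies and numbers below are invented:

```python
# Toy illustration of float-adjusted market-cap weighting.
# Companies and figures are invented.
def weights(companies):
    # weight = (price * float_shares) / total market cap of the index
    caps = {name: price * float_shares
            for name, (price, float_shares) in companies.items()}
    total = sum(caps.values())
    return {name: cap / total for name, cap in caps.items()}

index = {"A": (400.0, 1_000), "B": (100.0, 2_000), "C": (50.0, 4_000)}
w_before = weights(index)

# A 2-for-1 split of "A": price halves, float doubles -> same market cap.
index["A"] = (200.0, 2_000)
w_after = weights(index)
assert abs(w_before["A"] - w_after["A"]) < 1e-12
```

A pure price-weighted index (like the Dow) would behave differently: there, the split really would cut the company's influence in half.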
 
(attached screenshot, 2019-12-15)


Karpathy is actually making an extremely important point here. Andrej and Elon's shared views on this topic are a key driver of why Tesla's Robotaxi AI strategy is so different from the competition.
Andrej and Elon acknowledge just how difficult it is to solve driving with AI and how much of a head start human drivers have. In contrast, Waymo and everyone else going with a Data Light, Hardware Heavy strategy are trivialising just how much learning it takes to become competent at driving and how much data it will take to catch up with humans.

Views on exactly how human and animal learning work still vary greatly, but I'd say the most common in the AI community is Yann LeCun's view: "If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning."

Where broadly:
  • Unsupervised Learning - Learning patterns and correlations between actions and consequences as you begin to interact with the world.
  • Supervised Learning - Children asking questions and getting answers about the world.
  • Reinforcement learning - Rewards for good behaviour or correct answers.

However this is only part of the story and potentially only a relatively small part.

A large amount of animal behaviour is not actually learned during the animal's life but is driven by behavioural algorithms present at birth. For example, animals are born with innate abilities, such as spiders' ability to hunt from birth or mice's inherited burrowing techniques. Many animals are also born with extremely effective unsupervised learning algorithms that allow them to learn specific tasks very quickly from only a few unsupervised examples.
There is obvious evolutionary pressure for animals to be born with abilities that aid survival or an ability to learn to recognise food and predators quickly.

These innate behaviours and learning abilities are learned via hundreds of millions of years of training through natural selection. Natural selection is effectively a reinforcement learning algorithm which is rewarded when an animal produces offspring. However, learning via natural selection is very inefficient and is limited because 1) very little useful information is transmitted from an animal's life - we only know whether or not the animal survived long enough to produce offspring - and 2) very little information can be stored in the genome.

Reinforcement learning via natural selection takes place through genes that encode particular circuits and connections of neurons corresponding to particular innate behaviours, plus architectures for extremely effective learning algorithms that allow continued learning throughout the animal's life. However, the human genome only contains about 1 GB of data, and there is not enough storage capacity for the exact weights and wiring of a brain's neurons to be specified in the genome. Instead, the genome has to specify a set of rules for how to wire the brain as it develops. This is possibly one of the key reasons humans have developed relatively general intelligence: given the limited amount of information that can be stored in the genome, there is pressure to develop very general patterns and algorithms which can be used for multiple applications. https://www.nature.com/articles/s41467-019-11786-6.pdf This is called the "genomic bottleneck", and the very small size of the human genome has potentially been a key driver of human intelligence - relative to, for example, the lungfish, with a 40x larger genome and much less pressure to develop generalised algorithms.

To some extent, this learning via natural selection can be thought of as analogous to a more powerful form of the pre-training or transfer learning used in many machine learning applications today.
It is important to note, though, that natural selection develops strong neural network architectures and wiring rules rather than optimising neural network weights.
There is some very interesting recent work on "weight agnostic" artificial neural networks, showing that if you carefully select the network architecture for a particular task, the neural network can actually perform some reinforcement learning tasks with fully randomised weights. https://arxiv.org/pdf/1906.04358.pdf This shows just how powerful the network architecture itself can be, even before you start training on data.


So back to Robotaxis - when we are trying to teach a car how to drive, in reality we are competing with a human who already has hundreds of millions of years of data and reinforcement learning via natural selection, plus 20+ years of unsupervised learning, supervised learning and reinforcement learning via interaction with the world and teachers.

This is why a Robotaxi will need tens of billions of miles of real-world driving experience, versus a human who can learn to drive with 1,000 miles of real-world driving lessons. And this is why Waymo and every other potential Robotaxi competitor are wrong when they assume they can learn enough from just a few hundred extremely expensive test vehicles and 10 million+ miles of real-world driving.

So Tesla's Robotaxi strategy is built from the assumptions:
1. We cannot solve Robotaxis without 10 billion+ miles of real world experience.
2. We cannot get 10 billion+ miles of real world experience without a hardware suite affordable to install in a normal consumer owned car.
3. We cannot get Lidar this cheap in a reasonable timeframe and without the economies of scale of first having a functioning Robotaxi business.
4. Therefore we cannot use Lidar.
5. Hence, we have to solve distance and velocity estimation using machine learning on camera and radar data. If we can do this, Lidar has no extra value anyway, as its capabilities will only be a subset of what we can already do with machine vision.

Hey mods, what are the criteria for the Merit Post Thread? It's your pick or can we propose?
This one, for example ;-)
 
I have thought about this, however, I do believe the SP has quite some distance to run yet and I think my current 10x could go 10x higher yet, which would buy my wife's (Made in Europe) Model Y.

Maybe a good strategy would be to sell 5 of the 10 and get 2022 LEAPs with the proceeds.

Blah, it's much easier when they expire worthless...

In other news - the pre-market effort by shorty-bears to spook the SP seems to have been relatively ineffective - I expect the accumulation to continue at open. Not an advice.
What I would tend to do is to sell enough to cover the initial cost, which sounds like 2 of them, then the rest are free money for some future event.
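That "sell enough to cover the initial cost" rule is simple arithmetic. A toy sketch, with all numbers purely illustrative (not an advice):

```python
import math

# How many contracts to sell to recoup the initial outlay, leaving the
# remainder as "free" exposure. All figures are illustrative.
def contracts_to_recoup(num_held, cost_per_contract, current_price):
    total_cost = num_held * cost_per_contract
    # Round up: selling a fraction of a contract isn't possible.
    return math.ceil(total_cost / current_price)

# e.g. 10 contracts bought at $2.00 each, now trading at $10.00 (a 5x):
n = contracts_to_recoup(10, 2.0, 10.0)
```

Here the initial outlay is $20, each contract now fetches $10, so selling 2 of the 10 recovers the cost and the other 8 ride for free.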