Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

This is probably old hat to you AI aficionados, and OT, but it's readable to a dullard like me.

Foundations Built for a General Theory of Neural Networks | Quanta Magazine

My question, as the village idiot in this field: could the laborious testing of networks for efficiency through experiment be assisted by AI itself? And a follow-up: is this the first step in developing a general intelligence machine?
Thank you! I’ve been looking for something introductory like this.
 
Mark B. Spiegel on Twitter

Mark BS is upset about people who switch from the Mercedes A-Class to the Tesla Model 3.
He's right; maybe they should switch to a Mercedes AA-Class instead:

Wow, that IS a telling Twit...

You can almost hear the tears of rage and frustration!

As he vomits out his toothbrush and bashes his laptop...

I must be a sociopath, as I feel no pain for him!
 
My question, as the village idiot in this field: could the laborious testing of networks for efficiency through experiment be assisted by AI itself? And a follow-up: is this the first step in developing a general intelligence machine?

There is an entire field devoted to automatically finding neural network architectures. Check out AutoML and NASNet:
Google AI Blog: AutoML for large scale image classification and object detection
Google AI Blog: Using Evolutionary AutoML to Discover Neural Network Architectures
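As a toy illustration of what those evolutionary-search systems do, here is a sketch that evolves a list of hidden-layer widths against a made-up fitness function. Real AutoML/NASNet pipelines actually train each candidate network to score it; the `fitness` proxy below is purely illustrative:

```python
import random

# Toy sketch of evolutionary architecture search (the idea behind
# Google's Evolutionary AutoML), with a stand-in fitness function
# instead of actually training each candidate network.

def fitness(layers):
    # Hypothetical proxy score: reward capacity, penalize parameter count.
    params = sum(a * b for a, b in zip(layers, layers[1:]))
    return sum(layers) - 0.01 * params

def mutate(layers):
    # Randomly widen, narrow, add, or drop one hidden layer.
    new = list(layers)
    op = random.choice(["widen", "narrow", "add", "drop"])
    i = random.randrange(len(new))
    if op == "widen":
        new[i] += 8
    elif op == "narrow":
        new[i] = max(8, new[i] - 8)
    elif op == "add":
        new.insert(i, 32)
    elif op == "drop" and len(new) > 1:
        del new[i]
    return new

def evolve(generations=200, pop_size=10):
    population = [[32] for _ in range(pop_size)]  # start with tiny nets
    for _ in range(generations):
        parent = max(random.sample(population, 3), key=fitness)  # tournament
        child = mutate(parent)
        weakest = min(range(pop_size), key=lambda i: fitness(population[i]))
        if fitness(child) > fitness(population[weakest]):
            population[weakest] = child  # replace the worst individual
    return max(population, key=fitness)

best = evolve()
print("best architecture (hidden layer widths):", best)
```

The best score in the population can only improve, because a child only ever replaces a worse individual; that monotone-improvement property is what makes even this crude loop useful.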
 
Spiegel's Stanphyl Capital letter is out on Seeking Alpha now, predicting that TSLA will revert to losses in 2019.
In essence:
  • Foreign sales will not drive ZEV credit generation.
  • The increase in opex the company has guided to will cut its profit from $139 million to $39 million, and without ZEV credit sales this will be closer to zero.
  • Weak demand will limit Tesla to 250k total car sales in 2019.
  • There will be a GAAP loss of $1 billion.

---- then we enter Spiegel's favorite zone, talking about the competition for about 20 pages ----

Anyway, kudos to him for making it public.
 

Thanks for your help. I think both articles, aside from being over my head, are somewhat different from my question. The notion suggested was not starting from an existing data set and applying the machine to discover patterns in preparation to label future patterns, but iterating over all possible neural net designs and computing the best for different applications. Kinda like my limited understanding of how one of those experiments taught itself to play Go just by playing the game over and over. Probably the hitch is: what are the rules of neural nets? Sounds like that requires people to design them. Might be possible if layers could be built in more than three dimensions, say over time.

This is probably where I need to admit I don't know what I'm trying to talk about.
 
Spiegel's Stanphyl Capital letter is out on Seeking Alpha now, predicting that TSLA will revert to losses in 2019.
...
Anyway, kudos to him for making it public.
WTF. I am struggling to scroll past all the links to get to the comments section of the Seeking Alpacas article.

Edit: Funnily enough, the comments are gold :)
 
The question for me is also: will the "smarter" shorts already start slowly covering (which would provide some upward pressure), or will they wait to cover on a likely dip around the release of the Q1 ER (which will not be good)?

It's a pretty good bet that Q1 earnings won't be robust, but that doesn't mean the share price will necessarily be below where it is now, because that should already be priced in. Q1 could be the ultimate bear trap if the Tesla "story" looks like it's lining up properly.

A lot of bears would love to believe that Tesla's share price should move in tandem with quarterly earnings, but it probably shouldn't (and probably won't). Typically, the share price of an established company moves with the difference between expectations and actuals. But a company with growth prospects like Tesla's should move less with quarter-to-quarter earnings and more with how the bigger picture is unfolding. Quarterly earnings are only a small window into that.
 
Spiegel's Stanphyl Capital letter is out on Seeking Alpha now
...
There will be a GAAP loss of 1 billion.

Thanks to the power of the ellipsis, this suddenly transforms into a very believable statement. ;)

(I kid, of course. Mark "My Bathroom Is My Office" Spiegel is multiple orders of magnitude away from having a billion dollars in investments)
 
Spiegel's Stanphyl Capital letter is out on Seeking Alpha now, predicting that TSLA will revert to losses in 2019.
...
Anyway, kudos to him for making it public.
  • $TSLAQ crowd: "Tesla is burning money!"
  • Mark BS: "Hold my beer ..."
 
Spiegel's Stanphyl Capital letter is out on Seeking Alpha now, predicting that TSLA will revert to losses in 2019.
...
Anyway, kudos to him for making it public.

"Since inception the fund has compounded at approximately 7.4% net annually vs. 11.9% for the S&P 500 and 9.2% for the Russell 2000."
It's never too late for a career change.
 
Spiegel's Stanphyl Capital letter is out on Seeking Alpha now, predicting that TSLA will revert to losses in 2019.
...
Anyway, kudos to him for making it public.

On a more serious note, could someone ask the obvious probing questions on Seeking Alpha:
  • For 10 years Mark BS has relentlessly spread the view that battery EVs are undesirable, with no demand whatsoever, and that Tesla is so structurally unprofitable there is no hope left. So why does it take 20 pages to list all the Tesla wannabes? Is capitalism fundamentally broken, with new EV startups and ICE-OEMs-turned-EV-makers all trying to emulate selling failed products absolutely no one wants to buy?
  • Is the $TSLAQ cult fundamentally faith-based, rejecting facts like the $2 billion of free cash flow in the last two quarters?
  • When $TSLAQ cult members are buying their groceries to regain strength for another day of trolling, do they pay in "GAAP income", or in "U.S. dollars"?
Or do such contrarian questions result in an instant ban? ;)
 
Did you just see the Audi commercial? Tesla is so screwed.


In 2025

[Screenshots of the Audi commercial]


Nice bait-and-switch by Audi. The entire commercial is about the e-tron GT, then: "Reserve yours (not the e-tron GT) now."
 
Thanks for your help. I think both articles, aside from being over my head, are somewhat different from my question. The notion suggested was not starting from an existing data set and applying the machine to discover patterns in preparation to label future patterns, but iterating over all possible neural net designs and computing the best for different applications. Kinda like my limited understanding of how one of those experiments taught itself to play Go just by playing the game over and over. Probably the hitch is: what are the rules of neural nets? Sounds like that requires people to design them. Might be possible if layers could be built in more than three dimensions, say over time.

This is probably where I need to admit I don't know what I'm trying to talk about.

I can take a crack at it!

Neural networks are (typically) one of many kinds of classifiers within supervised learning; the primary characteristic of all supervised learning methods is that you have a labeled data set (your comment leads me to believe you're familiar with the idea of labeled vs. unlabeled data).

So one option, in theory: given some set of square data (rows and columns), set each column in turn as the label, use the remaining columns as independent variables, and build models; many, many models. And while this can in theory be done, you still lack the human knowledge of what any of the results means.

An ML tool that is becoming increasingly available is one that takes your given (an existing data set) plus one additional given (which column is the label) and goes from there to build a wide range of different kinds of models: neural net(s), logistic regression, linear regression, decision trees, random forests, fill-in-the-blank :) These tools essentially automate the model-building process while keeping a human in the loop at the beginning (what problem am I solving, what relevant data can I get for it, what do my inputs mean that lead to the outputs), and solve the model building itself through a combination of brute force (build millions of models and compare them for "goodness") and clever tricks (keep track of which models are getting better, and use that knowledge to prune branches of the brute-force search).
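A minimal sketch of that inner loop, with made-up toy data and deliberately simple stand-in models (a mean predictor, closed-form linear regression, and 1-nearest-neighbor) rather than the model families a real AutoML tool would try:

```python
import random
import statistics

# Sketch of what an automated model-building tool does at its core:
# fit several different model families on training data, score each on
# held-out data, and keep the winner.  Data and models are toys.

random.seed(0)
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]  # noisy line
train_x, val_x = xs[:150], xs[150:]
train_y, val_y = ys[:150], ys[150:]

def fit_mean(x, y):
    m = statistics.mean(y)
    return lambda q: m  # always predict the training mean

def fit_linear(x, y):
    # Ordinary least squares for y = a*x + b (closed form).
    mx, my = statistics.mean(x), statistics.mean(y)
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return lambda q: a * q + b

def fit_nearest(x, y):
    # Predict the label of the closest training point.
    pairs = sorted(zip(x, y))
    return lambda q: min(pairs, key=lambda p: abs(p[0] - q))[1]

def mse(model, x, y):
    return statistics.mean((model(xi) - yi) ** 2 for xi, yi in zip(x, y))

candidates = {"mean": fit_mean, "linear": fit_linear, "1-NN": fit_nearest}
scores = {name: mse(fit(train_x, train_y), val_x, val_y)
          for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print("validation MSE per model:", scores)
print("selected model:", best)
```

Since the toy data really is a noisy line, the linear model wins the validation comparison; the point is the selection loop, not the models.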

Some of these tools are also getting into feature engineering (combining the independent variables in ways that are more predictive/descriptive together than they are separately). Trivial example: you've got a shipping app with package dimensions of height, width, and length. Your app might be able to learn a pattern from those three variables, and it might also get better results using volume (height * width * length) as a variable. Or maybe area (height * length), for some reason!
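To make the volume example concrete, here is a toy simulation (all numbers invented) in which the engineered feature tracks a volume-driven shipping cost far better than any single raw dimension does:

```python
import random
import statistics

# Toy feature-engineering illustration: if shipping cost is really
# driven by volume, then the engineered feature h*w*l correlates with
# the target far better than height, width, or length alone.

random.seed(1)
n = 500
hs = [random.uniform(1, 10) for _ in range(n)]  # heights
ws = [random.uniform(1, 10) for _ in range(n)]  # widths
ls = [random.uniform(1, 10) for _ in range(n)]  # lengths
cost = [h * w * l * 0.02 + random.gauss(0, 0.5)
        for h, w, l in zip(hs, ws, ls)]  # cost ~ volume + noise

def pearson(a, b):
    # Plain Pearson correlation coefficient.
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

volume = [h * w * l for h, w, l in zip(hs, ws, ls)]  # engineered feature
for name, feat in [("height", hs), ("width", ws),
                   ("length", ls), ("volume", volume)]:
    print(f"corr({name}, cost) = {pearson(feat, cost):+.2f}")
```

Each raw dimension correlates only weakly with the cost, while the engineered volume feature correlates almost perfectly, which is exactly the kind of combination an automated feature-engineering tool hunts for.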


The trick with the Go program is that a human made a bunch of decisions to set up the circumstances within which the machine could iterate and operate. A human decided that solving Go was worth doing (and then did the programming to do so). A human coded the rules of Go into the program. A human coded the win condition for Go into the program.

The program was then able to "solve" the problem in a fashion highly unintuitive to us humans. It started making random plays, then used those random plays as labeled data to find patterns about which moves were helpful to winning and which were harmful or useless. It had to make millions/billions/trillions of useless plays before the patterns emerged. That's where neural networks excel: amazingly large data sets to learn patterns from.

A computer program with lots of compute can easily run as much of this as you want to feed it time and electricity to do. And it turns out that once the network gets traction and starts learning, it often learns seemingly very fast. (This is where I leave my direct experience and just go with what I've been reading; I work in the field, but so far not with neural nets, as they don't much apply to my problem domain.)
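As a toy stand-in for the Go story, here is the random-self-play idea applied to Nim (take 1-3 stones per turn; taking the last stone wins), small enough to run in seconds. This is a deliberate caricature: it only tabulates win statistics from uniformly random games, without the learning-in-the-loop that made AlphaGo's self-play converge on strong play:

```python
import random
from collections import defaultdict

# Self-play in miniature: play many random games of Nim, record which
# (stones_left, move) choices ended up on the winning side, then read a
# greedy policy off those statistics -- random plays used as labeled data.

wins = defaultdict(int)
plays = defaultdict(int)

def random_game():
    stones, player, history = 10, 0, []
    while stones > 0:
        move = random.randint(1, min(3, stones))
        history.append((player, stones, move))
        stones -= move
        if stones == 0:
            winner = player  # this player took the last stone and wins
        player = 1 - player
    return winner, history

random.seed(2)
for _ in range(50_000):
    winner, history = random_game()
    for player, stones, move in history:
        plays[(stones, move)] += 1
        if player == winner:
            wins[(stones, move)] += 1

def best_move(stones):
    # Greedy policy read off the accumulated self-play statistics.
    return max(range(1, min(3, stones) + 1),
               key=lambda m: wins[(stones, m)] / plays[(stones, m)])

print("learned move with 10 stones left:", best_move(10))
```

Because the "opponent" in these statistics also plays randomly, the learned policy is only as good as random opposition allows; the real systems close that gap by repeatedly retraining against their own improved selves.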


I can personally see a line of sight to increasingly automated ML / model building, once a human has defined the problem.

I don't yet see a line of sight to a program that can assess a random environment and decide what's a problem (or opportunity), which problems are worth solving, and what a solution might look like; decide what data is relevant to the problem; go get that data; and do the data prep / organization to get it into shape, so that it can then use the automated ML machinery to find and choose a model that solves the problem.
 
Recent V3 discussion has illuminated for me the sheer importance of range: there are so many first- and second-order benefits to a larger battery. It is nothing like having a larger gas tank.

Tesla has been hit, IMO, by announcing the $35k version years ahead of introduction. Of course that has had the secondary benefit of meeting the mission requirements, but I am not sure it outweighs the problems it has caused. My preference would be for the Model Y to skip the SR option entirely if possible. This will force people to the Model 3 if they need anything much below $50k. They can always announce an SR two years later if necessary; the $25k Golf competitor will most likely be unveiled by then anyway. Doing this would ensure that comparisons to the Model Y are fruitless and would provide a helping hand to Supercharger capex, etc.
 
A thought about Supercharger V3: I wouldn't be surprised if it also allows uploading energy. This could be a game changer: you download energy where it's cheaper (at home) and upload it at a Supercharger, within your average daily needs, which are less than the battery capacity.
Some intelligent software could offer good prices for peak demand.
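A back-of-the-envelope sketch of that arbitrage. Every number here is an invented assumption (tariffs, pack size, round-trip losses), not a Tesla figure:

```python
# Buy-low-at-home, sell-high-at-the-Supercharger arbitrage, roughly.
# All prices and figures below are made-up illustrative assumptions.

home_rate = 0.10          # $/kWh, assumed off-peak home tariff
supercharger_rate = 0.28  # $/kWh, assumed peak selling price
battery_kwh = 75.0        # e.g. a Long Range pack
daily_need_kwh = 15.0     # roughly 50 miles of driving
round_trip_eff = 0.85     # assumed charge/discharge losses

sellable = battery_kwh - daily_need_kwh   # keep enough for the day's driving
delivered = sellable * round_trip_eff     # what survives the round trip
profit = delivered * supercharger_rate - sellable * home_rate

print(f"sellable energy: {sellable:.0f} kWh")
print(f"daily arbitrage profit: ${profit:.2f}")  # about $8.28 under these assumptions
```

Under these made-up rates the owner nets only a few dollars a day, so the economics would hinge on the actual price spread and on battery-wear cost, which this sketch ignores.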
 
Recent V3 discussion has illuminated for me the sheer importance of range: there are so many first- and second-order benefits to a larger battery. It is nothing like having a larger gas tank.

Tesla has been hit, IMO, by announcing the $35k version years ahead of introduction. Of course that has had the secondary benefit of meeting the mission requirements, but I am not sure it outweighs the problems it has caused. My preference would be for the Model Y to skip the SR option entirely if possible. This will force people to the Model 3 if they need anything much below $50k. They can always announce an SR two years later if necessary; the $25k Golf competitor will most likely be unveiled by then anyway. Doing this would ensure that comparisons to the Model Y are fruitless and would provide a helping hand to Supercharger capex, etc.

The MR will be the SR in the Model Y ;) More energy consumption.

I suspect (although don't know for sure) that they'll make the MY packs physically larger to compensate.

(BTW, in case anyone cares: Elon is back from Waco; he's in LA now, so working on either SpaceX or Boring.)