Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

True, but you are not processing the redundant data, and every week somewhere between 5K and 10K data-gathering cars are being added, so you're getting more input all the time. So it may not be quite exponential, but it's certainly a good deal better than linear.

Speaking of the long tail here (the march of 9s), one would be lucky if it stayed linear. If an event occurs with probability 1/x, then each sample has a 1/x chance of capturing it. If the event probability drops to 1/(10x), you need 10 times the samples to capture the same number of interesting events in the same amount of time as before. If your data collection rate grows more slowly than the events get rarer, you'll learn slower and slower.
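A back-of-the-envelope sketch of that scaling in Python (the fleet-mileage and event-rate numbers are made up purely to illustrate the point):

```python
# Illustrative only: hypothetical fleet mileage and edge-case rates.
# Expected events captured per week = (fleet miles per week) * (event probability per mile).

def events_captured(fleet_miles_per_week, event_prob_per_mile):
    return fleet_miles_per_week * event_prob_per_mile

fleet_miles = 100e6        # hypothetical: 100 million fleet-miles per week
common_case = 1e-5         # one event per 100k miles
rarer_case = 1e-6          # the next "9": ten times rarer

print(events_captured(fleet_miles, common_case))  # 1000.0 events/week
print(events_captured(fleet_miles, rarer_case))   # 100.0 events/week

# To keep capturing 1,000 of the rarer events per week, the fleet has to
# drive 10x the miles. If data collection grows more slowly than the events
# get rarer, each additional "9" takes longer to learn than the last.
```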
 
I'll give a different scenario. Let's go for a true worst case scenario. Tesla NEVER gets "sleep in your car" done.

This assumes that computing power cannot reach the levels of processing of the human brain.

According to this estimate that's not the case:


The firing rate in the neocortex (which hosts 80% of the brain's neurons) is between 0.3 and 1.8 per second. With 80 billion neurons in the neocortex that's a firing rate of about 24-144 billion per second.

The average number of synapses per neuron is 10,000 - while the average information content of a synapse is 0.1 bits, or ~100 bits per neuron.

So the NN processing speed of the entire human neocortex is ~2,400-14,400 billion bits per second. Now the operations it performs per firing are addition, multiplication and capping, which we can credit with a ~10x complexity factor, so the net speed is about 24,000-144,000 billion bits of simple arithmetic operations per second. (This is probably generous to the brain.)

The Tesla AI chip computes ~144,000 billion mini-floats per second (144 TOPS), where a mini-float is 8 bits. So the total processing power is ~1,152,000 billion bits of simple arithmetic operations per second.

So if we believe these estimates then the Tesla chip is already comfortably beyond the NN processing power of the human brain, by a factor of ~8x.
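For what it's worth, here is the same arithmetic as a quick Python sketch, using only the figures quoted above (these are the estimate's assumptions, not measurements):

```python
# Reproduces the back-of-the-envelope estimate above; all inputs are the
# post's assumptions, not measured values.

neurons = 80e9                  # neurons assumed for the neocortex
firing_rate_hz = (0.3, 1.8)     # firings per neuron per second
bits_per_firing = 100           # ~bits of information per neuron firing
complexity_factor = 10          # credit for add/multiply/cap per firing

brain_bits_per_s = [neurons * r * bits_per_firing * complexity_factor
                    for r in firing_rate_hz]
# -> [2.4e13, 1.44e14], i.e. ~24,000-144,000 billion bit-operations/s

chip_ops_per_s = 144e12         # 144 TOPS
bits_per_op = 8                 # "mini-float" width
chip_bits_per_s = chip_ops_per_s * bits_per_op   # 1.152e15

print(chip_bits_per_s / brain_bits_per_s[1])     # ~8x the high-end brain figure
```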

Put differently, every Tesla camera has as much NN computing power allocated as a single dedicated human brain watching that camera 24/7 ...

What the human brain arguably does much better is information storage: 1,000,000 billion synapses can store about 1,250 TB of data, which is a lot more than what Tesla can store in their NNs.

But if we accept that "legally safe driving" requires only a very small subset of the vast amount of data a human brain stores, then the Tesla AI chip can already do an order of magnitude better job, with vastly superior control latencies.

That is going to save lives, and this will be apparent from the accident statistics.
 
While I don't see robotaxis on the roads in 2020 (maybe they get approval for certain urban areas where they operate only within that city), I do think the rate of improvement is going to be exponential. Not only because the number of Teslas will go from 500k to a million by 2020, but because they'll likely start releasing some FSD features that still require human supervision (city surface streets, stop signs, stop lights, roundabouts, etc.). Also, the expansion to other regions (especially Europe) will help with edge cases (roundabouts galore). The samples of edge cases and interventions will, I think, increase exponentially faster than the total number of cars out there.

Btw, thank god Q1 report is here. So tired of hearing about Tesla returning to losing money from here on out, not making money on SR, lack of demand, blah blah blah.
 
Yeah, I know the social value of toning that down. It gets hard to do when I know more than /almost/ everyone about /almost/ everything, particularly when I'm listening to BS.

Please tell me why this negates the Buddha's and some 12-step observations that the only person you can change is yourself. Sticking to the error itself is what matters; who makes it is a side issue (projection, the fear of making one yourself: "Well, at least I didn't make that mistake").

Who cares if a rose has thorns, we all have fears. Does the rose care?
 
The way I look at the demand problem is that if Tesla has a demand problem, then all the other car manufacturers who are planning millions of electric cars are screwed. As of right now, Tesla's cars and Supercharger network are vastly superior to anything that is coming out from the other car manufacturers. So if Tesla can't sell its EVs, nobody else stands a chance.
If TOPS were all you needed, then the FSD computer/Tesla fleet would achieve sentience soon...
 
The presentation gave us some very, very strong indications of what they're working on now. They have a long way to go. They're still working on relatively "easy" stuff.



Well, given that caveat, yes! But how about the problems which human drivers have trouble with? I've named several, Karen has named others. There are solutions to these, and if we want robotaxis to be conclusively better than human drivers, we have to solve them too.

You still get the data and train the NN, but you have to know what you're trying to train it to do.

I could train an NN to be a stereotypical teenage driver. Or a stereotypical Boston driver. Or a stereotypical New Jersey driver. Or a stereotypical Florida driver. Or a really bad driver like the guy who chopped his own head off. I don't think any of this would be desirable. It would be relatively easy to do, and I personally suspect this is what Uber is doing, given that they're Uber ;-)
Wouldn't the system literally "evolve" because poor driving choices eventually lead to accidents and that's easy to filter?
 
Why buy a Civic that lasts at most 200k miles when you can buy a used Model 3 with 50k miles that is built to last 1,000k? Tesla already has the Civic, and it will be here in 2 years.

Civics with over 200k miles can still fetch $1500 because they can still be driven for another 10 years.
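As a rough illustration of that longevity argument, using the mileage figures from the posts above (the 12,000 miles/year annual mileage and the 1,000k-mile design life are assumptions for illustration, not established facts):

```python
# Remaining useful life, using the posts' mileage figures; the 12k miles/year
# and the 1,000k-mile design life are assumptions for illustration only.
ANNUAL_MILES = 12_000

def remaining_years(design_life_miles, miles_already_driven):
    return (design_life_miles - miles_already_driven) / ANNUAL_MILES

# A 200k-mile Civic with "another 10 years" left implies roughly a 320k design life.
print(remaining_years(320_000, 200_000))    # ~10 years
# A used Model 3 at 50k miles, if it really lasts 1,000k miles:
print(remaining_years(1_000_000, 50_000))   # ~79 years
```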





https://www.cars.com/for-sale/searc...t&stkTypId=28881&zc=90210&localVehicles=false
 
On the contrary I think most of the work needs to be done on the stage after Karpathy's computer vision system -- the driving policy stage. They seem to have barely started, and I expect they'll have to scrap everything they've done so far.

Is it impossible? No. Will it take a very long time to develop? Yes. Will it require going back and redoing the computer vision stack in order to detect additional things which they didn't realize were important? Yes.
Now I'll in turn surprise you by agreeing... I suspect this is where the lion's share of the work needs to be done.

This would seem to be exemplified by the portion of the demo where the vision network could clearly identify the bike. The system then needed to understand that it was a bike on a car carrier, not a separate object to track. I found it interesting that they apparently trained the neural net to classify these as a single object. I was wondering if they would instead allow a later layer to simply "understand" the nature of "captive items".

I'd like to know what happens if that bike falls off the carrier. Or what the system thinks of a trailer being pulled by a truck, and what it will do when it sees it detach from the truck pulling it and become a separate entity.
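Purely as a hypothetical sketch of the alternative mentioned above (letting a later layer "understand" captive items instead of training the detector to merge them), a containment-plus-relative-motion check could look something like this; the names and thresholds are invented, not anything Tesla described:

```python
# Hypothetical post-detection heuristic for "captive" objects (e.g. a bike on
# a car carrier). All names, fields and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Track:
    kind: str           # "bike", "car", "trailer", ...
    bbox: tuple         # (x_min, y_min, x_max, y_max) in image coordinates
    velocity: tuple     # (vx, vy) estimated in the ego frame

def is_captive(child: Track, parent: Track,
               overlap_thresh: float = 0.8, vel_thresh: float = 0.5) -> bool:
    """Treat `child` as attached to `parent` if its box sits mostly inside the
    parent's box and the two are moving together."""
    cx0, cy0, cx1, cy1 = child.bbox
    px0, py0, px1, py1 = parent.bbox
    ix0, iy0 = max(cx0, px0), max(cy0, py0)
    ix1, iy1 = min(cx1, px1), min(cy1, py1)
    intersection = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    child_area = (cx1 - cx0) * (cy1 - cy0)
    contained = child_area > 0 and intersection / child_area >= overlap_thresh
    rel_speed = ((child.velocity[0] - parent.velocity[0]) ** 2 +
                 (child.velocity[1] - parent.velocity[1]) ** 2) ** 0.5
    return contained and rel_speed <= vel_thresh
```

One nice property of a rule like this is that the moment the bike falls off the carrier (or the trailer detaches), the containment and co-motion checks fail, so the object would immediately revert to being its own track, without retraining the detector.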

I think it is indeed reasonable to assume that the policy stuff is less mature, if for no other reason than it has to sit atop the visual/environmental portions of the stack. And they had to start over on those parts when they dropped MobilEye.

But interestingly, Karpathy did discuss where it makes sense to use heuristics vs. moving stuff into an NN at that layer. I'm sure there will be a lot of work, and potentially churn, here. I think we may have differences on the meaning of "long time" or "not any time soon", however...
 
A good start. It's almost teenage-driver level!

There's a semi-blind corner on the right at the start of that video; the car needs to learn to slow down in anticipation. There's a second one later on.
What struck me about it was at the very end, where the oncoming car looks to be on the center line and the projected path appears to be heading into the oncoming lane. Ouch!
 
Gotta agree with the other post that this is bordering on arrogant.
I'm the one who makes the correct predictions about which timelines they're going to succeed at and which they won't. I come by my arrogance honestly in this area.

I've made more inaccurate predictions when it comes to the finances, which are apparently harder for me! (Well, my background was mostly STEM)

Landing rockets on barges is indeed hard

I guess I was using "hard" in a rather harsh sense. Let's put it this way: landing rockets on barges is to full-self-driving-everywhere as winning a spelling bee is to winning the Nobel Prize.
 
Speaking of the long tail here (the march of 9s), one would be lucky if it stayed linear. If an event occurs with probability 1/x, then each sample has a 1/x chance of capturing it. If the event probability drops to 1/(10x), you need 10 times the samples to capture the same number of interesting events in the same amount of time as before. If your data collection rate grows more slowly than the events get rarer, you'll learn slower and slower.
Once an outlier is found, it's not hard to imagine similar but different cases, so the system can be gamed this way (this is one place simulation can help: if a case can be imagined, it can be simulated). Also, the fewer the instances of a particular category, the less likely they are to be encountered. If we take the lava flow example, the system doesn't know that it's a lava flow, but it does know its size (visual), that it's substantial (radar), and that it is stationary (visual), so it can fall back to stopping if something big is ahead and not moving. If there were thermal imaging, it would also know about heat. There is some question as to what would happen around a blind curve, but a human driver wouldn't fare any better, and maybe worse due to reaction time.
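A toy sketch of that kind of fall-back rule, with made-up thresholds (this is not a description of Tesla's actual logic):

```python
# Hypothetical fall-back check: if something substantial is ahead and not
# moving, stop, even if the classifier has no idea what it is.
# All names and thresholds are invented for illustration.

def should_fallback_stop(visual_size_m2, radar_cross_section_m2,
                         object_speed_mps, distance_m):
    big_in_camera = visual_size_m2 > 1.0           # visually substantial
    solid_on_radar = radar_cross_section_m2 > 0.5  # radar agrees it's real
    stationary = abs(object_speed_mps) < 0.2       # not moving
    in_path = distance_m < 60.0                    # close enough to matter
    return big_in_camera and solid_on_radar and stationary and in_path

# e.g. an unclassified, lava-flow-sized obstacle 40 m ahead:
print(should_fallback_stop(8.0, 3.0, 0.0, 40.0))   # True
```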
 
So I just skimmed 5 analyst reports on yesterday's presentation. Every single one was skeptical of Elon's insistence that LIDAR was not useful.
Jesus Christ analysts are stupid.

If you look here, even those of us who are total "there are no robotaxis" skeptics think LIDAR is useless!!!

None of them provided justification for their belief, other than "everyone else is using LIDAR".
Wow.

I swear, financial analysts wouldn't recognize a game changing technology if it was handed to them on a silver platter.
I don't think they'd recognize a dollar bill if everyone else told them it wasn't a dollar bill.

No one thought FSD was going to happen next year.
Well, they're right about that.

The test drives had issues:

"Throughout the ride, the car performed relatively well but experienced a few rough maneuvers and had one disengagement where it failed to recognize cones blocking off some parked vehicles on the side of the road."

"Tesla demonstrated true Level 4/5 capable autonomous vehicles which, in our experience, traveled for more than 20 minutes over suburban and highway roads with absolutely no human engagement."

"While the vehicle was hesitant at times (RVs parked on the side of the road), took a turn or two tight, and was tentative in a collector merge, we equate the experience to the APTV rides in Las Vegas at CES 2017 (which was also a mix of on-and-off highway) —though a key differentiation is a lack of LIDAR or V2X installed/being deployed. Further, the vehicle was slightly more aggressive than those of OEM peers, but still did show some signs that improvement is needed (like all other test rides we have experienced). Altogether, we thought this was a positive showcase for where the company’s technology is currently, particularly the ability to navigate off-highway and recent internal push to enhance this development."

"Our ride handled both highway and off-highway roads well, though it was very cautious and at times hesitant to change lanes (even with no other vehicles nearby). This led to a somewhat jerkier ride."

"The biggest differentiator, in our view, is that Tesla conducted a complete, fully autonomous 20- minute test drive including on-highway and off-highway suburban streets without Lidar. Was it perfect? No. Did the driver have to manually intervene/disengage autopilot? On our drive,yes - one time when the car was about to miss a right-hand turn on a ramp. Did I feel safe? Yes. Would I want to fall asleep behind the wheel while in autopilot? Not yet."

These five analysts were bears, with low TSLA price targets. Basically, if Tesla was trying to attract valuations that other Autonomy companies are getting, it doesn't seem to have convinced these guys.

The thing these guys are missing (although some did pay lip service to this observation, but then didn't follow up on the implications) is that all other autonomy systems rely on very expensive and ugly LIDAR. For the past four years Tesla has been the leader in autonomous driving, and based on yesterday's presentation it's only going to get better for Tesla relative to the competition. They have a 7-year lead (and counting) on EV tech, and a growing lead on autonomous driving for the average Joe who wants to make long boring drives or commuter traffic jams better. Analysts are completely missing the point that EV+AP makes any Tesla a much more valuable car, regardless of when FSD arrives.
Wow, wow, wow. OK, so this didn't manage to get anything through the brains of analysts. Wow.

If I see anything from the more bullish analysts, I'll summarize them.

Thanks for accumulating the comments from analysts who took the test drives.
 
You may be surprised, but I agree with most of this. I followed all the technical stuff. The other guys are (with the possible exception of Cruise) nowhere. Tesla's built some of the prerequisites.

Where it went off the rails was when he started promising robotaxis next year, which is not happening. There are entire very difficult problems which are only just creeping into Tesla's awareness.
I agree with that. I would think it will take easily 6 months just to get regulators to allow robot cars even after a massive amount of data is collected showing the relative safety compared with human drivers. If the tech actually rolls out by the start of 2020, figure at least 12-18 months of dealing with the initial rough edges/corner cases, and collecting the data. That would allow Tesla to then approach regulators perhaps as early as the end of 2020 or early 2021 with compelling data in a quest for allowing robot taxis. Nothing with Tesla goes perfectly, so figure this tech actually rolls out to the fleet (disguised as a seriously advanced driver assistance) sometime next year. If they manage to work through the majority of issues over 18-24 months, that would put us well into 2021 or even 2022 before starting to work on the regulation side. So, if everything goes fairly well, I would guess at least 2022 before robotaxis can become reality. Having said that, if this does go pretty well, there will be a LOT of enthusiasm generated about it well before 2022. In fact, even next year it may seem like it will be going live soon when in reality it will still probably take a few years from there.
 
Having watched the event yesterday in its entirety makes me a lot more confident about Tesla & the team. I love every speaker they brought to the stage (although Elon could show more patience when answering questions, and perhaps less optimism); overall, I think we’re on the right path and vastly ahead of everyone else in the FSD arena. From what Elon has said about “doing the right thing that you guys (analysts) want us to do..” my belief is that Tesla will raise capital.

Regarding the “game, set, match, Tesla...” comment, I think Tesla is so far ahead of competitors that they are confident enough to show their cards on stage by delving deep into how their hardware is designed and their software is implemented. Most companies would want this info kept top secret, but Tesla already knows they're too far ahead, so there is no use in keeping it a secret; it's time to show everyone else their cards and demoralize the competition. If I were Waymo, Lyft or Uber I would be quite nervous about what Tesla's technology will be able to do. Fundamentally, Tesla's robotic fleet doesn't need to go into every corner of the US; they just need to take over the dense urban areas, starting with San Francisco or Silicon Valley, and things should pick up from there. It won't take long before Waymo and others realize their money-losing operations can't keep up with Tesla, and they eventually sell their IP and write it off as a loss.

Concerning lidar, I've spoken to a computer programmer and he seems to agree with Elon that lidar is a crutch: it cannot “see” objects as well in terms of pixels, and will have a hard time “predicting” human and non-human patterns of mobility. He also thinks cameras are a much better solution and that Tesla is way past everyone else.