Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

IMO, the major win for Tesla is getting Karpathy on board.
Karpathy is a genius, and, more importantly, it looks like he is not going anywhere. He'll get this done for sure.
Also, the rest of the team seem to operate at an extreme level of expertise in their jobs. This is very refreshing.

Now, the overwhelming majority of people are skeptical of Tesla's/Elon's time estimates. I am not among them. I truly think all of them fail to appreciate the multitude of factors contributing to the advantage of Tesla, their system, and AI in general. They have a very good foundation; the rest will happen very quickly.

On a side note, I find it incredible that two immigrants - Elon from South Africa and Andrej from Slovakia - emigrated to Canada and ended up in the US working together at Tesla. Stunning.
 
The question I ask is: what is pushing Tesla to show their hand early now, as opposed to waiting until it is ready to reveal, like Apple? The robotaxi should've been the "one more thing" moment. It is not as if a competitor has achieved it before them, forcing them to reveal something.
I was going to say that this is one way that Tesla is not like Apple, but then there was the Roadster 2.0

Of course, the ride sharing network isn't exactly secret: Musk has talked about it before.

IMO the reveal was because that's just the way Tesla rolls. Musk wanted to brag about his bright employees, who have achieved so much and are not being recognized for it. And because he really believes in feature complete and regulatory approval sometime next year, why not?

Of course a real businessman would have known answers to basic things like ... oh, I dunno, insurance before pitching it to investors. On the other hand, if Musk were a real businessman we wouldn't have SpaceX or Tesla...
 
They said that they were two years away in 2015 or 2016. I said they were wrong. I was right. Game, set, match.


Yeah, that's called "being an idiot" on your part. They are deluded, and so are you.

I have an awful lot of background knowledge here. I was not impressed by the technical detail of the presentation, and I did follow all of it. The Tesla guys are very smart, but they are too close to their work and are not seeing the next wave of problems they're going to hit -- and Musk is an incurable extreme optimist. I already know what they're going to hit. I'm waiting for them to notice it.

Once they notice that set of problems, it will take at least two years for them to resolve it (and that's optimistic). They haven't noticed it yet, therefore it will take more than two years.
The more you extol how much you know and how correct you must be, and how wrong the leaders in the field who are intimately involved and actively working in it must be, the less credence I'm inclined to give your arguments.

In my experience, the folks most circumspect about the limits of their expertise are the ones who tend to have the more accurate view of things.

Referring to those with reasonably stated opposing points of view as deluded idiots is just the cherry on top.
 
No, they specifically said that the path planned would have been the same even if they had never seen that road before or had gotten data on it from the fleet.

Curve prediction, then? I'm a little surprised it's taken them this long to do that. That's unreliable, of course, so there have to be some backup features for when the predicted curve slams into a brick wall (another edge case).
 
The more you extol how much you know and how correct you must be, and how wrong the leaders in the field who are intimately involved and actively working in it must be, the less credence I'm inclined to give your arguments.

In my experience, the folks most circumspect about the limits of their expertise are the ones who tend to have the more accurate view of things.

Referring to those with reasonably stated opposing points of view as deluded idiots is just the cherry on top.

Eh, you use your heuristics, I use mine. I agree with you on a lot of stuff.

But anyone who thinks they'll have robotaxis next year is deluded.

I'm now watching sentiment. I'm waiting for the Trough of Disillusionment. We aren't there yet. I'm still seeing blatantly deluded optimism. Maybe we're starting to crawl down the curve?

Once we're at the Trough of Disillusionment, self-driving technology (not full, but partial) may be investable...
 
Clearly OT...We start ‘em young in our family.

My 5-month old granddaughter's new White Tesla just arrived (haha, no notice from Tesla that it had shipped).
 
Now I'll in turn surprise you by agreeing... I suspect this is where the lion's share of the work needs to be done.

This would seem to be exemplified by the portion of the demo where clearly the vision network could identify the bike. The system then needed to understand that it was a bike on a car carrier, not a separate object to track. I found it interesting that they apparently trained the neural net to classify these as a single object. I was wondering if they would allow a later layer to simply "understand" the nature of "captive items".

I'd like to know what happens if that bike falls off the carrier. Or what the system thinks of a trailer being pulled by a truck, and what it will do when it sees the trailer detach from the truck pulling it and become a separate entity.

I think it is indeed reasonable to assume that the policy stuff is less mature, if for no other reason than it has to sit atop the visual/environmental portions of the stack. And they had to start over on those parts when they dropped MobilEye.

But interestingly, Karpathy did discuss where it makes sense to use heuristics vs. moving stuff into a NN at that layer. I'm sure there will be a lot of work, and potentially churn, here. I think we may have differences on the meaning of "long time" or "not any time soon", however...
I don't think the scenario you described has anything to do with policy.

The trained NN should recognize the attached bike as a single object with the car, based on the fact that they are together. Once they are separated, the NN would recognize the fallen bike as a separate object and roadblock. This should be achievable by the perception engine alone.
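The attach/detach distinction being discussed here could, as a toy illustration, come down to simple bounding-box containment at the perception layer. This is a minimal sketch; all function names, box values, and thresholds are my own illustrative assumptions, not anything Tesla showed:

```python
# Sketch of the "captive item" idea: treat a detection (e.g. a bike)
# as part of its carrier while its box sits inside the carrier's box,
# and promote it to an independent object the moment they separate.

def containment(inner, outer):
    """Fraction of `inner`'s area lying inside `outer`.
    Boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(inner[0], outer[0]), max(inner[1], outer[1])
    ix2, iy2 = min(inner[2], outer[2]), min(inner[3], outer[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return (iw * ih) / area if area > 0 else 0.0

def classify_attachment(item_box, carrier_box, threshold=0.8):
    """'attached' while the item is mostly inside the carrier's box,
    else 'separate' (a new object to track and avoid)."""
    if containment(item_box, carrier_box) >= threshold:
        return "attached"
    return "separate"

# Bike on a car carrier: its box sits inside the car's box.
car = (100, 50, 300, 200)
bike = (180, 60, 260, 140)
print(classify_attachment(bike, car))         # attached

# After falling off, the bike's box no longer overlaps the car's.
fallen_bike = (400, 150, 480, 230)
print(classify_attachment(fallen_bike, car))  # separate
```

A real perception stack would of course associate objects over time and in 3D rather than with single-frame 2D boxes, but the split-into-two-tracks behavior is the same idea.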
 
So I just skimmed 5 analyst reports on yesterday's presentation. Every single one was skeptical of Elon's insistence that LIDAR was not useful. None of them provided justification for their belief, other than "everyone else is using LIDAR". I swear, financial analysts wouldn't recognize a game changing technology if it was handed to them on a silver platter.

No one thought FSD was going to happen next year. The test drives had issues:

"Throughout the ride, the car performed relatively well but experienced a few rough maneuvers and had one disengagement where it failed to recognize cones blocking off some parked vehicles on the side of the road."

"Tesla demonstrated true Level 4/5 capable autonomous vehicles which, in our experience, traveled for more than 20 minutes over suburban and highway roads with absolutely no human engagement."

"While the vehicle was hesitant at times (RVs parked on the side of the road), took a turn or two tight, and was tentative in a collector merge, we equate the experience to the APTV rides in Las Vegas at CES 2017 (which was also a mix of on-and-off highway) —though a key differentiation is a lack of LIDAR or V2X installed/being deployed. Further, the vehicle was slightly more aggressive than those of OEM peers, but still did show some signs that improvement is needed (like all other test rides we have experienced). Altogether, we thought this was a positive showcase for where the company’s technology is currently, particularly the ability to navigate off-highway and recent internal push to enhance this development."

"Our ride handled both highway and off-highway roads well, though it was very cautious and at times hesitant to change lanes (even with no other vehicles nearby). This led to a somewhat jerkier ride."

"The biggest differentiator, in our view, is that Tesla conducted a complete, fully autonomous 20-minute test drive including on-highway and off-highway suburban streets without Lidar. Was it perfect? No. Did the driver have to manually intervene/disengage autopilot? On our drive, yes - one time when the car was about to miss a right-hand turn on a ramp. Did I feel safe? Yes. Would I want to fall asleep behind the wheel while in autopilot? Not yet."

These five analysts were bears, with low TSLA price targets. Basically, if Tesla was trying to attract valuations that other Autonomy companies are getting, it doesn't seem to have convinced these guys.

The thing these guys are missing (although some did pay lip service to this observation, but then didn't follow up on the implications) is that all other autonomy systems rely on very expensive and ugly LIDAR. For the past four years, Tesla has been the leader in autonomous driving, and based on yesterday's presentation, it's only going to get better for Tesla relative to the competition. They have a 7-year lead (and counting) on EV tech, and a growing lead on autonomous driving for the average Joe who wants to make long boring drives, or commuter traffic jams, better. Analysts are completely missing the point that EV+AP makes any Tesla a much more valuable car, regardless of when FSD arrives.

If I see anything from the more bullish analysts, I'll summarize them.

I feel most longs are generally honest people. A problem with honest people is that they don't lie, so they also trust what they see/hear. I would say never trust the analysts, never trust anyone on Wall Street, never trust CNBC, NYT, GS or MS, etc. They are not doing charity work for the public. They are players in the money game. There are many ways to play this money game.

For example, suppose a company told their clients to short XYZ, then saw some fundamental things and realized XYZ has 10-fold upside. What do you think their analyst will do the next day? Publish a report telling the world XYZ will rise 10-fold? No - if they did that, their clients would be ruined. They will continue to write *sugar* about XYZ until their clients', and especially their own, positions have switched.
 
On a side note, I find it incredible that two immigrants - Elon from South Africa and Andrej from Slovakia - emigrated to Canada and ended up in the US working together at Tesla. Stunning.
Contrary to popular belief, immigrants are generally very inventive and make impressive contributions to society. A/C electricity was invented by a Serbian immigrant, modern warships with turrets were invented by a Polish immigrant, even the parking metre was invented by a Mexican immigrant.
 
No, they specifically said that the path planned would have been the same even if they had never seen that road before or had gotten data on it from the fleet.

Later he or Musk clarified that it would work on any road within the US, the reason being that the NN learns what curves it can expect.

Roads are built to certain standards and rules. Eventually the NN learns what to expect based on visible clues. However, road-building rules vary from country to country, so path prediction would not work using another country's data.
 
I don't think the scenario you described has anything to do with policy.

The trained NN should recognize the attached bike as a single object with the car, based on the fact that they are together. Once they are separated, the NN would recognize the fallen bike as a separate object and roadblock. This should be achievable by the perception engine alone.
You are right, I did not write that clearly.... I suspect that's what should happen, and would be handled at that layer...

When I said 'I was wondering if they would allow a later layer to simply "understand" the nature of "captive items"', that's where I was wondering if they would have allowed the policy layer to deal with the potential for that happening.

Probably bad example.
 
Curve prediction, then?

Curve prediction for comfortable braking and acceleration when cornering. Safe speeds are a different mechanism:

I'm a little surprised it's taken them this long to do that.

(Eyeroll)

That's unreliable, of course, so there have to be some backup features for when the predicted curve slams into a brick wall (another edge case).

Safe speeds are determined by assuming that ALL invisible road sections are hiding a brick wall, or something even worse: for example, a stationary pedestrian.

I.e. the distance to the invisible road sections, combined with weather conditions, defines a maximum safe-speed upper boundary.

Curvature detection estimates the best speed within that safe boundary that is still comfortable to ride at.

I.e. it's a driving-comfort feature that makes FSD driving "smoother" and more relaxing on curvy roads, within the strict safety boundaries of how much of the road is visible. It doesn't have to be perfect, because all speeds within that range are safe at all times.
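To make the two mechanisms concrete, here is a minimal sketch of that split: a hard safety ceiling from how much road is visible (assume the unseen road hides a stationary obstacle), a softer comfort ceiling from predicted curvature, and the commanded speed as the minimum of the two. All numbers and function names are my own illustrative assumptions, not Tesla's actual values:

```python
import math

def max_safe_speed(visible_distance_m, max_braking_mps2=6.0, reaction_s=0.5):
    """Highest speed v at which the car can stop within the visible road:
    v * reaction + v^2 / (2 * a) <= visible_distance.
    Positive root of v^2 + 2*a*t*v - 2*a*d = 0."""
    a = max_braking_mps2
    return a * (-reaction_s + math.sqrt(reaction_s**2
                                        + 2 * visible_distance_m / a))

def comfort_speed(curve_radius_m, max_lateral_mps2=2.0):
    """Highest speed keeping lateral acceleration v^2 / r comfortable."""
    return math.sqrt(max_lateral_mps2 * curve_radius_m)

def target_speed(visible_distance_m, curve_radius_m):
    """Comfort speed on the predicted curve, clamped by the hard
    safety ceiling from visibility."""
    return min(max_safe_speed(visible_distance_m),
               comfort_speed(curve_radius_m))
```

This is why a wrong curvature prediction degrades comfort, not safety: the visibility-based ceiling still applies no matter what the curve estimate says.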

Sorry, but you seem to know even less about FSD driving technologies than Andrej Karpathy and all the other incompetent morons at Tesla. ;)
 
This assumes that computing power cannot reach the levels of processing of the human brain.

According to this estimate that's not the case:


The firing rate in the neocortex (which hosts 80% of the brain's neurons) is between 0.3 and 1.8 firings per second. With 80 billion neurons in the neocortex, that's about 24-144 billion firings per second.

The average number of synapses per neuron is 10,000 - while the average information content of a synapse is 0.1 bits, or ~100 bits per neuron.

So the NN processing speed of the entire human neocortex is ~2,400-14,000 billion bits per second. Now the operations it performs per firing is addition and multiplication and capping - which we can recognize with a ~10x complexity factor, so the net speed is about 24,000-140,000 billion bits of simple arithmetic operations per second. (This is probably generous to the brain.)

The Tesla AI chip computes ~144,000 billion mini-floats per second (144 TOPS), where a mini-float is 8 bits. So the total processing power is ~1,152,000 billion bits of simple arithmetic operations per second.

So if we believe these estimates then the Tesla chip is already comfortably beyond the NN processing power of the human brain, by a factor of ~8x.

Put differently, every Tesla camera has as much NN computing power allocated as a single dedicated human brain watching that camera 24/7 ...

What the human brain arguably does much better is information storage: 1,000,000 billion synapses can store about 1,250 TB of data, which is a lot more than what Tesla can store in their NNs.

But if we accept that "legally safe driving" requires only a very small subset of the vast amount of data a human brain stores, then the Tesla AI chip can already do an order of magnitude better job, with vastly superior control latencies.
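For anyone who wants to check the arithmetic, here is the comparison reproduced directly from the post's own figures (these are rough estimates the post assumes, not measurements):

```python
# Back-of-the-envelope: neocortex NN throughput vs. the Tesla AI chip,
# using the figures stated in the post above.

NEURONS = 80e9                # neocortex neurons
FIRING_HZ = (0.3, 1.8)        # firing-rate range, per second
BITS_PER_NEURON = 100         # post's figure for info per firing
COMPLEXITY = 10               # factor for add/multiply/cap per firing

brain_bits_per_s = tuple(NEURONS * hz * BITS_PER_NEURON * COMPLEXITY
                         for hz in FIRING_HZ)
# roughly (2.4e13, 1.44e14) bits/s, i.e. ~24,000-144,000 billion

TESLA_TOPS = 144e12           # HW3 NN accelerator: ~144 TOPS
MINIFLOAT_BITS = 8
tesla_bits_per_s = TESLA_TOPS * MINIFLOAT_BITS   # ~1.152e15 bits/s

# Conservative ratio: compare against the brain's UPPER estimate.
ratio = tesla_bits_per_s / brain_bits_per_s[1]
print(f"chip vs. brain (upper estimate): {ratio:.1f}x")
```

Note the ~8x figure comes from comparing against the top of the brain's range; against the lower estimate the gap would be far larger.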

That is going to save lives, and this will be apparent from the accident statistics.
Great breakdown, thanks. However there are other differences, some good, some bad.

We have only two eyes, whereas the car has 8/9 cameras (the extra one inside is not currently used AFAIK), 12 ultrasonic sensors, and a radar. Our eyes also only have maximum resolution for a small area in the center of vision, whereas the car's are full resolution for the field of view. The consequence of this is that the NN in the car is processing a lot more input data. This is good for when something comes out on the left while you're looking right, but probably needs even more processing to handle the extra data. So I discount a bit that the NN has more processing power than my brain. Still on the positive side, though, it doesn't get tired or inattentive.

And without wanting to pick a fight with @neroden, I don't agree that his oft-repeated example of needing to drive off road to get around a backhoe is all that meaningful. To boldly anthropomorphize, the NN is going to think "gee, no idea what that is, but it looks solid and I can't fit between it and the human holding a sign anyway, I guess I will just go really slowly around them...". The NN won't get what pilots call "helmet fire", where they just get overwhelmed and ignore inputs, or freeze up. Instead, it will still output *something*, and continuously re-evaluate the situation as it moves.
 
I agree with that. I would think it will take easily 6 months just to get regulators to allow robot cars, even after a massive amount of data is collected showing the relative safety compared with human drivers.

If the tech actually rolls out by the start of 2020, figure at least 12-18 months of dealing with the initial rough edges/corner cases and collecting the data. That would allow Tesla to approach regulators, perhaps as early as the end of 2020 or early 2021, with compelling data in a quest to allow robotaxis. Nothing with Tesla goes perfectly, so figure this tech actually rolls out to the fleet (disguised as a seriously advanced driver-assistance system) sometime next year. If they manage to work through the majority of issues over 18-24 months, that puts us well into 2021 or even 2022 before the regulation side even starts.

So, if everything goes fairly well, I would guess at least 2022 before robotaxis become reality. Having said that, if this does go pretty well, there will be a LOT of enthusiasm generated well before 2022. In fact, even next year it may seem like it will be going live soon, when in reality it will still probably take a few years from there.
Keep in mind the rate of learning is increasing, AND the methods of learning have improved and will likely continue to improve (shadow mode and Joey or whatever that was planned).
Prior improvements or timelines are therefore no longer accurate for estimating future progress. Again, NoA took a leap forward and surprised many. I went from "I would never nap" to "if I had to, I would." That is huge.
 