Welcome to Tesla Motors Club
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

I love that all it’s taken is an obscure podcast appearance from Elon and the scheduling of a FSD event about which we have almost no detail to completely distract us from such a rubbish P&D report. Hopefully Mr Market is as distractable!

Neither Lex Fridman nor his podcast is obscure: virtually the who's who of AI has appeared on his podcast, as well as other interesting people such as Eric Weinstein. Lex himself has appeared on Joe Rogan's podcast. Maybe it is time that you looked outside your NPR Planet Money bubble. :rolleyes:
 
A thought on what should REALLY impress investors on April 22nd, more than even a Level 5 demonstration:

A business model that showed how the Tesla Network would be profitable.

This is a hurdle that both Lyft and Uber have NOT YET CONQUERED, AND MAY NEVER, and yet they are both valued in the tens of billions of dollars.

We are all focused on the technology. The killer feature of the Tesla Network may be profitability!

Another point to add in favor of the "opt-in" or incremental approach to FSD. In the world of self-driving truck start-ups, the strategy of many companies is to first get approved to self-drive on interstates, and develop hubs near freeway exits where local drivers can pick up the truck and deliver the load to local destinations.

Tesla could adopt a similar model and have Megachargers located near freeway exits for recharging and create designated locations near key freeway exits where drivers could take over for the last few miles, eliminating the need for a driver for the vast majority of 1000-3000 mile trips.

Just the initial stage of being able to self-drive safely on a subset of interstate routes would be extremely valuable, even if Tesla had to rely on drivers for the final leg on city streets in some locations for a transitional period.

(If anyone is interested, this CNBC video provides a decent introduction to the thinking of some of the other self-driving trucking companies -- short shrift to the Tesla Semi though. How the rise in self-driving trucks will transform the trucking industry)

Incremental progress can be extremely valuable and has the potential to generate tremendous amounts of revenue, even as Tesla develops a general solution that it hopes will work almost everywhere eventually.
 
There is no disagreement; you are simply wrong. The driver DOESN'T pay attention. The user is NOT SUPPOSED to notice a system failure; the ADS detects and handles it while providing sufficient time to hand control back to the user.
...
This completely contradicts the SAE document as it explicitly says that the user DOES NOT monitor the environment nor the performance of the ADS system.

That is your misinterpretation. Contrary to your belief, the SAE document clearly states (on page 11) that in a Level 3 system the driver determines that intervention is necessary and brings the vehicle to a minimal risk condition, as opposed to higher-level (4, 5) systems, where the ADS has to be capable of that:

NOTE 2: At level 3 given a DDT performance-relevant system failure in the ADS or vehicle, the DDT fallback-ready user is expected to achieve a minimal risk condition when s/he determines that is necessary, or to otherwise perform the DDT if the vehicle is drivable.

NOTE 3: At levels 4 and 5, the ADS is capable of automatically achieving a minimal risk condition when necessary

The above contradiction between your statement and the SAE document clearly proves that you have a severe reading comprehension problem.

Q.E.D.

Note that the National Highway Traffic Safety Administration agrees with my interpretation, not yours!

Your insistence on your misinterpretation, despite my clearly cited contradictions with the SAE document, has passed the trolling threshold; therefore I will not waste my time responding to you any more.
 
No, it doesn't necessarily mean that large objects such as the universe, or large macroscopic objects such as a Tesla Semi Truck or a cat have a single wave function.

What you described is an extrapolation of the laws of quantum mechanics to very large systems, the "Copenhagen view" of quantum mechanics, which only a minority of quantum physicists actually support (!):


The "Copenhagen view" has a number of problems:
  • Who are the "observers" that "measure" and cause the collapse of a wave function?
  • Who were the "observers" one microsecond after the Big Bang? The universe couldn't have expanded without the wave function collapsing.
  • Who are the "observers" inside a black hole?
  • Is Schrödinger's cat dead or alive before we open the box?
So there are other, scientifically rigorous explanations for the "observer problem":
  • that there's a dampening of quantum fields with larger objects,
  • that there's a "many worlds" multiverse,
  • or super-determinism,
  • or a simulation universe,
  • or "objective collapse" where larger quantum systems automatically collapse their wave functions,
  • etc.
We simply don't know (yet) whether large, complex wave functions exist, we don't know which of the variants above actually exist - but we do know that the Copenhagen view is not universally accepted! :D

BTW, some of these are falsifiable, with viable experiments proposed that would test whether very large quantum systems have wave or particle behavior.

(Anyway, this comment is OT and not OT at once, until a moderator measures it: the "TMC uncertainty principle".)
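For concreteness, the "single wave function for a macroscopic object" at issue can be written compactly in standard Dirac notation (a textbook idealization of Schrödinger's cat, not a claim about how real decoherence plays out):

```latex
% Schrödinger's cat as an entangled atom--cat state (textbook idealization).
% Before "measurement", the Copenhagen view assigns the joint system one wave function:
\[
  \lvert \Psi \rangle
  = \tfrac{1}{\sqrt{2}}
    \bigl(
      \lvert \text{atom undecayed} \rangle \otimes \lvert \text{cat alive} \rangle
      + \lvert \text{atom decayed} \rangle \otimes \lvert \text{cat dead} \rangle
    \bigr)
\]
% and measurement collapses it to one branch with probability 1/2 each.
% Objective-collapse models add a size-dependent collapse mechanism;
% many-worlds keeps the full state and drops the collapse postulate entirely.
```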
I am aware of these issues. Nonetheless, your claim is a little misleading re Copenhagen. It still holds 42% consensus (below 50%, so not a majority, but still the largest held view, with the next largest being 24%). From the article you cited:

"The Copenhagen interpretation still reigns supreme here, especially if we lump it together with intellectual offsprings such as information-based interpretations and the Quantum Bayesian interpretation."

Also, the Copenhagen interpretation doesn't really bear on macroscopic systems specifically; rather, it has to do with interpreting any quantum system, whether micro or macro. From Copenhagen interpretation - Wikipedia:

"According to the Copenhagen interpretation, physical systems generally do not have definite properties prior to being measured, and quantum mechanics can only predict the probability distribution of a given measurement's possible results. The act of measurement affects the system, causing the set of probabilities to reduce to only one of the possible values immediately after the measurement. This feature is known as wave function collapse."

Of course, you can debate whether the Copenhagen interpretation applies to very large objects, e.g., Schrödinger's cat.

Look, ultimately, we cannot say for sure whether QM (quantum mechanics) is a final theory. We know, for example, that Einstein's theory of gravity is not, since it cannot describe quantum systems. And QM may likewise be impacted as it tries to deal with gravity (it currently cannot). But the current consensus is that QM is fundamentally correct and applies to all physical systems, regardless of whatever interpretation you may hold, Copenhagen or otherwise. Different interpretations, such as many worlds, super-determinism, etc., do not mean that the system is not described by the mathematical laws of quantum mechanics (wavefunction, matrix mechanics, etc.). Rather, it is a question of how you interpret the mathematics.

(Agree this is OT. My background is physics, and I'm quite familiar with the nuances of various quantum interpretations and their impact on reality and philosophy. What I was talking about was the consensus view. If you really want to have a discussion of physics, philosophy, and reality, we can. I can talk all day about Bell's inequality, whether objects really exist, and what quantum mechanics really means.) :D
 
So your recommended approach is "not approve self-driving for a given road until there've been a statistically-significant number of supervised driverless miles on that road without a fatality"? Then the vast majority of the world's roads will never be approved, and you're only talking about a system that's driverless for major highways. Because contrary to your suggestion of "a million miles", fatal accidents occur about 1 per 75 million miles. And "1" isn't statistically significant; you need a larger sample.

If that's your plan, then throw away all of the dreams of all of the things that self-driving is supposed to accomplish for us, because you're not describing driverless point-to-point service anymore. I'm not sure that even the busiest road in our entire country would ever have statistically significant data about its fatality rate on self-driving.
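The "1 isn't statistically significant" point above can be made quantitative with the statisticians' "rule of three". A minimal sketch, assuming fatal crashes are rare, independent events (Poisson) and taking the ~1-per-75-million-miles human baseline from the post (the function name and 95% confidence level are my own choices for illustration):

```python
import math

# Approximate human baseline from the post: ~1 fatality per 75 million miles.
HUMAN_FATALITY_RATE = 1 / 75_000_000  # fatalities per mile

def miles_for_zero_fatality_bound(target_rate, confidence=0.95):
    """Fatality-free miles needed to claim, at the given confidence,
    that the true rate is at or below target_rate.
    With zero events, P(0 fatalities) = exp(-rate * miles), so we need
    exp(-target_rate * miles) <= 1 - confidence."""
    return -math.log(1 - confidence) / target_rate

# Just to show parity with human drivers (not even "safer than"):
miles = miles_for_zero_fatality_bound(HUMAN_FATALITY_RATE)
print(f"~{miles / 1e6:.0f} million fatality-free miles required")
```

This comes out to roughly 225 million fatality-free miles to demonstrate mere parity at 95% confidence, which is why a per-road approval standard based on fatalities alone is unworkable, and why less severe proxy statistics (interventions, minor collisions, injuries) end up carrying the statistical load.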

Every road, every bend, will have a mountain of data. Interventions, bingles, injuries. The fatality probability will be a function of less severe stats.
 
They sure spend a lot of time on the brink of failure. Unlike other companies' EV efforts, which spend a lot of time on the brink of success.

Shortsville Times:

Tesla on Brink of Failure: About to Drive Off a Cliff Because They Only had 40 ft Topo Resolution and Didn't See It, KarenRei Says 'I told you so!'
 
I will repeat that all of this Go and Chess history is totally irrelevant to "self driving" because *we don't actually know the rules for driving* -- or at least Tesla staff don't! As soon as a suitable set of rules is devised, the problem is essentially solved, yes. But devising that set of rules is very, very hard. I don't think we've ever tried running a neural net on a game with unspecified rules which the computer isn't told and which the humans argue about.

I mean, I think it's embarrassing that they're still working on computer vision and object classification. They are deluding themselves if they think solving that solves self-driving; that's just a prerequisite for even being able to *start* work on self-driving.

Tesla does appear to be far ahead of everyone else, and if they manage to simply implement automatic emergency braking that ends all "run over pedestrian" crashes, they'll have an insurance goldmine on their hands.

But "sleep in your car" self-driving is a bridge too far for the near future; from what I can tell they still haven't really started on the corner cases.
Far be it from me to ever prolong an argument discussion, especially on the internet... but this popped up this morning:

AI defeated a top-tier 'Dota 2' esports team

This isn't the first time an AI has won in these types of games, but Dota 2 is a large arena-style game with multiple players, wide-ranging characteristics per player, individual unique abilities, and lots of environmental variables, all with substantially less structure/bounding than something like Go or chess.

An arena-style game such as this is probably much more akin to the task of driving. While I don't know if the AI was given general rules and policies, or was jump-started with strategies, it's again another indicator of what's developing.

Incidentally I saw a report on another online-game AI some time ago that again talked about unique methodologies it developed during game-play… something like repeatedly sacrificing one character to lure an opponent or something like that.

The future will be interesting...
 
What I hope to see out of this event this weekend. Just one thing, really:

Throwing the gauntlet down.

I want to see Tesla issuing a challenge - to Waymo, to Cruise, to all of the big players: Try to better us. Let some third party pick a bunch of random point-to-point, non-geofenced routes across the US - you with your best development software and hardware, us with ours (HW3, development software releases) - and let's see who has the fewest driver interventions, missed turns, traffic violations, and independently assessed ride safety/quality scores.

It'd seriously take down "The Myth of Waymo", and also Sam "GM University" Abuelsamid's nonsensical "Tesla is last" ranking that people keep circulating. And it'd do so whether they accept the challenge or not. If they refuse the challenge... then the public impression will be, "what are they scared of?" If they accept the challenge... then for once they have to try to operate in a world that they haven't geofence-cheated on (aka "studying to the test").

So long as Tesla thinks that they could at least place highly, this would be a brilliant piece of PR. I hope they're ready for such a competition, and call out everyone else to compete. Even a high-ranking, but non-first-place score would mean that Tesla's autonomy systems would have to start being valued at at least a fraction of the value of Waymo and their ilk, rather than "near-zero" as they are today.
 
If FSD requires developing a "driving policy", then I guess Tesla's FSD will fail. Because Andrej Karpathy is convinced that "Gradient descent can write better code than you. I'm sorry."

I don't think he'll change his mind until and unless events force him to. I'm not 100% convinced he's right, but I'm disinclined to bet against him.

Andrej Karpathy: Software 2.0

I've watched his video on this...and it's compelling. But this "code" will be the responses in a given situation in order to meet a policy objective. Those policies still have to be defined.
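To make the "gradient descent writes the code" idea concrete: a toy illustration of the Software 2.0 claim, not Tesla's actual stack. The scenario, the functional form, and the data points are all invented for the sketch. Instead of hand-writing a braking rule, we state an objective (match example braking decisions) and let gradient descent fit the rule's parameters — but note that the policy objective itself, the training targets, still has to be defined by a human:

```python
# Training examples (made up): (distance to obstacle in meters, desired braking force 0..1)
data = [(5.0, 1.0), (20.0, 0.6), (50.0, 0.2), (100.0, 0.05)]

# Guessed functional form for the "rule": brake = w / distance + b
w, b = 1.0, 0.0
lr = 0.5  # learning rate

for step in range(2000):
    grad_w = grad_b = 0.0
    for dist, target in data:
        pred = w / dist + b
        err = pred - target
        # Gradients of mean squared error with respect to w and b
        grad_w += 2 * err / dist / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned rule: brake = {w:.2f}/distance + {b:.2f}")
```

The optimizer "writes" the rule's coefficients, but a human still chose what counts as correct behavior. That's the gap the post is pointing at: gradient descent optimizes toward a policy objective, it doesn't define one.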