
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

That is based on comments from the demo impressions video I posted, and also on Karpathy's comment during the presentation that Tesla only added path planning for sharper turns / ramps into AP about 4 months ago.
That was in reference to releasing Navigate on Autopilot and the car being able to path plan its way around cloverleaf interchanges where it can't see very far around the curve. The ability to path plan "blindly" around curves didn't exist in publicly released vehicle software before NoA was released. It existed and was being tested for some time before that, though how long it was tested and how long it took to develop is unknown to us. The same goes for cut-in detection. They don't just develop a feature and immediately dump it on the public untested; he was referring to when these features began to be included in public builds.
 
It would be helpful if you can give some examples.
Under what conditions you can pass on the right is not high level at all.

It is. You have to define what “the right” is in 3D space, in all possible conditions, and what it means to “pass” in those same conditions. You need to identify high-level exceptions to it as well (a car broken down on the left, for example, or if you’re exiting the freeway, or if the left lane is a carpool lane, or, in some cases, when the *right* lane is a carpool lane).

We subconsciously solve these problems pretty much instantly, but actually defining all the rules that govern them is extremely difficult. Worse, we need a way to inject these decisions into the middle of a neural network doing things we don’t understand.
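To make the rule explosion concrete, here's a toy sketch of "may I pass on the right?" — every condition name below is invented for illustration, and a real planner would need vastly more of them (plus a perception system that can populate them reliably):

```python
# Toy sketch: one driving rule fans out into special cases. All condition
# names here are hypothetical, not anyone's actual planner code.

def may_pass_on_right(scene: dict) -> bool:
    """scene is a hypothetical environment model (lane types, hazards, intent)."""
    if scene.get("exiting_freeway"):          # exception: we're taking a right exit
        return True
    if scene.get("left_lane_blocked"):        # exception: breakdown on the left
        return True
    if scene.get("left_lane_is_carpool") and not scene.get("we_are_carpool"):
        return True                           # exception: we can't use the left lane
    if scene.get("right_lane_is_carpool") and not scene.get("we_are_carpool"):
        return False                          # counter-exception: can't use the right lane either
    return False                              # default: passing on the right is disallowed

print(may_pass_on_right({"exiting_freeway": True}))  # True
print(may_pass_on_right({}))                         # False
```

And even this toy version punts on the hard part: the `scene` dict has to come from a neural network's interpretation of raw sensor data, which is exactly where injecting hand-written rules gets messy.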
 
This is idiocy. The guys who are in the middle of this have said they're close, less than two years away.

They said that they were two years away in 2015 or 2016. I said they were wrong. I was right. Game, set, match.

You, from your perch of ignorance, declaim that they're a bunch of deluded liars. I know who I'm going to believe.
Yeah, that's called "being an idiot" on your part. They are deluded, and so are you.

I have an awful lot of background knowledge here. I was not impressed by the technical detail of the presentation, and I did follow all of it. The Tesla guys are very smart, but they are too close to their work and are not seeing the next wave of problems they're going to hit -- and Musk is an incurable extreme optimist. I already know what they're going to hit. I'm waiting for them to notice it.

Once they notice that set of problems, it will take at least two years for them to resolve it (and that's optimistic). They haven't noticed it yet, therefore it will take more than two years.
 
I'll give a different scenario. Let's go for a true worst case scenario. Tesla NEVER gets "sleep in your car" done.

You can buy (or rent) a Tesla which will do 90% of your driving for you. You have to stay alert and look for trouble -- so you can't be drunk, or asleep, or reading -- but you can mostly just be watching the passing scenery. Your legs can relax. Your arms can relax. You can focus on looking for really weird *sugar* coming up ahead.

Or, you can buy anyone else's car and have to drive it entirely manually all the time.

Or, you can use a Waymo or Cruise car, and (best case) have it do 90% of the driving (they'll have the same inability to get to 100% that Tesla will), but only in a narrow geofenced area, and with a car which has super expensive hardware on it for no reason.

I don't think this eliminates taxis (or Uber/Lyft which are basically taxis). It probably makes Teslas very attractive cars for taxi drivers, though. The job becomes a much more relaxing one of chatting with the customer, looking out for trouble, and finding clever shortcuts.

Anyway, this is a great scenario. But Musk has to overpromise robotaxis, damn it.
Why wouldn't he, if he, his team, and all the people that work on FSD say it's coming? The man speaks what he feels. He doesn't hold back. You know that. I know that. The whole world knows that. So why are you losing so much sleep screaming at the wind? You're going to give yourself a coronary.

Dan
 
Wrong. It can't simulate things you can't think of -- that's the point Karpathy made.

The only way to get a really broad simulation is to hire hundreds or thousands of people from all around the world to suggest weird scenarios. (Like Karen's LAVA FLOWING ACROSS THE ROAD. Nobody in California would have thought to simulate that.)
I don't know if anyone saw my note last night, but if Google were to give every car owner (on earth, or in all new veh, or in a particular state...) a free camera with a cell connection, could they get their missing data and catch up to Tesla?
How else could someone level the data field without a car that includes data gathering?
The point that older Model S's still contribute to learning is a hint. Seems it doesn't need a NN?
 
The law is to stop before entering the roadway. Many cars don't stop. If your rule assumes that the other car obeys the law, it's a crash every time. The problem is that there are far too many situations; it's not possible to write enough rules.
I am not saying learn those rules by observing others; I am saying manually translate the road rules into software rules. I don't see how hard it is to specify the rule: "stop before entering the road".
 
UBS perma-bear "Colin Langan" is (unsurprisingly) doubling down on their LIDAR misunderstanding and FUD:

"UBS analyst, Colin Langan, reiterated a Sell rating and $200 price target on Tesla after attending the company's Autonomy Day. The highlights include a target date for the deployment of "feature complete self driving" capabilities, the launch of the Robotaxi in 2020, and featured that the in-house designed TSLA chip has more power and uses less silicon than the prior gen NVDA chip."

'The analyst stated "The primary sensor is often vision, but most experts believe LiDAR will be needed since it provides more unique data, even if it's small, that makes the entire system safer. Regulators will likely want more sensor data to ensure the highest level of safety. TSLA may also need more nonsimulated testing. TSLA reported 0 AV miles driven in California in 2018 vs. GOOG which reported 1.2m".'

There are two false claims in these two short paragraphs already:
  • He is misconstruing the fact that Tesla did not opt to report disengagements in California into a false claim that Tesla only performs "simulated testing". In reality Tesla got disengagement events from about a billion miles of Autopilot driving, a data stream three orders of magnitude larger than the Google figure UBS cites ...
  • They also don't realize that even a very cheap ~$5,000 LIDAR sensor will crowd out much more effective sensors, i.e. in Tesla's model LIDAR will actively reduce car safety and kill people.
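The "three orders of magnitude" figure in the first bullet is simple arithmetic, using the rough numbers from the post:

```python
# Back-of-envelope check: Autopilot fleet miles vs. the California AV miles
# UBS cites for Google. Both inputs are the post's rough figures, not
# official numbers.
tesla_autopilot_miles = 1e9      # ~1 billion Autopilot miles (per the post)
waymo_ca_2018_miles = 1.2e6      # 1.2M California AV miles UBS cites

ratio = tesla_autopilot_miles / waymo_ca_2018_miles
print(round(ratio))  # ~833x, i.e. roughly three orders of magnitude
```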
He is hiding behind the "most experts believe" weasel words and false appeal to authority, instead of using easily verified facts and logic. I absolutely do not want to read his full analyst report: garbage in, garbage out. :D

In other, completely unrelated news, UBS is apparently positioning to IPO Waymo:

Alphabet's self-driving car business could book $114 billion in revenue in 2030, says UBS

"On the heels of Waymo's commercial launch on Wednesday, investment bank UBS estimates that Alphabet's self-driving car unit will reel in $114 billion in revenue in 2030."

Which I suspect explains why they absolutely have to "misunderstand" Tesla's plans. ;)

I think the main problem analysts have here, is a misunderstanding of who the experts are. They're Tesla, and nobody else. All these other "experts" are clearly years if not decades behind the state of the art. It doesn't matter if most of the people in the field agree, if they're wrong and behind the times. Whether that's an intentional misunderstanding on the part of the analysts may vary by analyst - some may be following the herd.
 
I think NVIDIA's problem is where they say: " The Xavier processor features a programmable CPU, GPU and deep learning accelerators, delivering 30 TOPs. We built a computer called DRIVE AGX Pegasus based on a two chip solution, pairing Xavier with a powerful GPU to deliver 160 TOPS, and then put two sets of them on the computer, to deliver a total of 320 TOPS"

Those are going to be discrete units with differing capabilities, linked by a bus (or buses), and likely non-unified memory. If that 30 TOPS is 15 TOPS of NN accelerator, 10 TOPS of GPU, and 5 TOPS of CPU, then it's a little disingenuous to lump it all together and say, "Look, 30 TOPS" when the problem at hand needs 25 TOPS of NN processing.

Kinda like adding together the horsepower of my truck's ICE engine, starter, and the generator it's towing, and saying "Look, a 600 HP solution!"
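The arithmetic makes the point: the 15/10/5 split below is this post's hypothetical, not NVIDIA's published breakdown, but it shows why a lumped total can mislead.

```python
# Hypothetical per-unit breakdown of a 30 TOPS chip (the post's example,
# NOT NVIDIA's actual numbers). Different units can't freely share a
# workload, so only the matching unit's TOPS count for an NN-bound problem.
xavier = {"nn_accel": 15, "gpu": 10, "cpu": 5}  # TOPS per unit (hypothetical)

total_tops = sum(xavier.values())   # the marketing number
nn_workload_tops = 25               # what the NN problem hypothetically needs

print(total_tops)                              # 30 -- looks sufficient
print(xavier["nn_accel"] >= nn_workload_tops)  # False -- the NN block alone falls short
```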

Interestingly, they spent some time talking about workload analysis and design optimization in the presentation...

You summarized it very nicely. Thank you.

This is the point the naysayers out there are overlooking, and so is @Bladerskb
 
The FSD video they released, although impressive, wasn't very "steaky" to me. It was almost the same type of situations they showed in their 2016 video. It's quite baffling to me why they wouldn't show more complicated situations involving inner-city driving, because that's where robotaxis will eventually be operating.

The answer is clear: Because their system doesn't work there yet.
 
I don't know if anyone saw my note last night, but if Google were to give every car owner (on earth, or in all new veh, or in a particular state...) a free camera with a cell connection, could they get their missing data and catch up to Tesla?
How else could someone level the data field without a car that includes data gathering?
The point that older Model S's still contribute to learning is a hint. Seems it doesn't need a NN?

It's not just a camera though. Pictures are only part of the data they collect. You have velocity, speed, etc. Then you have the cars used as "a mule" to test scenarios and so on and so on.
 
I am not saying learn those rules by observing others; I am saying manually translate the road rules into software rules. I don't see how hard it is to specify the rule: "stop before entering the road".
Yes, you can write that rule for your car, but then you have to write another rule for the car that doesn't. There are just too many individual cases to write rules for, many of which aren't covered by any law (e.g. there is no law against a deer jumping out in front of the car--but it happens frequently).
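To sketch the problem: the legal rule ("stop before entering the roadway") is trivial to encode, but a safe policy can't assume the other car obeys it — it has to predict from what the car is actually doing. Everything below (the braking figure, the function names) is invented for illustration:

```python
# Hedged sketch: a defensive check that trusts physics rather than the law.
# The ~3 m/s^2 comfortable-braking figure is an assumption, not a standard.

def other_car_can_still_stop(speed_mps: float, distance_to_roadway_m: float) -> bool:
    """Can the approaching car stop before the roadway at ~3 m/s^2 braking?"""
    braking_distance = speed_mps ** 2 / (2 * 3.0)
    return braking_distance <= distance_to_roadway_m

def safe_to_proceed(speed_mps: float, distance_to_roadway_m: float) -> bool:
    # The naive rule would just return True ("they must stop, it's the law").
    # The defensive rule proceeds only if the other car can physically yield.
    return other_car_can_still_stop(speed_mps, distance_to_roadway_m)

print(safe_to_proceed(5.0, 10.0))   # slow car with room to stop -> True
print(safe_to_proceed(15.0, 10.0))  # fast car that can't stop in time -> False
```

And that's just one interaction; the deer, the double-parked truck, and the rest each need their own version of this, which is the rule-explosion argument.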
 
Hey, can someone please explain shadow mode. I think I was seduced by that animation of ghost cars and lost track. It sounded like the car is calculating the probabilities of multiple decisions and picks the safest? It was considering a lane change (the shadow), but since a car was coming from behind it stayed the course as the safest path?
Am I getting it, or no?

Not exactly. It runs the network for some outputs that aren’t public yet and looks for conditions where that output turned out to be incorrect. When that happens, it saves a snapshot and uploads it to Tesla HQ for data ingestion. I believe one of the examples was cut-in detection: they ran the cut-in detector in shadow mode, and any time the person didn’t actually cut in, they uploaded video.
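In pseudocode terms, the loop described above looks something like this (all names invented; this is a sketch of the concept, not Tesla's actual pipeline):

```python
# Hedged sketch of "shadow mode": a not-yet-public output (say, a cut-in
# predictor) runs silently, its prediction is compared against what actually
# happened, and mismatches are snapshotted for upload.

def shadow_mode_step(frame, predictor, snapshots):
    predicted_cut_in = predictor(frame)            # shadow output, never acted on
    actual_cut_in = frame["car_entered_our_lane"]  # ground truth, observed later
    if predicted_cut_in != actual_cut_in:          # the prediction was wrong
        snapshots.append(frame)                    # queue the clip for upload
    return snapshots

# Toy usage: a predictor that fires whenever a neighbor's lateral drift is large.
snaps = []
frames = [
    {"drift": 0.9, "car_entered_our_lane": False},  # false positive -> uploaded
    {"drift": 0.1, "car_entered_our_lane": False},  # correct -> ignored
]
for frame in frames:
    shadow_mode_step(frame, lambda f: f["drift"] > 0.5, snaps)
print(len(snaps))  # 1
```

The key point is the driver never sees any of this: the shadow output doesn't steer the car, it just generates labeled failure cases for retraining.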
 
They don't? That explains a lot. ::sigh::

I try to surround myself with people who are like that ( who learn whatever they can).

LOL. I can assure you they don't.
Actually, I think this is one of the most underrated qualities that the "spectrum" gave Musk.
It makes him think outside the box, beyond fears and corporate cultures: if it's physically possible, then it's achievable.

The problem is when he encounters unknown unknowns, or simply he doesn't really know where real complexity lies:
he often misunderstands how people work, I would say (this is quite common, as you know).

Regarding FSD, I think he's overoptimistic with timelines, but also that he thinks a simple 10x improvement in crash statistics will give Tesla the opportunity to deploy level 5 robotaxis.
I think this is a flaw in his reasoning: nothing guarantees that regulators and the media will be reasonable enough to take just that metric and say "yeah, you're right, we will accept being killed by neural nets that no one in the world really understands". I fear people will be highly skeptical about this.

I mean, with all the Teslas in the world, we could have the first death in one or two days. Nobody will care about the thousands of other deaths in those 2 days.
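A quick back-of-envelope shows why the first fatality arrives fast even in the "10x safer" scenario — every input below is a rough assumption for illustration, not Tesla data:

```python
# Rough fatality-timing arithmetic. All figures are illustrative assumptions.
fleet_size = 500_000           # hypothetical robotaxi fleet
miles_per_car_day = 200        # hypothetical daily utilization per car
human_fatality_rate = 1 / 80e6 # ~1 death per ~80M vehicle miles (rough US-scale figure)
robotaxi_rate = human_fatality_rate / 10  # the "10x safer than humans" scenario

daily_fleet_miles = fleet_size * miles_per_car_day      # 100M miles/day
expected_deaths_per_day = daily_fleet_miles * robotaxi_rate
print(expected_deaths_per_day)  # 0.125 -> roughly one death every 8 days
```

Scale the fleet or utilization up a few times and you're at a death every day or two — each of which will make headlines, while the far larger number of human-driven deaths in the same window won't.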
 
OK, that sort of makes sense, but it's an evasion of the question. The question should have been more bluntly stated as
"You are not going to have robotaxis in 2 years. I know more about this than you do. As a major investor, I want to know how much of the budget is being spent on robotaxi-specific stuff, versus how much is being spent on stuff which will still be valuable when you fail to get robotaxis working."

I really enjoy your posts, but sometimes you really need to tone down the "I know more than everyone about everything" aspect.
 
I don't like the use of this term either. Most people will interpret it as "we're done and ready for customers" when it more likely means "we're out of development mode and moving into testing mode now and it will take a while for it to be ready for customers"

My interpretation is, they can't predict how soon this is going to become sufficiently safer than humans to declare victory. But they predict that they can do it within the limits of the hardware, tooling and general design they already have.
 
It is. You have to define what “the right” is in 3D space, in all possible conditions, and what it means to “pass” in those same conditions. You need to identify high-level exceptions to it as well (a car broken down on the left, for example, or if you’re exiting the freeway, or if the left lane is a carpool lane, or, in some cases, when the *right* lane is a carpool lane).

We subconsciously solve these problems pretty much instantly, but actually defining all the rules that govern them is extremely difficult. Worse, we need a way to inject these decisions into the middle of a neural network doing things we don’t understand.

If that is so difficult, then the example from the Karpathy presentation does not work at all. Karpathy specifically said they push rules to the cars, and the cars upload clips once those rules trigger. One of the examples is to look out for cars moving from the right lane into your lane.

I am afraid you are overcomplicating things. Your rule does not deal with raw reality; your rule works on an environmental model produced by the NN.