Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Sounds correct to me. When someone is coming up behind you, staying where you are is almost always safer than trying to guess which way the person will turn to pass you*.

Shadow mode just means that the car calculates what to do, but is actually under manual control so no action is taken. Tesla can then see the difference between what the system thought should have been done and what the driver actually did.

* Unless you have some crash dummies with you. <this is dark humour>
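
For the curious, a minimal sketch of what that shadow-mode comparison could look like (my illustrative framing; Tesla hasn't published this interface):

```python
def shadow_mode_step(sensor_frame, planner, driver_input, disagreement_log):
    """Run the autonomy planner on live sensor data but never actuate;
    record cases where it disagrees with what the human actually did.
    All names and thresholds here are illustrative assumptions."""
    proposed = planner.plan(sensor_frame)  # what the system would have done
    steering_gap = abs(proposed.steering - driver_input.steering)
    accel_gap = abs(proposed.accel - driver_input.accel)
    if steering_gap > 0.1 or accel_gap > 0.5:  # arbitrary disagreement bounds
        disagreement_log.append((sensor_frame, proposed, driver_input))
    return driver_input  # the human's input always controls the car
```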
OK, with shadow mode, I think FSD has the ability to learn faster than humans. I don't think humans do shadow mode.
Did Tesla accelerate learning here in a big way? I believe this was recent tech that resulted in that huge improvement.
With NoA in this last update... yes?
2020: starting to get it.
 
I already know what they're going to hit. I'm waiting for them to notice it.

Once they notice that set of problems, it will take at least two years for them to resolve it (and that's optimistic). They haven't noticed it yet, therefore it will take more than two years.

Well, by all means don't hold us in suspense. Share with us this bit of omniscience of which you speak.

Dan
 
It would be helpful if you could give some examples.
Under what conditions you can pass on the right is not high-level at all.

Here's a quick one: in Georgia, bicycles "shall ride as near to the right side of the roadway as practicable". Now this is for bicycles, but still: it's a law about vehicles (which bicycles legally are). What is "practicable"? How would you define it in software?

Or a good parallel example is what happened with Ethereum: the goal was to develop "smart contracts", so that agreements could be stated in code and legal and contractual disputes could be resolved faster and more fairly.

What happened with their flagship project? It got hacked and millions of dollars were stolen. They were then left in the position of "do we fork the blockchain and undo the hack, thereby reintroducing human judgment into our perfect-code utopia and undermining the project?" or "do we let the hack stand, lose tens of millions of dollars, and undermine our entire project by burning all of our initial investors?"

It's really hard to write software that covers every case, even with a simple contract as a template. Most laws would make really bad computer programs.
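
To make the point concrete, here's a deliberately naive Python sketch of encoding "as near to the right side as practicable"; every constant in it is an invented assumption, which is exactly the problem:

```python
def practicable_right_offset_m(lane_width_m, debris_ahead, in_door_zone,
                               pothole_depth_cm, shoulder_rideable):
    """Naive attempt to turn Georgia's 'as practicable' language into code.
    The statute supplies none of these thresholds; all are made up."""
    offset = 0.5  # default distance from the right edge -- why 0.5 m? No law says.
    if debris_ahead:
        offset = max(offset, 1.0)   # swerve margin: arbitrary
    if in_door_zone:
        offset = max(offset, 1.2)   # typical open car door: arbitrary
    if pothole_depth_cm > 5:        # is a 4 cm pothole "practicable" to ride over?
        offset = max(offset, 0.8)
    if not shoulder_rideable:
        offset = max(offset, 0.6)
    # "practicable" has quietly become five hand-picked numbers
    return min(offset, lane_width_m)
```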
 
  • Like
Reactions: aubreymcfato
This is idiocy. The guys who are in the middle of this have said they're close, less than two years away. You, from your perch of ignorance, declaim that they're a bunch of deluded liars. I know who I'm going to believe.

You're the guy who says that the lilies that double their coverage every day are nowhere close to covering the pond when their coverage is at 1/32. Yes, it's hard to see that full coverage is only 5 days off. But that's what exponential progress is like.

Now, it's possible that their progress won't be exponential. Maybe they don't actually have a clear path to feature complete, and maybe their belief that they can bang out the 9's by brute force in a reasonable time is just wrong. But you certainly haven't shown any indication that you have any reason for believing this.
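
The lily arithmetic, spelled out (coverage doubles daily):

```latex
c(t) = c_0 \cdot 2^{t}, \qquad
c_0 = \tfrac{1}{32} = 2^{-5}
\;\Rightarrow\;
c(t) = 1 \text{ when } t = 5 \text{ days.}
```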

The counterargument to the two-year time frame is that Tesla is underestimating the edge cases their NNs have to learn, as well as potential interference between two different edge cases.

Also, the AI has not demonstrated the ability to apply things learned from previous edge cases to new, unknown edge cases, so most of the solving still needs to be initiated by humans.

The only automated learning part is distance estimation, which learns from automatic comparison with radar. When you have something like this set up, the learning is exponential. But until you can set up the same for all edge cases, a human still needs to press a button, forcing the learning to be linear.
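
A minimal sketch of that radar-supervised setup (hypothetical names; Tesla hasn't published its pipeline). The radar's range measurement acts as a free training label for the vision network's distance estimate, so no human needs to press a button:

```python
def auto_label_depth(camera_frames, radar_ranges, vision_depth_estimate,
                     error_threshold_m=2.0):
    """Self-supervised labeling: radar range is treated as ground truth for
    the vision net's distance estimate, and frames where the two disagree
    become training examples. Names and the threshold are illustrative."""
    training_set = []
    for frame, radar_m in zip(camera_frames, radar_ranges):
        vision_m = vision_depth_estimate(frame)  # hypothetical callable
        if abs(vision_m - radar_m) > error_threshold_m:
            # disagreement found: radar supplies the label automatically
            training_set.append((frame, radar_m))
    return training_set
```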

Two-year FSD is probably possible in a good environment somewhere in the USA, so what Elon said can potentially be true, at least from the look of it. Geofenced suburban driving is already solved; that part should be packaged and shipped out as feature complete.

Now, Vietnam. If they want to demonstrate and shut down the argument once and for all, have a Model 3 navigate through Saigon. I guarantee you the stock price will shoot up the next day. The Saigon test should be the mother of all tests for every aspiring FSD AI.
 
Yes, you can write that rule for your car, but then you have to write another rule for the car that doesn't. There are just too many individual cases to write rules for, many of which aren't covered by any law (e.g. there is no law against a deer jumping out in front of the car--but it happens frequently).

No, you simply put all the laws into handwritten rules. Detecting which state you are in is trivial. A deer jumping in front of the car has nothing to do with this, because it's not part of the law.

You need to distinguish several parts of the system: the perception engine, which understands the surroundings (recognizes a deer and notes its movement); the prediction engine, which predicts the possible movement of the moving agents the perception engine identified (notes the deer may end up in your way); and the planning engine, which creates multiple action plans and picks one. The first two are purely NN-based; the third is a mixed bag. Knowing the deer may be in your way, and therefore stopping, is most probably handled by the NN.

The manually specified rules (coming from road laws) go into the third part: after multiple candidate actions are produced, right before one is chosen, the rules adjust the risk factor of the different actions.
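
As a hedged sketch of that third stage (all structure invented for illustration, not Tesla's design), handwritten legal rules reweight the NN planner's candidate actions rather than touching perception or prediction:

```python
def choose_action(candidate_plans, nn_risk_scores, state_code):
    """Pick the lowest-risk plan after handwritten rules adjust each plan's
    risk. candidate_plans: dicts describing maneuvers from the NN planner;
    nn_risk_scores: the NN's risk estimate per plan. Illustrative only."""
    RIGHT_PASS_DISALLOWED = {"NY", "NJ"}  # placeholder set, not legal advice

    def legal_penalty(plan):
        penalty = 0.0
        if plan.get("passes_on_right") and state_code in RIGHT_PASS_DISALLOWED:
            penalty += 10.0  # strongly discourage rather than hard-forbid
        if plan.get("exceeds_speed_limit"):
            penalty += 5.0
        return penalty

    scored = [(risk + legal_penalty(plan), plan)
              for plan, risk in zip(candidate_plans, nn_risk_scores)]
    return min(scored, key=lambda pair: pair[0])[1]
```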
 
What is it that you don't understand about exponential vs. linear progress?

The march of nines is the opposite of exponential progress (when measured in terms of time). The largest, most obvious progress happens at the beginning and then it takes longer and longer to get that next 9.
 
If that is so difficult, then the example from the Karpathy presentation does not work at all. Karpathy specifically said they push rules to the cars, and the cars upload clips once those rules trigger. One of the examples was to look out for cars moving from the right lane into your lane.

I am afraid you are overcomplicating things. Your rule does not deal with raw reality; it works on an environmental model produced by the NN.

Karpathy indicated that they have good ways of telling unusual circumstances and specific scenarios apart from general data on the car. He didn’t go into specifics on that (likely highly trade secret), so we’ll have to take him at his word on it*.

At any rate, I think you’re misunderstanding my point. It’s the network that establishes that a given car is on the right, and right now they can do that because all the net does is classification. Once you get to the ultimate point of images in -> car controls out, all the information about which car is on the left or the right or wherever is distributed somewhere in the millions of activations in the middle of the network, in a way no human is likely to be able to sort out. And even if you could sort it out, you’d need to change intermediate activations somewhere in the middle of the network to ultimately affect controls.

I don’t think this problem is solvable by traditional software development. For better or worse, it’ll take (smart use of) data to solve. One example, off the top of my head, is to have GPS location info be an input and let the network learn that it should only pass on the right when the GPS location is within defined areas.

*FWIW, I would guess that “unusual circumstances” could be either a specific network output trained on “weird things I haven’t seen before” or low confidence scores for existing known circumstances.
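
A minimal sketch of the second guess in that footnote, assuming the perception net exposes per-class scores; a frame counts as "unusual" when no known class is confident:

```python
import numpy as np

def flag_unusual(class_logits, confidence_floor=0.6):
    """Return True when the classifier's best guess is weak, marking the
    frame as an 'unusual circumstance' worth uploading. The 0.6 floor is
    an illustrative assumption, not a known Tesla parameter."""
    probs = np.exp(class_logits - class_logits.max())
    probs /= probs.sum()                    # softmax over known classes
    return probs.max() < confidence_floor   # nothing is confident -> unusual
```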
 
Is that a trick question? Linear is easy to extrapolate, while exponential is difficult unless you know where you are on the curve. Elon forever believes the curve is just going to keep getting steeper, but he does call out the difficulty of making these predictions.

End of 2020? Elon time, maybe. Unlike others, I remain an FSD skeptic. NoA seems to drive better where there is a higher concentration of cars providing data. (I'm shocked.) But skeptic or not, it is very ostrich-like behavior to deny the progress, and Tesla is still the only company that can make bank off not-quite-there FSD.

If Elon is right and they get their first regulatory approval in 2020, good for them. But if it takes until 2025, they will still have had a market benefit from selling cars with AP that can practically drive themselves -- and no one else even has that on their six-year horizon. If, in the worst case, regulatory-approved FSD never happens (which I don't believe will be the case), then Tesla will still have decades of benefit from selling vehicles with AP that can practically drive themselves.

I watched a presentation by... I forget, I think Toyota's head of autonomy... where he said L5 was pie-in-the-sky, only possible at some unforeseeable point in the future. He followed with a simplistic argument that L4 required L5, ruling out L4 as a possibility, and then attacked L2/L3 as inherently dangerous. The takeaway was that L5 was the only possible approach and that it would take decades of liberally funded R&D. The whole thing reminds me of how (some) academics work the grant system: it is far better to show "progress" than "results", because you can keep doing essentially nothing and still get your government pork. In this case he is selling management on the need to pay him and his team indefinitely.

My point is this: it doesn't really matter where you fall on the FSD belief spectrum -- Tesla has an inherent advantage that will continue into the foreseeable future.

Lex Fridman showed that L2/L3 with Tesla is safer than manual driving, which debunks the Toyota guy. Yes, Tesla has an inherent advantage.
 
And there's no real need to resolve once-in-a-lifetime cases, like the clustertruck case shown in Andrej's slides. FSD inherently must be robust enough to handle such cases without explicit training.
Cluster truck, I like it (and those are more common than you might think).
As Karpathy said, there are layers of recognition; as long as it detects something as an object / non-drivable space, you're good.

I did check just now, and California allows passing on the right on expressways. Huh. Many states don't. The driving policy engine needs to know what state it's in...
Those in the right lane are not beholden to the speed of those in the left lane in any state.
Changing lanes to the right in order to pass can present a legal problem; going at traffic speed in the right lane does not...

So, pretty easy stuff
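
A hedged sketch of that state awareness, assuming the car can resolve its GPS fix to a state code (the table below is a placeholder, not researched law):

```python
# Hypothetical per-state policy table; entries are placeholders, not law.
PASS_ON_RIGHT_ON_EXPRESSWAY = {
    "CA": True,   # per the claim above
}

def may_pass_on_right(state_code, on_expressway):
    """Conservative lookup: default to 'not allowed' for any state
    missing from the table."""
    return on_expressway and PASS_ON_RIGHT_ON_EXPRESSWAY.get(state_code, False)
```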
That was the demo video route. The test rides went other ways, based on reports.
Could not record during @Tesla FSD demo. But it was amazing. Model 3 handled stop signs, stop lights and complex traffic situations without any human interaction during our 15 minute test drive. The pace of progress is truly incredible. Congrats @elonmusk and @karpathy!

No NDA. I did a demo ride, fully autonomous drive. It was really really good. Parking lot to city streets, to freeway, to city streets back to parking lot, ~10 mile drive, 100% autonomous. Tesla is light years ahead.

The march of nines is the opposite of exponential progress (when measured in terms of time). The largest, most obvious progress happens at the beginning and then it takes longer and longer to get that next 9.

It's a decaying exponential ;)
Growing fleet * diminishing returns = linear improvement?
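
One way to formalize that exchange (my framing, not from the thread): write reliability as a count of nines, n(t) = -log10 of the error rate.

```latex
\varepsilon(t) = \varepsilon_0 e^{-\lambda t}
\;\Rightarrow\;
n(t) = -\log_{10}\varepsilon(t) = n_0 + \frac{\lambda t}{\ln 10}
```

So a decaying-exponential error rate is the same claim as a linear march of nines. If instead each extra nine needs roughly 10x the data and the fleet collects data at rate R(t), the n-th nine arrives only once cumulative data reaches about D * 10^n, so the nines keep arriving on schedule only if R(t) itself grows exponentially; a merely linearly growing fleet softens, but does not remove, the slowdown.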
 
Karpathy indicated that they have good ways of telling unusual circumstances and specific scenarios apart from general data on the car. He didn’t go into specifics on that (likely highly trade secret), so we’ll have to take him at his word on it*.

At any rate, I think you’re misunderstanding my point. It’s the network that establishes that a given car is on the right, and right now they can do that because all the net does is classification. Once you get to the ultimate point of images in -> car controls out, all the information about which car is on the left or the right or wherever is distributed somewhere in the millions of activations in the middle of the network, in a way no human is likely to be able to sort out. And even if you could sort it out, you’d need to change intermediate activations somewhere in the middle of the network to ultimately affect controls.

I don’t think this problem is solvable by traditional software development. For better or worse, it’ll take (smart use of) data to solve. One example, off the top of my head, is to have GPS location info be an input and let the network learn that it should only pass on the right when the GPS location is within defined areas.

*FWIW, I would guess that “unusual circumstances” could be either a specific network output trained on “weird things I haven’t seen before” or low confidence scores for existing known circumstances.
I disagree.

Nobody uses a single NN for everything; it's always a combination of multiple NNs. Most definitely the perception engine is separated from the action planner.

Your rules work on the result of the perception engine, and the perception engine knows left and right.

Edit: and no, the NN does not only do classification. Karpathy's presentation mentioned it does prediction too.
 
Breathe man, breathe.

They find an edge case, define it by things already known (car and bike in the same spot, tunnel), set a trigger in the fleet to collect more of that, and add it to the training set. Rinse, repeat.

They accelerate the solution by having the largest (and growing) number of data collectors in the wild, which they can configure.
Plus they can probably trigger on things like "give me something which is between 50-80% likely to be a bike and a car overlapping" versus "give me stuff that is exactly this", since the way NNs work is that you get probabilities, and if you're looking for edge cases you're probably more interested in what might be than in what it can already identify (if it is already highly certain to be what you're looking for, you don't necessarily need that particular image, other than for validation).
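
A minimal sketch of that probability-band trigger (invented field names; Tesla's actual trigger language isn't public):

```python
def edge_case_trigger(detections, lo=0.50, hi=0.80):
    """Fire when the net suspects an overlapping bike and car but isn't
    sure -- the 50-80% band described above. detections: (label, prob)
    pairs for one frame. Confident hits and clear misses are both skipped."""
    bike_p = max((p for label, p in detections if label == "bicycle"), default=0.0)
    car_p = max((p for label, p in detections if label == "car"), default=0.0)
    suspicion = min(bike_p, car_p)  # both must be plausibly present
    return lo <= suspicion <= hi
```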
 
I disagree.

Nobody uses a single NN for everything; it's always a combination of multiple NNs. Most definitely the perception engine is separated from the action planner.

Your rules work on the result of the perception engine, and the perception engine knows left and right.

Perhaps nobody does (this isn’t really true, but we’ll go with it), but Karpathy certainly seems to think that’s the ultimate end state. He’s stated as much before, and Elon reiterated it during the presentation.
 
I don't know if anyone saw my note last night, but if Google were to give every car owner (on earth, or in all new vehicles, or in a particular state...) a free camera with a cell connection, could they get their missing data and catch up to Tesla?
How else could someone level the data field without a car that includes data gathering?
The point that older Model S's still contribute to learning is a hint. Seems it doesn't need an NN?
A simple answer might be this:
there are well over 400 different types of vehicles sold in the US listed on Kelley Blue Book. They all drive differently, and the sensors would be randomly placed -- a great way to get total gibberish.

Tesla has around 450,000 fairly identical vehicles that behave fairly identically, with a few dozen sensors each, all placed nearly identically, reporting with not a lot of +/- on placement of readouts.
(As in target shooting: you were very precise, all shots close together, but the target was _over there_.)
This removes a lot of variables.
 
Perhaps nobody does (this isn’t really true, but we’ll go with it), but Karpathy certainly seems to think that’s the ultimate end state. He’s stated as much before, and Elon reiterated it during the presentation.
Disagree. Karpathy said he hopes they can let NNs handle everything; it does not have to be a single NN. For example, the debug view you see on the screen is the output of the perception engine. You lose that if you insist on using a single NN for everything.
 
Yeah, I know the social value of toning that down. It gets hard to do when I know more than /almost/ everyone about /almost/ everything, particularly when I'm listening to BS.

Just don't forget that Elon said there will be a transition period in which you will need to sit in the driver's seat. So it will be more like car sharing than robotaxis: cars will probably be parked in specific places and will be able to be summoned maybe for a mile or two, and then the passenger is effectively a safety driver. Pick-up places will probably also not be every spot you choose. He said removing the steering wheel is more like 3 years away, so you can estimate 5-6 years.

So being picked up after a night out and driven home, or reading a book while driving home from work, is not what he said will be ready next year. I wish he had said it more clearly.
 
The division, as I see it: classification has to be human-labeled, while controls/path planning must be automatically labeled based on outcome. Human labeling of the latter would be incredibly dangerous and error-prone.

Currently, from what I can get from the presentation, they're using a combination of manual path planning (which doesn't scale) and "copying the average driver" (who isn't good enough at it). They'll have to do at least one more iteration on their entire path-planning scheme (toss out what they've got and try again with a slightly different approach). Oh, they will do that, but it'll take time...
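
A hedged sketch of the "automatically labeled based on outcome" half (hypothetical field names; the presentation gave no code): recorded human driving becomes the planner's target, filtered by outcome so bad drives aren't imitated:

```python
def label_paths_from_logs(drive_logs):
    """Behavior-cloning style auto-labeling: the human's recorded trajectory
    is the target output for the path planner, kept only for drives with a
    clean outcome. All field names are invented for illustration."""
    examples = []
    for log in drive_logs:
        bad_outcome = (log["collision"] or log["disengaged"]
                       or log["max_decel_g"] > 0.5)  # arbitrary cutoff
        if bad_outcome:
            continue  # don't learn from drives that went wrong
        for snapshot in log["snapshots"]:
            # perception-engine scene in, human-driven path out
            examples.append((snapshot["scene"], snapshot["driver_path"]))
    return examples
```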
 
The march of nines is the opposite of exponential progress (when measured in terms of time). The largest, most obvious progress happens at the beginning and then it takes longer and longer to get that next 9.

Solvitur ambulando

Well, close enough for this droll disquisition and a shout out to Diogenes of Sinope. ;)

 