Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

I'm not saying they stitched together the video. Almost two years ago Tesla released a similar video with the Model X, and as far as we know, that whole demonstration was "fake," using a completely different neural network and approach.

How do we know that this video is more legitimate than the one released previously?

Tesla didn't allow people to film the demo drives, probably because the drives required several takeovers, since the system isn't complete yet.

Multiple reports so far from people who DID experience the demo say the car did the drive flawlessly and without intervention. They are mentioned earlier in this thread.
 
Apparently the CA regulations don't require the reporting of disengagements as long as there's constant driver supervision. This is how Tesla was able to beat the competition without reporting disengagements and FSD miles driven.
Exactly... As long as Tesla classifies it as a level 2 system, they don't have to report any disengagement data.
 
That does appear to be an Acura TL of the 04-08 vintage with that long side indent along the door handles.

View attachment 399672


That's a Bimmer look.
 
Apparently the CA regulations don't require the reporting of disengagements as long as there's constant driver supervision. This is how Tesla was able to beat the competition without reporting disengagements and FSD miles driven.

I understand, but that "constant driver supervision" is the nag. If they turn the nag off, which it is in this video and the other drives today, I think they have to report the disengagements. (Otherwise they would have never reported them back in 2016 when they made the original FSD video.)
 
Multiple reports so far from people who DID experience the demo say the car did the drive flawlessly and without intervention. They are mentioned earlier in this thread.

Yep if this is true I will adjust my FSD skepticism appropriately.

If the routes were chosen at random by the investors, then even if they had previously been driven by Tesla vehicles and the networks heavily trained on the area, it's highly unlikely they would have gone without a hiccup in the dynamic environment that is everyday driving. That suggests it's easily scalable. Very interesting.
 
I think Tesla is currently in the lead. I'm more of a skeptic on the timeline, although I think there might need to be major advances (mostly on the software side) that would cement their lead. Karpathy's part of the presentation was the most convincing to me, and I think that's where most of the work needs to be done.

I also think there could be some unforeseen negatives that would hurt whoever nears #1 first but doesn't actually get there. Things like accidents that the media sensationalizes.

You might wind up being right on the timeline, meaning on the right side of the exponential, which would be the wrong side as an investor.

I'm more optimistic on the timeline.

But regardless, I agree that Tesla is in the lead. I'd rather be early than late on this one, and I'm willing to sit tight for an extra year or three if need be.
 
Something about the visualization makes me think they're gaming it and using some maps (obviously they're gaming it in some way, since FSD isn't ready). We got a similar video ~2 years ago with the Model X:

If you want to play Devil's Advocate, you could say that the route was cherrypicked. That is, they selected a route that they had tested extensively and knew would work. Going one step further, it's possible that they overfit training some or all of the NNs to accommodate this route.

I don't mind suggesting either of these possibilities publicly, because someone else is bound to come up with them.

Personally I don't think it would really matter. More important to me is the technical architecture and roadmap, which appear to be solid.
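If the route really were overfit, the telltale signature would be a gap between performance on the rehearsed route and on unseen roads. Here's a toy sketch of that check, with entirely made-up numbers and a hypothetical intervention-rate metric (none of this is Tesla's actual data or method):

```python
# Hypothetical sketch: one way to sanity-check the "overfit to the
# demo route" theory, if you could score the system per route.
# All numbers below are invented for illustration.

def intervention_rate(events, miles):
    """Interventions per mile; lower is better."""
    return events / miles

def overfit_gap(demo_route, held_out_routes):
    """Compare performance on the rehearsed route vs unseen routes.
    A large positive gap is the classic signature of overfitting."""
    demo = intervention_rate(*demo_route)
    unseen = sum(intervention_rate(e, m) for e, m in held_out_routes) / len(held_out_routes)
    return unseen - demo

# Toy data: (interventions, miles driven)
demo = (0, 12.0)                          # the polished demo loop
unseen = [(2, 10.0), (1, 8.0), (3, 11.0)]  # hypothetical random routes

print(round(overfit_gap(demo, unseen), 3))
```

If the gap were near zero across many held-out routes, the cherrypicking theory would lose most of its force.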
 
@neroden : So far, are you pleased with what is presented?
Working on it; can't finish watching it tonight. Summary: Not impressed so far.

Tesla's still probably ahead of the "competition" but based on the first 1:02 of the presentation, they don't seem to have started working on self-driving. Maybe they describe the self-driving in the second half.

Hardware is a nice NN accelerator, what you'd design if you wanted to speed up the particular processing -- almost everything they did there is very straightforward, but it's interesting that nobody else has done them all in one package. (Backwards-compatibility issues for everyone else, maybe?) Maybe they can sell those chips for a profit, if they really are the first to do this. There are probably clever trade-secret things in the hardware but they didn't reveal any.

Software for image recognition is... well, it's what I'd have done (not that I can do it personally, but the outline is what I would have told my employees to do). It's not rocket science, it just takes a lot of people to do labeling and several highly skilled people to do some very fiddly math stuff (it's a type of math I dislike doing, and most people can't do it well, but it's essentially "technical"). There's probably some cleverness in the NN architectures they're using, but they didn't reveal any of that.

I guess I should reserve judgement until I finish the presentation, but so far I'm not sure they've started working on self-driving at all.
 
From the presentation, it appears what I said before was partially correct. There are no overt human labelers for policy. It learns from what people do *and* the results of those actions. When a person sucks at driving, that provides just as good data on what not to do as great drivers provide on what to do. If the person crashed (or almost crashed, since we know they can identify near misses) or ended up in the wrong lane or... then that becomes a negative case the system should avoid. The system effectively learns to do the best job of any human it's ever seen do it, for every task in all conditions.

EDIT: to add the part I missed was that they are effectively training the system to mimic the actions of the human drivers with the best outcomes in each situation. That should actually mostly solve the “optimally safe and efficient but not actually legal” driving I talked about earlier. It’ll only pick up on illegal but safe behavior humans actually do.
The way deep learning teaches itself to play chess, for example, is that it learns to predict which side will win the game. As it refines its ability to discriminate winning moves from losing moves, it can also make the plays that optimize the probability of winning. So too with watching human driver behavior. It can build up a model that predicts which drivers will soon crash and which ones avoid crashing. Then, when driving the vehicle, the NN chooses the driving decisions and behaviors that minimize the chance of an accident.
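The chess analogy can be sketched in miniature: a model scores candidate behaviors by predicted crash probability, and the planner picks the safest one. This is a toy illustration with invented features and hand-set weights standing in for a trained network; it is not Tesla's actual architecture:

```python
# Toy sketch of "predict who crashes, then drive like the non-crashers."
# Feature names and weights are made-up assumptions for illustration.
import math

# Pretend-learned weights over simple behavior features.
WEIGHTS = {"speed_over_limit": 0.08,      # mph over the limit
           "following_distance_s": -0.9,  # seconds of headway (more = safer)
           "lane_offset_m": 1.5}          # drift from lane center
BIAS = -1.0

def crash_probability(features):
    """Logistic model standing in for a trained crash-prediction network."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def pick_action(candidates):
    """Choose the candidate behavior the model scores as safest."""
    return min(candidates, key=lambda c: crash_probability(c["features"]))

candidates = [
    {"name": "tailgate",
     "features": {"speed_over_limit": 10, "following_distance_s": 0.5, "lane_offset_m": 0.1}},
    {"name": "keep_gap",
     "features": {"speed_over_limit": 0, "following_distance_s": 2.5, "lane_offset_m": 0.1}},
]
print(pick_action(candidates)["name"])  # prints "keep_gap"
```

The real system would learn the scoring function from fleet data rather than hand-set weights, but the selection step — minimize predicted bad outcomes — is the same shape as picking chess moves that maximize predicted wins.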
 
Tesla’s Reliance on ‘Computer Vision’ Adds to Self-Driving Car Challenge | Newswise: News for Journalists

Bart Selman is professor of computer science at Cornell University and an expert on artificial intelligence safety issues:

"It is well-known that current computer vision systems can fail in quite unpredictable ways. Having multiple sensors, ideally including Lidar, are therefore critical."

It is known.
Sounds like that might fall under Clarke's first law:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.​

In 1835 the prominent French philosopher Auguste Comte declared that it would be impossible ever to determine the chemical composition and temperature of stars. His apparent assumption was that one would actually have to travel the great distance to a star and withstand its heat to analyze it. In 1849 it was discovered that these factors could be deduced by analyzing the electromagnetic spectrum of a gaseous body at any distance. In 1864 a spectroscope was attached to a telescope, allowing the determination of the chemical composition and temperature of stars.
 
If you want to play Devil's Advocate, you could say that the route was cherrypicked. That is, they selected a route that they had tested extensively and knew would work. Going one step further, it's possible that they overfit training some or all of the NNs to accommodate this route.

I don't mind suggesting either of these possibilities publicly, because someone else is bound to come up with them.

Personally I don't think it would really matter. More important to me is the technical architecture and roadmap, which appear to be solid.

While I'm at it I'll add another critical idea so that we're ready for it: I suspect the FSD video wasn't recorded today. Maybe yesterday, or Saturday, possibly Friday. Again I don't think it really matters, but someone might notice discrepancies and try to make a big deal out of it. The discrepancies I noticed were the time of day and the traffic on that section of 280N.
 
Multiple reports so far from people who DID experience the demo say the car did the drive flawlessly and without intervention. They are mentioned earlier in this thread.

Gali had 1 intervention during his drive, and another guy had 0 interventions. The test drives were about 10-15 minutes each.

As far as I know, this is the first time Tesla has provided test rides with its FSD software. This is pretty impressive. It shows they have some version of it actually working. It just needs more labeling / training / edge-cases.
 
If you want to play Devil's Advocate, you could say that the route was cherrypicked. That is, they selected a route that they had tested extensively and knew would work. Going one step further, it's possible that they overfit training some or all of the NNs to accommodate this route.

I don't mind suggesting either of these possibilities publicly, because someone else is bound to come up with them.

Personally I don't think it would really matter. More important to me is the technical architecture and roadmap, which appear to be solid.

The FSD internals videos from @verygreen make it pretty clear IMO that a significant part of the training data covers the 3D recognition of surrounding vehicles - which cannot be overfit to a specific route, as the vehicles encountered on the route are effectively a random sample.
 
If you want to play Devil's Advocate, you could say that the route was cherrypicked. That is, they selected a route that they had tested extensively and knew would work. Going one step further, it's possible that they overfit training some or all of the NNs to accommodate this route.

I don't mind suggesting either of these possibilities publicly, because someone else is bound to come up with them.

Personally I don't think it would really matter. More important to me is the technical architecture and roadmap, which appear to be solid.
Possible, but I doubt it. My car does all that freeway stuff already. I did notice some difficulty on a tight turn, maybe outside the lines for a sec, but it still managed fine.
Keep in mind, there are not a lot of city roads there, because I've been there and it's between towns.
Don't know if anyone saw the "usable" driving surface on screen, and the stoplight status indicator just below the speedo on the right side. Pretty cool.
 
The FSD internals videos from @verygreen make it pretty clear IMO that a significant part of the training data covers the 3D recognition of surrounding vehicles - which cannot be overfit to a specific route, as the vehicles encountered on the route are effectively a random sample.

True, and that's why I wrote "some or all of the NNs" when posting as Devil's Advocate. They might overfit recognition for things that don't move. For example lane lines around curves might be dodgy.
 