Tesla Autonomy Day April 22nd

SOFTWARE continued (unfortunately I had to stop at 1:02:56; will get back to it when I get time):

So, yeah, the network starts by guessing what it's seeing, and then it's trained by labelling lots of images to fix the weights. It's only a little more complicated than Markov chaining, for those who've heard of it. This is not fancy math.
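For anyone who hasn't seen this before, the training loop really is that simple. A minimal sketch in PyTorch, with the model, data, and labels all invented for illustration:

```python
# Minimal supervised-training sketch (hypothetical model and data, just to
# illustrate "guess, compare to the label, fix the weights").
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # tiny classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images = torch.randn(8, 3, 32, 32)       # stand-in for camera frames
    labels = torch.randint(0, 10, (8,))      # stand-in for human annotations
    logits = model(images)                   # the network guesses
    loss = loss_fn(logits, labels)           # how wrong was the guess?
    optimizer.zero_grad()
    loss.backward()                          # gradients point at the error
    optimizer.step()                         # fix the weights
```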

Lots of super basic super boring stuff about image identification and neural networks.

Oh my god I'm bored.

"Lane Line markings, Objects, Where they are, Where we can drive, and so on"

Humans go through photos and mark lane lines. Oh my god. Unsurprising, but if they're working on this, oy, my god. Edge detection is something we've had computer algorithms for, for decades... OK, yes, they've got some computer-automated marking. Unfortunately this is where I stopped, at 1:02:56...

They are SO FAR from handling roads without lane lines...

Karpathy makes a false statement at 51:30. He points out, correctly, that when we look at a photo, we use the trees on the left, the sky, etc. as cues and identify the lanes quickly. He points out that the NN "does not know that the trees matter, does not know that the car on the right matters, does not know that the buildings in the back matter, etc."

Then he makes a flat-out false statement: "And you and I know that the truth is that none of those things matter; what matters is that there are a few lane line markers back there and the lane lines curl a little bit".

That's not the truth. The truth is that the trees, the buildings, etc. do actually matter. I've actually seen inaccurate lane lines from a rogue lane-painter that lead straight off the road. Lane lines are a convenience -- people lived without them in the 1940s. You're trying to find the road first and foremost, not the lane lines.

OK, not so bored. The evidence is that, by my standards, they're at a low level of functionality (though still likely better than the "competition"). He's explaining why simulation alone is not practical (you can't come up with all the corner cases / you're grading your own homework). They do use simulations though (unfortunate; bad news).

The first six pictures he describes as "crazy stuff" and "very complicated environments" are relatively easy for humans. I encountered three worse things THIS MORNING.

I think they're beginning to understand the actual nature of the driving problem. I await them understanding it. The fleet data should help.

Hopefully now that there are more recent Teslas in Ithaca their dataset will get better and they'll start realizing just how awful the problem is. ;-)

They're still working on image identification.

----
First point where they may have an advantage:

OK, so they have a way of requesting "get similar images from fleet". This is a very useful way of trying to get data for the edge cases and corner cases. But what's their definition of "similar"? How well does this work? Seems like it would require an entire neural network program to define "similar". Is there some secret sauce here?

"Similar photo" may be easy for a bike on the back of a car, or for tunnels or for animals... but what if you have a corner case which happens repeatedly where the photos don't look "similar" to the neural net which is defining "similar"?


Does an uncovered truck bed with loose objects which might fall out look "similar" to another uncovered truck bed with different loose objects which might fall out?
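My best guess at the standard answer, and it is only a guess: take the activations of some intermediate layer of the vision network as an embedding, and define "similar" as nearest neighbors in that space. A sketch (everything here is hypothetical):

```python
# Embedding-based "find similar images" sketch. This is an assumption about
# how such a system would typically work, not Tesla's actual pipeline:
# images are compared by cosine similarity of feature vectors pulled from
# some layer of the vision network.
import numpy as np

def find_similar(query_vec: np.ndarray, fleet_vecs: np.ndarray, k: int = 5):
    """Return indices of the k fleet embeddings closest to the query."""
    sims = (fleet_vecs @ query_vec) / (
        np.linalg.norm(fleet_vecs, axis=1) * np.linalg.norm(query_vec))
    return np.argsort(-sims)[:k]

# Hypothetical usage: 10,000 fleet images with 512-dim embeddings.
fleet = np.random.randn(10_000, 512)
query = np.random.randn(512)
print(find_similar(query, fleet))
```

Which makes the truck-bed question concrete: two uncovered truck beds are "similar" only if whatever layer produces the embedding happens to put them close together.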

"Animals, of course, also a very rare occurance, event" (no, pretty common really) -- glad they're working on it.

Mechanisms for detecting inaccuracies: driver intervention and "neural network is uncertain". Good, but not good enough. This will miss cases where 90% of drivers blithely agree with the neural network and both are wrong.
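For what "neural network is uncertain" likely means, a common proxy (my assumption; the talk gave no details) is the entropy of the softmax output:

```python
# "Network is uncertain" trigger sketch, using softmax entropy as the
# uncertainty proxy (my assumption; the talk gave no details).
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    e = np.exp(logits - logits.max())
    return e / e.sum()

def is_uncertain(logits: np.ndarray, threshold: float = 1.0) -> bool:
    """Flag a frame for upload when prediction entropy is high."""
    p = softmax(logits)
    entropy = -np.sum(p * np.log(p + 1e-12))
    return bool(entropy > threshold)

print(is_uncertain(np.array([5.0, 0.1, 0.1])))   # confident -> False
print(is_uncertain(np.array([1.0, 0.9, 1.1])))   # ambivalent -> True
```

Note that a confidently wrong network trips neither trigger, which is exactly my complaint.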

.... I'm going to finish this, so I may be wrong; they may reveal something big in the last half. But so far it's confirming my belief that they haven't really started on the self-driving problem yet. However, it seems like nobody else is anywhere near them on the prerequisites, and they're starting to get an understanding of the nature of the problem... just starting. Maybe they'll really start working on self-driving two years from now. That would still put them vastly in the lead.
 
"The core problems these networks are solving in the car is image recognition." OK, so the neural networks are just doing image recognition. This is confirming my belief that they have NOT STARTED WORKING ON SELF DRIVING.

It’s a bit more than just image recognition; I would call it perception. The image tagging isn’t just nouns; some of it is propensity for movement too. So the NN knows that a telephone pole isn’t going to move, whereas a pedestrian could do all sorts of crazy *sugar*.
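A toy encoding of what that tagging amounts to (my own illustration, certainly not Tesla's schema):

```python
# Toy "propensity for movement" priors per object class. The classes,
# fields, and numbers are all invented for illustration.
from dataclasses import dataclass

@dataclass
class ObjectPrior:
    label: str
    can_move: bool
    max_plausible_speed_mps: float  # bound used when predicting motion

PRIORS = {
    "telephone_pole": ObjectPrior("telephone_pole", False, 0.0),
    "parked_car":     ObjectPrior("parked_car", True, 2.0),  # might pull out
    "pedestrian":     ObjectPrior("pedestrian", True, 4.0),  # unpredictable
}

print(PRIORS["pedestrian"].can_move)  # True: budget for the crazy *sugar*
```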

But you are right in the sense that the actual driving isn’t being done by the NN at all. That’s all hard-coded (they soften it by calling it heuristics, which is a fancy word for algorithms, which is just regular computer code). But they are indeed working on pushing the driving decisions down into the NN as well. That’s their future R&D once they get past feature-complete this year.

Can they do FSD without the NN driving? Who knows? They’ve done more than I thought they would already with just a perception NN. If it won’t be level 5, then their next system might be. Personally, I’ll be satisfied when their NN can be taught actual rules of the road. That appears to still be a hard research problem (merging declarative information into an NN).
 
  • Like
Reactions: Ulmo
I thought the EyeQ3 that Tesla used in AP1 was a hardware image recognition chip, but not NN based. They started doing NN in the EyeQ4 or EyeQ5...

EyeQ3 has Vector Microcode Processors, which I assume allow parallel execution of operations like multiply/add, as Tesla’s chip does.

However, when I read this: https://www.eetindia.co.in/news/article/a-peek-inside-mobileyes-eyeq5-part-2

It seems that the Mobileye chips are much more general-purpose compute and much less NN-specific than I thought. They aren’t even thinking of doing NN driving in 2020; the driving part is still handled via traditional software. And their 2020 chip has much lower TOPS specs than Tesla’s.

So, I take it back, Tesla has a unique chip today, and will continue to be in the forefront.
 
  • Informative
  • Like
Reactions: neroden and MP3Mike
A couple of things I caught: he stated that nobody currently has a car that can match the 2012 Model S... 7 years later!

Also, there have been zero accidents while using NOA (something like 100,000 lane changes a day?)
 
I watched the whole presentation. Here are some key takeaways:

  • Dojo chip to do video based training
  • Next gen FSD chip also about 1 year into development, probably at a process < 14nm
  • Karpathy discussed the advantages of having Tesla’s fleet. Tesla is the only company able to pursue this strategy, because only its fleet can generate the corner cases that add more varied data for training
  • Path planning by using good drivers’ paths
  • The Waymos of the world rely on simulation for validation, but they can’t generate all the strange corner cases that a fleet can
  • Tesla has a very good simulator, but simulation is limited by creativity. It’s also like passing a test you created; you will always pass. Also, you cannot accurately model all the agents in the scene, hence fleet data is paramount
  • They can backtest against prior interventions
  • They can test in shadow mode as well (a sketch of both ideas follows this list)
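The last two bullets reduce to the same comparison: run the candidate software on logged data and count where it disagrees with what the human actually did. A sketch, with every field name being my own guess:

```python
# Shadow-mode / backtest sketch (field names are guesses, not Tesla's):
# run a candidate planner over logged frames and measure how often it
# disagrees with the human driver's recorded action.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LoggedFrame:
    sensor_snapshot: dict        # whatever the perception stack saw
    human_steering_deg: float    # what the driver actually did

def shadow_eval(frames: List[LoggedFrame],
                planner: Callable[[dict], float],
                tolerance_deg: float = 5.0) -> float:
    """Fraction of frames where the planner disagrees with the human."""
    disagreements = sum(
        abs(planner(f.sensor_snapshot) - f.human_steering_deg) > tolerance_deg
        for f in frames)
    return disagreements / max(len(frames), 1)

# Hypothetical usage: a planner that always steers straight.
frames = [LoggedFrame({}, 0.0), LoggedFrame({}, 12.0)]
print(shadow_eval(frames, planner=lambda snapshot: 0.0))  # -> 0.5
```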
 
Elon killed two things today: all hopes for AP1, with his “buying a car without FSD today is like buying a horse” statement, and somehow amphibians, with a statement so profound I am truly unable to understand what he meant. R.I.P. amphibians...
Probably playing to the popular misunderstanding that dinosaurs were reptiles and not birds.
 
Lane lines are a convenience -- people lived without them in the 1940s. You're trying to find the road first and foremost, not the lane lines.

I'm not so sure. Try driving an interstate (or any road with multiple lanes) with no lane lines. They are more than a "convenience"; they have become essential. There's a reason they were added, and why they reflect headlights: without them, even humans have a hard time reading the road, especially in poor weather or at night.

OTOH adherence to lanes seems entirely optional in some countries!
 
  • Like
Reactions: GeoX750
SOFTWARE III:

Oh, ****, they're doing path prediction based on where random incompetent idiots drive.

Most people do NOT follow safe paths on turns. This is an unsafe method. It will do OK on roads where humans are quite good, and will be appalling on roads where humans routinely do badly. It WILL repeat common errors. At certain turns and intersections, people routinely cut into the wrong lane. You want to copy only a tiny minority of the drivers.

"You might not want to annotate all the drivers; you might want just imitate the better drivers." He claims they have ways to do this, but there are no details. Hope so. Hope they have people evaluating this. Don't think they do.

Points out that they are doing 3-D modeling. That's good.

Onwards to depth perception. Long, rather dull section -- this mostly seems to be an anti-LIDAR lecture; it's all basic stuff. Musk claims he might shoot lasers out of his eyes. Again, this is all computer-vision material, not really self-driving research. "Slightly more sparse and approximate" use of triangulation to generate 3D. (So they are doing stereo- and motion-based depth perception.) They use the forward radar to annotate depth perception (obvious; I would guess they use the ultrasonics too). Also uses temporal consistency (no "object jumps forward and backward"). More anti-LIDAR discussion -- yes, I agree.
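The temporal-consistency check is easy to picture; my formulation of "no object jumps forward and backward" would be to reject any range reading that implies an impossible closing speed:

```python
# Temporal-consistency sketch for depth estimates (my formulation, not
# Tesla's): reject a new range reading if it implies an implausible
# closing speed between frames.
def plausible(prev_range_m: float, new_range_m: float,
              dt_s: float, max_speed_mps: float = 60.0) -> bool:
    return abs(new_range_m - prev_range_m) / dt_s <= max_speed_mps

print(plausible(40.0, 39.0, 0.033))  # ~30 m/s closing: plausible -> True
print(plausible(40.0, 25.0, 0.033))  # ~450 m/s jump: reject -> False
```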

Nothing on sound detection, which is necessary for level 4/5. They are not developing level 4/5; they haven't started. They don't realize that they haven't started.

Should be a very nice level 2/3 system, though.


Karpathy's conclusion is correct: it is all about the long tail. They haven't figured out what that means yet, though. They DO have the absolute best source of data to learn how to build self-driving systems, and are probably far in the lead of anyone else. This allows them to start on any given problem much faster than anyone else; everyone else has a data-collection problem they have to solve before starting on self-driving work.

But they aren't ANYWHERE close to level 4; they're going to have to solve several major problems which they haven't even recognized as being problems yet. Now, anyone else who recognizes these problems won't be able to start working on them without collecting the data first, so Tesla is far ahead of everyone else, but level 4 is fantasy right now.

Questions:

There are implications that they're currently using classic non-NN heuristics for driving policy decisions; Karpathy says they'll switch to NNs. Musk discusses the fact that you have to deliberately drive somewhat unsafely in order to change lanes on an LA freeway (which means the choice HAS to be handed to the driver -- the driver has to make a voluntary choice to drive unsafely). Basically they just admitted they will never have a fully self-driving car on an LA freeway shared with human drivers.

TLDR: Tesla has by far the best architecture for computer vision and top-grade driver assist features. However, Tesla is not working on level 5 self-driving.

Financial conclusion: The excellent driver assist features will be a major selling point which people will pay a big premium for. But unmanned robotaxis are a fantasy for years to come, so don't include them in your financial analysis.
 
  • Disagree
Reactions: dhanson865
SOFTWARE IV:

Musk is asked if he's considered using his fleet of supercomputers for something else. He says they've been super-focused on self-driving, but mentions AWS as a possible future ("after" they've solved self-driving -- which won't happen).

A comment:

Sounds like they need one more winter to nail extreme winter driving.
No.

There are at least four types of extreme winter driving:
(1) ice storms
(2) whiteouts
(3) super cold
(4) super high snow drifts

Essentially four different problems. I don't think they've even figured that out yet (damn Californians).
----

They haven't started working on NORMAL snow driving, as Karpathy admits.

Musk is going on about driveable space. This is too simplistic. They don't understand the problems yet.

An aside: I classify space when driving into the following FOUR spaces:
1 -- preferred driveable space -- no damage likely
2 -- questionable driveable space -- shoulders, dirt, grass, potholes -- car has higher chance of damage, but better than crashing
3 -- "going into the ditch" space -- car likely to be damaged, trip likely to end, but people should be fine
4 -- "running off the cliff" or "crashing into someone else" space -- people are likely to die

As a defensive driver, there's also
5 -- "space which is likely to become unsafe soon", such as the opposing lane.

If they've only got one concept of driveable space, they're not even close. I was forced to go into type 2 space while very carefully avoiding type 3 space yesterday morning. I have also been forced to go into type 3 space to avoid type 4.
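If I were encoding this, it would be an ordered cost so a planner can trade a type-2 excursion against a type-3 or type-4 outcome. My own scheme, obviously not Tesla's:

```python
# My five-way driveable-space taxonomy as an ordered cost. Entirely my own
# scheme for illustration; pick the reachable space with the lowest cost.
from enum import IntEnum

class Space(IntEnum):
    PREFERRED = 1      # no damage likely
    QUESTIONABLE = 2   # shoulder/dirt/grass/potholes
    DITCH = 3          # car damaged, trip over, people fine
    LETHAL = 4         # cliff / head-on
    SOON_UNSAFE = 5    # e.g. opposing lane: fine now, not for long

# Rank SOON_UNSAFE between QUESTIONABLE and DITCH.
COST = {Space.PREFERRED: 0, Space.QUESTIONABLE: 1,
        Space.SOON_UNSAFE: 2, Space.DITCH: 3, Space.LETHAL: 4}

def pick(reachable):
    return min(reachable, key=COST.__getitem__)

# Yesterday morning's situation: preferred space blocked, so choose type 2
# over type 3.
print(pick([Space.QUESTIONABLE, Space.DITCH]))  # Space.QUESTIONABLE
```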

Karpathy is saying that basically the NN can be trained if there are enough people traversing the environment -- this is true. But will they ever get enough data from people who are getting road location cues from the location of mailboxes on roads completely covered in snow? I think they have the right architecture to do it, though....

Karpathy explains that there's an explicit controller at the moment using heuristics (no NN yet) for driving policy. They're going to switch to NN but they haven't started.

Musk carefully redefines feature-complete to NOT require "don't pay attention" driving. That is, they can book FSD revenue while you still have to pay attention and keep your hands on the wheel. He thinks this step will be done this year -- that's wrong, but "add six months" will probably be right, so maybe mid-2020 to book FSD revenue.

Musk defines "you can drive without paying attention" as the next stage. He idiotically claims that he'll get "don't pay attention" by 2Q next year, and he's delusional. Zero chance of that. Zero -- they haven't noticed most of the hard problems yet, let alone starting to work on them.

He defines convincing regulators as the third stage and expects end of next year, which will also not happen. Zero chance.

A questioner points out that merging lanes aren't working yet.

OK, so Karpathy was all about vision... Stewart is up next... maybe this will be about self-driving?
 
  • Helpful
Reactions: Ulmo
SOFTWARE V:
Stewart worked at Facebook applying machine learning to newsfeed ranking, ad serving, etc. Well, that was a disaster, so I'm not impressed by his resume. Then he did something similar at Snap.

Anyway, what does he do at Tesla? Oh dear god, is this just a software development practices lecture? He keeps talking about the procedure for developing the stuff.

More basics that we already know. Dull as dirt.
-----
OK, getting interesting. Each NN makes a few specific predictions (or interpretations of the world). The information is integrated -- they have specific pedestrian detection, but also generic-obstacle detection, and those ought to be compatible, and can perhaps predict additional things.
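That compatibility requirement implies cross-checks between heads. The obvious one (my construction, not something they showed) is that every pedestrian box should overlap some generic-obstacle box:

```python
# Cross-head consistency sketch (my construction): every box from the
# pedestrian-specific detector should overlap some box from the generic
# obstacle detector; a miss on either side is worth flagging.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def unmatched_pedestrians(ped_boxes, obstacle_boxes, thresh=0.3):
    """Pedestrian detections no generic obstacle accounts for."""
    return [p for p in ped_boxes
            if all(iou(p, o) < thresh for o in obstacle_boxes)]

peds = [(10, 10, 20, 40)]
obstacles = [(100, 100, 150, 150)]             # nothing near the pedestrian
print(unmatched_pedestrians(peds, obstacles))  # flags the mismatch
```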

They're trying to put in the "this is where we expect the pedestrian to be going in the future" / "this is where we expect the bicycle to be in the future" prediction. This is supposed to be in the next version. Cruise already has this and has had it for several years. This is an area where Tesla is behind and is catching up with Cruise. I'm glad to hear that they're catching up...
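The baseline any learned "where will it be" model has to beat is plain dead reckoning; a sketch (my own, nothing here is from the talk):

```python
# Constant-velocity prediction sketch: the trivial baseline for "where
# will the pedestrian/bicycle be" that any learned predictor must beat.
def predict(pos, vel, horizon_s: float, dt_s: float = 0.5):
    """Extrapolate (x, y) positions out to the horizon."""
    steps = int(horizon_s / dt_s)
    return [(pos[0] + vel[0] * dt_s * i, pos[1] + vel[1] * dt_s * i)
            for i in range(1, steps + 1)]

# Pedestrian at (5, 0) walking 1.4 m/s toward the road, 2-second lookahead.
print(predict((5.0, 0.0), (0.0, 1.4), horizon_s=2.0))
```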

He's doing hand-written algorithms for decision making. These are deployed a bit at a time to early access people.

Sooooo boring... he's talking mostly hype, and then hands off to Elon.

He watches every single accident.
 
Musk comments on redundancy. All the signal and power wires have been duplicated since Oct 2016. Braking and steering can run on the auxiliary power if the main pack fails. Power steering is duplicated.

Forget the robotaxi fantasy -- think about service. This certainly reduces the number of "dead on road" incidents a lot.

Musk goes back to previous forward-looking statements which he got right, but leaves out the ones like "full self-driving in 2017" and "10,000 cars per week by the end of 2018". Not impressed. He should have included a few of the ones he got wrong; it looks a lot better if you admit to your errors.

"There is still no car which can compete with the Model S of 2012. Still waiting." Well, omitting Tesla's other cars, I must agree.
 
  • Like
  • Helpful
Reactions: abasile and Ulmo