Neural Networks

Actually, if taken to court, Tesla would have a billion+ miles of data plus all their training test cases to show the vehicle behaves properly.
"Your honor and members of the jury, here are all the similar cases in which FSD acted in a reasonable manner, including this one. This event did (or did not) show an area the system can improve in (if did) and we have already updated all of our vehicles with that improvement."

Versus, 'yep we hard coded it to do exactly what it did when it did what we got sued for'.

You can't code against life. Nor can you create a system that can fully predict effects of actions.

"Ladies and Gentlemen of the jury, you heard the evidence presented by the defense. You heard how they collect data, how they process that data, and how they continue to process that data even right now as we speak. You heard the expert witness for the defense explain what the term 'outlier case' means. You heard defense counsel expain that my client's death was an outlier case. The defense would like for you to focus on the accident as a learning experience, an event showing an area where their dangerous system 'can improve in'. You see how easily the defense brushes off the death of my client, like it's no big deal, and that Tesla has 'already updated their vehicles with the improvement'.

Well you know what? Tesla needs to face the fact that their improvement came at the price of my client's life. And despite all of the similar cases that the defense presented to you, cases where Tesla's technology worked? Well guess what? They didn't work on the day that my client was killed by the same technology. Why should my client have to die so that Tesla's technology can improve? Do you think Mr. Karpathy, Mr. Musk, and the other members of the autopilot team that you heard testify on the stand during this trial would be willing to pay the price with their lives? My client never signed up to be a beta-tester, nor did my client sign up to die for Tesla's cause.

You heard Mr. Karpathy testify on that very stand, and explain to you how it's impossible to create a system that can fully predict effects of actions. What? You know what is possible? It's possible to refrain from creating a technology that fails to recognize that a stopped car or person is standing in its path, and it's possible to refrain from installing that technology in cars, which would then make it absolutely possible that my client would not be dead, and that you and I would not be spending our time in this courtroom today. That's what's possible. That's the price Tesla must pay if they want to play this game. But what did Tesla do instead? You saw Tesla's response to my client's death, saw how Tesla fails to take responsibility for its actions, saw exactly how far they've gone to avoid paying for my client's death, saw how Tesla views this as just one fatal accident out of the billions and billions of miles in which Tesla's vehicles behave properly.

Look. You and I can both sit here and agree that the technology is amazing, and as an owner of a Tesla Model S myself, I can speak from personal experience that the technology is cool. But let's save that for some other time. Let's get down to the most important question that you all have to decide today. That is, whether Tesla should compensate for my client's death where its technology failed to react properly in this specific situation? I think that we can both agree that the answer to that question is yes, and I trust that the members of this jury will make the right decision here. Thank you."
 
"Ladies and Gentlemen of the jury, you heard the evidence presented by the defense. You heard how they collect data, how they process that data, and how they continue to process that data even right now as we speak. You heard the expert witness for the defense explain what the term 'outlier case' means. You heard defense counsel expain that my client's death was an outlier case. The defense would like for you to focus on the accident as a learning experience, an event showing an area where their teenager 'can improve in'. You see how easily the defense brushes off the death of my client, like it's no big deal, and that Tesla has 'told the teenager what they dd wrong'.

Well you know what? Tesla needs to face the fact that their improvement came at the price of my client's life. And despite all of the similar cases that the defense presented to you, cases where the teenager drove safely? Well guess what? They didn't work on the day that my client was killed by the same technology. Why should my client have to die so that Tesla's teenager can improve? Do you think Mr. Karpathy, Mr. Musk, and the other parents of new drivers that you heard testify on the stand during this trial would be willing to pay the price with their lives? My client never signed up to be on the road with new drivers, nor did my client sign up to die for Tesla's cause.

You heard Mr. Karpathy testify on that very stand, and explain to you how it's impossible to create a human that can fully predict effects of actions. What? You know what is possible? It's possible to refrain from allowing anyone to drive, which would then make it absolutely possible that my client would not be dead, and that you and I would not be spending our time in this courtroom today. That's what's possible. That's the price Tesla must pay if they want to play this game. But what did Tesla do instead? You saw Tesla's response to my client's death, saw how Tesla fails to take responsibility for its actions, saw exactly how far they've gone to avoid paying for my client's death, saw how Tesla views this as just one fatal accident out of the billions and billions of miles in which teenagers behave properly.

Look. You and I can both sit here and agree that the ability to drive is amazing, and as an owner of a Tesla Model S myself, I can speak from personal experience that the technology is cool. But let's save that for some other time. Let's get down to the most important question that you all have to decide today. That is, whether Tesla should compensate for my client's death where its teenage driver failed to react properly in this specific situation? I think that we can both agree that the answer to that question is yes, and I trust that the members of this jury will make the right decision here. Thank you."

One could also insert doctor, or experienced driver.
 
So far, everybody in here is pretending that the inner workings of the NN can be explained. THEY CANNOT. The only thing that can be explained is that, given the input set and the training that has occurred, the weights and biases within the network produce a certain output for a certain input. The input space is effectively infinite, and nominally identical scenes may not produce the same output, because pixel values change from frame to frame.
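To make that concrete, here's a toy sketch in Python (made-up weights, nothing like a production network): the only "explanation" of the output is the arithmetic itself, and nudging a single pixel can change the answer.

```python
import numpy as np

# Toy two-layer network with random weights. The only "explanation" of its
# output is this arithmetic: weights, biases, and activations.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 64)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def classify(pixels):
    hidden = np.maximum(0.0, W1 @ pixels + b1)   # ReLU layer
    logits = W2 @ hidden + b2
    return int(np.argmax(logits))                # e.g. 0=clear, 1=line, 2=car (invented labels)

x = rng.uniform(0.0, 1.0, size=64)               # stand-in for an 8x8 image patch
x_nudged = x.copy()
x_nudged[10] += 0.05                             # one pixel shifts slightly

print(classify(x), classify(x_nudged))           # may or may not agree
```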

This is why there are effectively NO companies allowing the NN to do the control input. It would be idiotic to do such a thing when those control inputs can be so easily produced with traditional code. An additional side benefit of using traditional code is that you can hard code sanity checks, like at no point should the brake or accelerator be applied 100%, or be applied 100% and then immediately released.
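Something like this (purely illustrative, with invented limits and names) is all it takes in traditional code to guarantee a pedal command stays bounded and can't slam from 100% to zero in one cycle:

```python
# Illustrative only: a hand-rolled wrapper that clamps and rate-limits whatever
# a planner (NN-derived or not) requests, so "100% brake then instant release"
# style commands never reach the actuators.

MAX_CMD = 0.9          # never command more than 90% pedal
MAX_STEP = 0.2         # never change the command by more than 20% per cycle

def sanitize(requested: float, previous: float) -> float:
    clamped = max(0.0, min(MAX_CMD, requested))
    step = max(-MAX_STEP, min(MAX_STEP, clamped - previous))
    return previous + step

# Example: planner suddenly asks for full brake, holds it, then releases.
cmd = 0.0
for request in (1.0, 1.0, 0.0):
    cmd = sanitize(request, cmd)
    print(round(cmd, 2))   # 0.2, 0.4, 0.2 - smooth, bounded actuation
```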

You literally cannot defend in court something whose function you cannot explain. The inner workings of a NN are completely opaque, and not enough of you all seem to understand that fact.
 
An additional side benefit of using traditional code is that you can hard code sanity checks, like at no point should the brake or accelerator be applied 100%, or be applied 100% and then immediately released.

I imagine this sort of check is applied to the output of the NN, just as it is applied to whatever NN sits behind the steering wheel today.

IMO, once we get there, autonomy will need to be rolled into type approval/homologation. If a car is going to be supplied with L3 and above, it should pass a formal driving test for that level of ability, just as a human would (and feature updates would require recertification). Then, as with humans, we don't need to care about the inner workings of the system, just that the system has proven that it is sufficiently skillful to operate reliably within the specified domain.
 
I imagine this sort of check is applied to the output of the NN, just as it is applied to whatever NN sits behind the steering wheel today.

IMO, once we get there, autonomy will need to be rolled into type approval/homologation. If a car is going to be supplied with L3 and above, it should pass a formal driving test for that level of ability, just as a human would (and feature updates would require recertification). Then, as with humans, we don't need to care about the inner workings of the system, just that the system has proven that it is sufficiently skillful to operate reliably within the specified domain.
This seems like the obvious, if bureaucratic, answer from my perspective. Never mind that Tesla could easily get insurance to protect against such an eventuality, which statistically it will always be: an eventuality that can be planned for and mitigated to the extent feasible.
 
I imagine this sort of check is applied to the output of the NN, just as it is applied to whatever NN sits behind the steering wheel today.

Again, there seems to be quite a lot of confusion around this. There is no NN behind the steering wheel. It's traditional code. The NN is used to interpret the scene, bound objects in a box, and determine what they are. The NN is in absolutely no way in control of the vehicle except that it says "that's a line". Anything that produces input to a control (accelerate, brake, steering) is traditional code.
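Roughly the split being described, sketched in Python (hypothetical names, nobody's actual code): the NN only reports what it sees, and a plain proportional controller turns that into a bounded steering command.

```python
# Sketch only: perception output feeding a deterministic, inspectable controller.
from dataclasses import dataclass

@dataclass
class LaneEstimate:          # what a perception NN might emit (hypothetical)
    center_offset_m: float   # lateral offset of the lane centre from the car
    heading_error_rad: float # angle between car heading and lane direction

def steering_command(lane: LaneEstimate,
                     k_offset: float = 0.4,
                     k_heading: float = 1.2,
                     limit: float = 0.3) -> float:
    """Traditional code: deterministic, easy to bound and to explain."""
    raw = k_offset * lane.center_offset_m + k_heading * lane.heading_error_rad
    return max(-limit, min(limit, raw))

print(steering_command(LaneEstimate(center_offset_m=0.5, heading_error_rad=-0.05)))  # ~0.14, bounded
```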


IMO, once we get there,

Get where, exactly? What's the "there" that you're talking about?

autonomy will need to be rolled into type approval/homologation.

We already see that this is essentially meaningless in the US, so we're presumably only talking about European and similarly regulated nations?

If a car is going to be supplied with L3 and above, it should pass a formal driving test for that level of ability

Nobody in their right mind would argue against a standards based, comprehensive test suite that all autonomy systems must complete. That's not at all what's being discussed here, and I've argued for this for years now.

Then, as with humans, we don't need to care about the inner workings of the system, just that the system has proven that it is sufficiently skillful to operate reliably within the specified domain.

Except that NNs work nothing like human minds do, and unless a NN has been trained to identify something, it has no concept of what it is or how to behave around it. So, in that case, your correct choice is to identify that an object is present and unknown (which is itself a massively computationally complex problem for a NN, and likely better solved with traditional code), which in turn triggers the traditionally coded control system to stop. Otherwise, because a NN is not at all like a human and not at all capable of comprehending anything it hasn't been trained on, a NN-controlled car seeing something new would happily plow into it as though nothing were present at all.
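A hedged sketch of that fallback (labels, confidences, and thresholds all invented): if perception reports something in the path that it can't classify confidently, traditional code commands a stop.

```python
# Hypothetical fallback logic: don't trust the NN to "figure out" the unknown.

def should_stop(detections, confidence_floor=0.6):
    """detections: list of (label, confidence, in_path) tuples from perception."""
    for label, confidence, in_path in detections:
        if in_path and (label == "unknown" or confidence < confidence_floor):
            return True
    return False

frame = [("car", 0.95, False), ("unknown", 0.30, True)]
print(should_stop(frame))   # True -> traditional control code brakes
```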
 
Again, there seems to be quite a lot of confusion around this. There is no NN behind the steering wheel. It's traditional code.

You have code running in your head? Most people who sit behind a steering wheel have an NN ;)

Get where, exactly? What's the "there" that you're talking about?

"There" = autonomy which is suitable for any public road.

We already see that this is essentially meaningless in the US, so we're presumably only talking about European and similarly regulated nations?

I imagine this would sit under the remit of the NHTSA in the US?

Except that NNs work nothing like human minds do, and unless a NN has been trained to identify something, it has no concept of what it is or how to behave around it. So, in that case, your correct choice is to identify that an object is present and unknown (which is itself a massively computationally complex problem for a NN, and likely better solved with traditional code), which in turn triggers the traditionally coded control system to stop. Otherwise, because a NN is not at all like a human and not at all capable of comprehending anything it hasn't been trained on, a NN-controlled car seeing something new would happily plow into it as though nothing were present at all.

Not sure I agree with your assumption that an NN is able to identify and accurately position something that it doesn't recognise. Imagine not being able to take the car out on a cloudy day, as the clouds confuse it, or near glass buildings, because it can't cope with reflected images, etc, etc.

Let's stretch our NNs/traditional code and try a little gedankenexperiment. Let's say we trained a chimp to drive a car, and, after 100,000 miles and several years of faultless driving, the chimp causes a serious accident. The chimp survives, but there is no way to see what was going on in its head, so how do we figure out what went wrong?
 
You have code running in your head? Most people who sit behind a steering wheel have an NN ;)

A computer NN is absolutely nothing like the human brain. To compare them is pointless. You don't have a NN behind your steering wheel, you have the product of hundreds of millions of years of evolution and unexplained processes, interactions, and reactions.


"There" = autonomy which is suitable for any public road.

Cool. Well, we're talking about a NN controlling the inputs of the vehicle, which is not how anybody is solving this problem, because it would be corporate suicide and literal murder: computer NNs are in essence unpredictable when faced with something they haven't been trained to handle, and imperceptible changes to their inputs can massively change their outputs.

Humans, for as much as everyone here seems to be ignoring the fact, do not fit that description. Self preservation alone makes human minds significantly better than computer NN.


I imagine this would sit under the remit of the NHTSA in the US?

Not really, no. Manufacturers have a requirement to report autonomously driven miles in some states, and that's about it. BTW, Tesla has still reported basically zero miles, which means they're either falsifying government records, or they aren't operating autonomously. This was clarified in a document dump and FOIA request where Tesla reported that in fact their autonomy day "autonomous" rides were not actually autonomous.

Not sure I agree with your assumption that an NN is able to identify and accurately position something that it doesn't recognise. Imagine not being able to take the car out on a cloudy day, as the clouds confuse it, or near glass buildings, because it can't cope with reflected images, etc, etc.

Yeah, so, I guess first off, you don't have to agree with me. I have facts on my side. This is the problem with having this conversation with people that don't actually know what a NN is or how it works. They just make up the things they don't know. A NN can not identify something it hasn't been trained on. Again, AT BEST you could have some general solution where, when the edge detection algorithms find edges in a region the system would normally consider clear, traditional code is informed of that.

Second, and most importantly, your example of reflections is LITERALLY a place where NN fail. It's funny you mentioned that and didn't bother looking it up. A NN doesn't have a sense of self like you, so if you give it a picture of itself in a mirror, it would conclude that a car is coming directly at it. At twice the speed that it's approaching the mirror. I mean, this is super basic stuff here. Interestingly, things like lidar can be used to detect these kinds of situations, but once more this is a candidate for traditional code.
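For illustration only, a made-up heuristic along those lines (not anyone's shipping system): a lidar or radar return whose closing speed is roughly twice our own speed is at least consistent with seeing our own reflection, and that check is plain traditional code.

```python
# Purely illustrative heuristic with invented thresholds, not a real system.

def looks_like_reflection(ego_speed_mps: float,
                          target_closing_speed_mps: float,
                          tolerance: float = 0.3) -> bool:
    if ego_speed_mps <= 0:
        return False
    ratio = target_closing_speed_mps / ego_speed_mps
    return abs(ratio - 2.0) < tolerance   # reflection closes at ~2x ego speed

print(looks_like_reflection(10.0, 19.8))  # True: flag for extra scrutiny
```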


Let's stretch our NNs/traditional code and try a little gedankenexperiment. Let's say we trained a chimp to drive a car, and, after 100,000 miles and several years of faultless driving, the chimp causes a serious accident. The chimp survives, but there is no way to see what was going on in its head, so how do we figure out what went wrong?


This is a stupid hypothetical. And it's exactly why we don't let chimps drive. WTF is going on here on this site lately? Did you really consider this to be a persuasive idea?
 
Found this paper interesting; Karpathy tweeted it. They improved the optical-flow SOTA by 16%, and it scales better, with fast runtime on GPUs (even better on HW3/HW4, I guess) and for large video streams (like 8 cameras).

https://arxiv.org/pdf/2003.12039.pdf

I assume the first step for Tesla is to go from 2.5D to 4D labelling of the dataset, but at some point I think they will change the architecture of their neural network from an Inception-style CNN to some form of transformer-style network.
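For anyone who wants to poke at this themselves, a minimal sketch, assuming a torchvision recent enough to ship torchvision.models.optical_flow (it includes a RAFT implementation, the model from this paper); this is just to show what the output looks like, not Tesla's pipeline.

```python
import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

weights = Raft_Large_Weights.DEFAULT          # pretrained weights (downloads on first use)
model = raft_large(weights=weights).eval()

# Two consecutive frames, batched, with H and W divisible by 8 as RAFT expects.
frame_t0 = torch.rand(1, 3, 360, 640)
frame_t1 = torch.rand(1, 3, 360, 640)
frame_t0, frame_t1 = weights.transforms()(frame_t0, frame_t1)

with torch.no_grad():
    flows = model(frame_t0, frame_t1)         # list of iterative refinements
flow = flows[-1]                              # (1, 2, H, W): per-pixel (dx, dy)
print(flow.shape)
```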

A computer NN is absolutely nothing like the human brain. To compare them is pointless. You don't have a NN behind your steering wheel, you have the product of hundreds of millions of years of evolution and unexplained processes, interactions, and reactions.

Hmmm... what does the first "N" in "NN" stand for?

Yeah, so, I guess first off, you don't have to agree with me. I have facts on my side. This is the problem with having this conversation with people that don't actually know what a NN is or how it works. They just make up the things they don't know. A NN can not identify something it hasn't been trained on.

That is exactly what I said, so I am glad you agree with me. But, in your earlier post, you suggested that the NN should hand over to traditional code when it encounters something it doesn't recognise. You said "your correct choice is to identify that an object is present and unknown (which is itself a massively computationally complex problem for a NN, and likely better solved with traditional code), which in turn triggers the traditionally coded control system".

It isn't a complex problem for a NN. It is a complex problem for the architect who designs the NN and the training set. But think it through: even if the NN could generally identify objects, it would also need to be able to determine whether the unidentified object was a threat before handing back control.

Second, and most importantly, your example of reflections is LITERALLY a place where NN fail. It's funny you mentioned that and didn't bother looking it up. A NN doesn't have a sense of self like you, so if you give it a picture of itself in a mirror, it would conclude that a car is coming directly at it. At twice the speed that it's approaching the mirror. I mean, this is super basic stuff here. Interestingly, things like lidar can be used to detect these kinds of situations, but once more this is a candidate for traditional code.

Do you honestly think I just made those examples up..?

This is a stupid hypothetical. And it's exactly why we don't let chimps drive. WTF is going on here on this site lately? Did you really consider this to be a persuasive idea?

Hahaha!
 
Hmmm... what does the first "N" in "NN" stand for?

From the 1950s when the computer scientists that didn't know anything about how the brain works presumed they were simulating one. In no way does a computer neural network work the same way your brain does, nor your neurons.

You'd know this if you bothered to understand the topic before debating it.


in your earlier post, you suggested that the NN should hand over to traditional code when it encounters something it doesn't recognise. You said "your correct choice is to identify that an object is present and unknown (which is itself a massively computationally complex problem for a NN, and likely better solved with traditional code), which in turn triggers the traditionally coded control system".

Which I clarified with additional detail. By describing the output of edge detection being used to determine an unknown object is present. A neural network would literally not output anything meaningful because none of its weights and biases have been designed to.
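A toy version of that hand-off, with invented thresholds: measure edge energy inside the region the planner expects to be clear road, and if it's unexpectedly high while the NN has classified nothing there, flag it for the traditional code.

```python
import numpy as np

def unknown_object_suspected(gray_patch: np.ndarray,
                             nn_found_anything: bool,
                             edge_energy_threshold: float = 2.0) -> bool:
    # Gradient magnitude as a crude stand-in for an edge detector.
    gy, gx = np.gradient(gray_patch.astype(float))
    edge_energy = float(np.mean(np.hypot(gx, gy)))
    return (not nn_found_anything) and edge_energy > edge_energy_threshold

patch = np.zeros((64, 64))
patch[20:40, 20:40] = 255.0   # something sits in the "clear" zone
print(unknown_object_suspected(patch, nn_found_anything=False))  # True -> hand to traditional code
```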

Do you honestly think I just made those examples up..?

Yes.


Agreed. You seem to believe in magic. Good luck with that.
 
This is why there are effectively NO companies allowing the NN to do the control input. It would be idiotic to do such a thing when those control inputs can be so easily produced with traditional code. An additional side benefit of using traditional code is that you can hard code sanity checks, like at no point should the brake or accelerator be applied 100%, or be applied 100% and then immediately released.

I believe the guy behind comma.ai claimed in an interview at one point that they've trained a NN on how people do lane changes (with the caveat of having picked good examples rather than using every possible input, of course) and that they're using that NN to drive the lane-changing behavior, rather than traditional code. (I think it was with Lex Fridman, but I'm not going to rewatch the entire thing trying to find the timestamp, partly because I might be misremembering who he was talking to, so it might have been a different interview altogether.) So, assuming that was true, you're incorrect that nobody lets the NN handle controls.
 
From the 1950s when the computer scientists that didn't know anything about how the brain works presumed they were simulating one. In no way does a computer neural network work the same way your brain does, nor your neurons.

You'd know this if you bothered to understand the topic before debating it.

Maybe you need to go back to your research. For starters, "neural network" is a generic term.

Which I clarified with additional detail. By describing the output of edge detection being used to determine an unknown object is present. A neural network would literally not output anything meaningful because none of its weights and biases have been designed to.

Edge detection is equally meaningless in this context: your solution would literally be scared of shadows.

It is always amusing when anonymous people post stuff online as if they wrote the book on a particular topic, and rather than debate and teach and learn, they take the view that their comprehension of that topic is peerless and never wrong. Anyone who disagrees with them must be ignorant and is quickly dismissed. Anyway, thanks for the banter! You have reminded me to check up on these guys, who are doing some interesting stuff which might even be good enough to replace your edge detection solution...

What Is the Akida Event Domain Neural Processor?
 
Maybe you need to go back to your research. For starters, "neural network" is a generic term.

Yeah. A term coined in the 1950s when computer scientists, not neuroscientists, were working on simulating a brain. And it turns out their entire understanding of how a brain works was primitive and wrong.


Edge detection is equally meaningless in this context: your solution would literally be scared of shadows.

So you're just going to ignore the part where I said the edge detection layer feeds data to traditional code. Cool. Back here in the real world, things like volume estimation can be done in traditional code, which differentiates between a shadow (a 2d object) and an object with volume. I shouldn't really need to point this out if you folks are attempting to school me in how neural nets work and how autonomous driving systems are using neural nets to produce input to control systems.

It is always amusing when anonymous people post stuff online as if they wrote the book on a particular topic, and rather than debate and teach and learn, they take the view that their comprehension of that topic is peerless and never wrong.

That's the difference between me and many people on this sub. First, I'm not anonymous. Second, I listen to actual experts. Like, literal experts that have literally invented entire fields of study when it comes to machine learning. Another generic term, BTW.

Anyone who disagrees with them must be ignorant and is quickly dismissed.

Not at all. But having been in the industry for many decades now, I've gained an ability to detect someone that believes in magic versus someone that understands how systems work. I simply didn't think there was much point to credential waving when there is so much research freely available online. OpenCourseWare has some great lectures, BTW, for those that might be interested.

Anyway, thanks for the banter! You have reminded me to check up on these guys, who are doing some interesting stuff which might even be good enough to replace your edge detection solution...

It still needs to be trained to detect unknown objects. The improvement is that the entire network doesn't need to be retrained. I'd love to see how it stacks up accuracy-wise over time, which the paper doesn't mention, because the more you lean on the local training, the more the behavior will likely diverge in unexpected ways. That's something researchers of very large networks have discovered over time.

Regardless, the so-called transfer learning step still would never be used in a real world vehicle system to produce control inputs. Otherwise, you're going to need to dedicate a few cars to smash into objects for the network to be trained that hitting them is bad and that those objects the training vehicles have hit belong to a class that needs to be avoided.

Then again, literally all of this becomes completely moot if you simply feed an image with detected edges into a bit of traditional code that estimates volume and tells the car to stop for any object in its path with volume that hasn't been classified by the NN. The bit none of you seem to have picked up on yet is that this can be a trigger to collect training data, which can be fed through the training system to become part of the next updated network.
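A sketch of that trigger, with every name, threshold, and the labelling queue invented:

```python
# Hypothetical "stop and harvest training data" trigger.

def queue_for_labelling(frames):
    # Stand-in for shipping clips back to the offline training pipeline.
    print(f"queued {len(frames)} frames for offline labelling")

def handle_unclassified_object(in_path, estimated_height_m, nn_label, frame_buffer):
    has_volume = estimated_height_m > 0.10       # a shadow is effectively flat
    if in_path and has_volume and nn_label is None:
        queue_for_labelling(frame_buffer)        # feeds the next network update
        return "stop"
    return "continue"

print(handle_unclassified_object(True, 0.8, None, frame_buffer=list(range(30))))  # -> stop
```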

I quoted Wikipedia, which says they are similar. What is your source, or are you just DrBabbles?

Yeah, I mean, if you're quoting Wikipedia here then the conversation is sort of over.

Neural Networks Do Not Work Like Human Brains – Let’s Debunk The Myth

The differences between Artificial and Biological Neural Networks

A critique of pure learning and what artificial neural networks can learn from animal brains | Nature Communications

There are almost too many ACM articles on this topic to even begin to cite, and the number of published papers is even higher. But I think the above links are a pretty good primer. Wikipedia is not a primary source.