
Don't take your hands off the wheel

I tried not to judge his personal motivations too harshly, as I find him a very sympathetic character who is in general doing useful work, but he does seem a bit too emotionally attached to Tesla/Musk, for whom this "paper" appears mainly designed to do a solid.
I don't think you're listening closely enough to him. He has a very different take from Musk on how human-AI interaction should happen and how best to approach the arrangement and allocation of tasks.
 
It is too bad my comments about exactly that were snipped out when I was quoted: some uncertainty about what exactly will happen going forward, and that this isn't conclusive. :(

As for your last sentence: proofs are for mathematicians and alcohol. :p Science, and nominally regulation, are about levels of confidence. You can't really prove these things; you just need increasingly large chunks of increasingly controlled data to [potentially] increase confidence. This, of course, is very preliminary. However, if you don't think it takes a meaningful amount of air out of the assumption that automobile automation becomes very unsafe in the valley between manual driving and full AI, and that there is no way to manage it and meaningfully mitigate the risk, then you're doing a very poor job of adjusting your priors, and "science oriented" isn't a mantle you should be ascribing to yourself.
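To be concrete about what "adjusting your priors" means here, a toy sketch. Every number in it is invented purely for illustration; none of it is the MIT data or anyone's real estimate:

```python
# Toy Bayesian update: how a prior belief shifts as consistent evidence accumulates.
# All numbers here are made up for illustration; they are NOT the MIT results.

prior = 0.70  # assumed prior belief that partial automation is unmanageably unsafe

# Assumed likelihoods of observing "drivers stayed attentive" in a study,
# under each hypothesis (invented for the sketch).
p_evidence_if_unsafe = 0.30
p_evidence_if_manageable = 0.80

def update(prior, p_e_h, p_e_not_h):
    """One Bayes step: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_h * prior + p_e_not_h * (1 - prior)
    return p_e_h * prior / p_e

belief = prior
for study in range(1, 4):  # each additional consistent study nudges the belief
    belief = update(belief, p_evidence_if_unsafe, p_evidence_if_manageable)
    print(f"after study {study}: P(unmanageably unsafe) = {belief:.2f}")
```

The point isn't the particular numbers; it's that each consistent piece of evidence should move the needle, even though none of them individually proves anything.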

I'm trying to unpack what you're saying here -- but first allow me a drink ;)

One could draw a conclusion from the MIT report that all that is necessary is for the driver to believe they are being examined, and that will significantly decrease the likelihood that an accident will occur.

The vast majority of accidents on the road are preventable; many are caused by a lack of attention, distraction, and, unfortunately, in some cases, impaired driving. Have a look at some of these statistics:

  • The National Safety Council reports that cell phone use while driving leads to 1.6 million crashes each year.
  • Nearly 390,000 injuries occur each year from accidents caused by texting while driving.
  • 1 out of every 4 car accidents in the United States is caused by texting and driving.
  • Texting while driving is 6x more likely to cause an accident than driving drunk.
  • Answering a text takes away your attention for about five seconds. Traveling at 55 mph, that's enough time to travel the length of a football field (quick arithmetic check after this list).
  • Texting while driving causes a 400 percent increase in time spent with eyes off the road.
  • Of all cell phone related tasks, texting is by far the most dangerous activity.
  • 94 percent of drivers support a ban on texting while driving.
  • 74 percent of drivers support a ban on hand-held cell phone use.
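A quick unit-conversion check on that football-field bullet. The 55 mph and five seconds come straight from the list; the 300 ft field length is the standard distance between the goal lines:

```python
# Sanity check: distance covered in 5 seconds at 55 mph vs. a football field.
speed_mph = 55
eyes_off_road_s = 5

feet_per_mile = 5280
speed_fps = speed_mph * feet_per_mile / 3600   # ~80.7 ft/s
distance_ft = speed_fps * eyes_off_road_s       # ~403 ft

football_field_ft = 300  # 100 yards between the goal lines
print(f"{distance_ft:.0f} ft travelled vs. {football_field_ft} ft field")
# ~403 ft, i.e. more than the length of the field -- the bullet checks out.
```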
Have a look at the final two bullet points - and this is really the phenomenal point about being human - we often engage in behaviors which WE KNOW are bad for us. Internally, we have our own set of competing interests, including those that are intellectual and emotive.
It is likely that once FSD matures, we will see a similar percentage of people supporting FSD if it can be shown that the risks are significantly lower (say by five to ten times) - This was also Musk's point. That data SHOULD be collected objectively.

This means there should be some definition of how that data is collected, where it is obtained from, and how it is corroborated and interpreted. Whether you call it an objective scientific measure or merely "mitigating risk," it still must be done according to a standard set of principles. It must also take human factors into account - if we wanted to improve road safety, we could do it today in a variety of ways that just would not work for people (say by restricting driving to people in their mid-20s and up, or by lowering road speeds to 20 miles per hour).
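To give a feel for why the "how the data is collected" part matters, here is a rough, purely illustrative sketch of how much comparable driving you'd need before a claimed five-times improvement even rises above statistical noise. The baseline crash rate is an assumption picked for the sketch, not a real NHTSA or Tesla figure, and the four-sigma cutoff is arbitrary:

```python
import math

# Illustrative only: assumed baseline rate of reported crashes.
baseline_per_million_miles = 2.0          # assumption, not a real statistic
improvement_factor = 5                    # the "five times safer" claim
improved_rate = baseline_per_million_miles / improvement_factor

# Crude criterion: enough miles that the expected crash counts under the two
# rates differ by ~4 combined standard deviations (Poisson: sd = sqrt(mean)).
miles_millions = 1
while True:
    n_base = baseline_per_million_miles * miles_millions
    n_improved = improved_rate * miles_millions
    separation = (n_base - n_improved) / math.sqrt(n_base + n_improved)
    if separation >= 4:
        break
    miles_millions += 1

print(f"~{miles_millions} million comparable miles before a {improvement_factor}x "
      f"difference stands ~4 sigma above Poisson noise")
```

And "comparable" is doing a lot of work there: the miles on both sides have to come from similar roads and conditions, which is exactly the corroboration and interpretation problem.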
 
One could draw a conclusion from the MIT report that all that is necessary is for the driver to believe they are being examined, and that will significantly decrease the likelihood that an accident will occur.
AP is doing that. Right? Gaze across Tesla message boards and you'll see the loathing people have for getting nagged at, complaining about being watched. :)

Again, nowhere close to conclusive but if this doesn't adjust your priors at all then I don't think you're being objective.
 
AP is doing that. Right? Gaze across Tesla message boards and you'll see the loathing people have for getting nagged at, complaining about being watched. :)

Again, nowhere close to conclusive but if this doesn't adjust your priors at all then I don't think you're being objective.

I would say no, as it is an intermittent test and can be easily fooled with wedged fruit or vegetables.

Anecdotal evidence (in the form of poll participation on this board) indicates about 12.5% of users may engage in such hazardous behaviour to leave themselves free for crucial texting activities, which I posit they would not if they knew they were under close video surveillance by MIT or had an actual cop riding shotgun.


Then you should understand that suggesting he has a bias in that direction runs completely counter to his outlook.

Yes, I'm thinking he is somewhat conflicted wrt Tesla/Musk.
 
AP is doing that. Right? Gaze across Tesla message boards and you'll see the loathing people have for getting nagged at, complaining about being watched. :)

Again, nowhere close to conclusive but if this doesn't adjust your priors at all then I don't think you're being objective.

We know that in the world of quantum mechanics, merely looking at something changes its outcome.

One thing that gave me pause was when Elon Musk mentioned animal-based transportation being on par with vehicles that don't have self-driving capabilities. I found it a bit ironic, because thousands of years ago we could say we already had some level of self-driving capability -- and even a donkey could climb mountainous regions that are entirely out of scope for most vehicles. In essence, we traded automation for speed and comfort, and now we're trying to add it back in again ;)
 
AP is doing that. Right? Gaze across Tesla message boards and you'll see the loathing people have for getting nagged at, complaining about being watched. :)

Again, nowhere close to conclusive but if this doesn't adjust your priors at all then I don't think you're being objective.

Also, it's common knowledge that humans don't like being told what to do - even if they know it's right, and even if they were going to make the same choice without input from another source. Here we have unlimited examples -- how many times have you felt like stopping when someone behind you honked at you to move forward, or balked when your spouse asked you to do something you dislike doing (taking out the garbage, or going shopping)?
 
We know that in the world of quantum mechanics, merely looking at something changes its outcome.
Then we must invalidate all human scientific endeavor! Because even when we measure indirectly, we're measuring. Question everything! It's all lies! Time to dig out my Supertramp album; surely they had a song about this. ;)

Come on, this is cheap.
One thing that gave me pause was when Elon Musk mentioned animal-based transportation being on par with vehicles that don't have self-driving capabilities. I found it a bit ironic, because thousands of years ago we could say we already had some level of self-driving capability -- and even a donkey could climb mountainous regions that are entirely out of scope for most vehicles. In essence, we traded automation for speed and comfort, and now we're trying to add it back in again ;)
Robotics is rife with the beast-of-burden motif, and with replicating animal forms in general. Prime example: Boston Dynamics.

There's even that cockroach-cyborg project, and things like that, which try to leverage bio-mechanics to shortcut around the tricky engineering and manufacturing tasks of building the tool/vehicle.

It is indeed kinda the dream: "like a pack animal, but with more predictable control and easier to mass-produce". Training an animal is a long process that doesn't really scale. If you can get that into software that you load into a mass-manufactured body, boom: a much larger scale and much more reliable results -- at least reliable in a repeatable fashion, if not potentially "smarter" than the pack animal at the desired task.
 
Also, it's common knowledge that humans don't like being told what to do - even if they know it's right, and even if they were going to make the same choice without input from another source. Here we have unlimited examples -- how many times have you felt like stopping when someone behind you honked at you to move forward, or balked when your spouse asked you to do something you dislike doing (taking out the garbage, or going shopping)?
You're just spiraling down into angels-dancing-on-a-pin "what ifs". :/ It isn't a bad prior (I think it was a really reasonable one, in fact; I had it), but if you just can't let it be that, well, that's an issue.
 
Then we must invalidate all human scientific endeavor! Because even when we measure indirectly, we're measuring. Question everything! It's all lies! Time to dig out my Supertramp album; surely they had a song about this. ;)

Come on, this is cheap.

Robotics is rife with the beast-of-burden motif, and with replicating animal forms in general. Prime example: Boston Dynamics.

There's even that cockroach-cyborg project, and things like that, which try to leverage bio-mechanics to shortcut around the tricky engineering and manufacturing tasks of building the tool/vehicle.

It is indeed kinda the dream: "like a pack animal, but with more predictable control and easier to mass-produce". Training an animal is a long process that doesn't really scale. If you can get that into software that you load into a mass-manufactured body, boom: a much larger scale and much more reliable results -- at least reliable in a repeatable fashion, if not potentially "smarter" than the pack animal at the desired task.

We are only now discovering the wonders of natural neural networks, and we still are clueless about how they work. Animals (and humans) are not born with an entirely clean slate; there are some things they do from birth that almost allow them to "hit the ground running." I'm particularly amazed at the work E. O. Wilson has done with ants - we are still entirely clueless as to how they join forces to create something as complex as an ant colony, which arguably a single ant does not have the neural capacity to design. If you haven't seen any of the ant documentaries, spend some time on YouTube - they're incredible!

At some point, we may not need to train animals; we'll pass down whatever we'd like them to "know" genetically. Animals are also incredible (mainly fish and insects) when it comes to mass production and self-assembly. I think in the US, we produce somewhere around 9 billion chickens a year - talk about efficiency and scale. I think Elon is right in his assessment (likely taken from William Gibson and others) that there will be a joining of human and machine. Right now, AP is a very rough approximation of this idea.
 
We are only now discovering the wonders of natural neural networks, and we still are clueless about how they work.
We do indeed have clues. We certainly don't have the bulk of the answers, but we've got some decent working ideas.
At some point, we may not need to train animals; we'll pass down whatever we'd like them to "know" genetically. Animals are also incredible (mainly fish and insects) when it comes to mass production and self-assembly.
The stuff we have to "hit the ground running" is fairly rudimentary. It isn't really anything sentient. The complexity, growth, and changes we go through are actually massive.

The problem here is you're really talking about packing 10 lb of mud into a 5 lb sack. DNA has a relatively limited storage capacity without going way over the edge into re-engineering it into something extremely removed from biology. The intricate details built up over time, from "learning" but also via physiological development, equate to an extremely ordered physical system you'd be trying to recreate. In turn, that represents a huge amount of information, beyond the normal capacity of DNA as it stands -- potentially beyond it forever, depending on your goals here.

That path likely passes through being able to program NNs without the current "learning" cycle, just being able to compute the matrix of what we want directly. Which is stunningly hard.
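To illustrate what "computing the matrix of what we want" means in the trivial case: for a toy problem like XOR you can write the weights down by hand, with no learning cycle at all. This is just a classic textbook example, nothing to do with any real driving network; the point is that nobody knows how to do the same for anything remotely as large as a driving policy:

```python
import numpy as np

# "Computing the matrix we want" by hand, for a problem tiny enough to reason
# about: XOR with a 2-2-1 network using step activations. No learning cycle.

def step(x):
    return (x > 0).astype(float)

# Hidden layer: neuron 1 fires for OR, neuron 2 fires for AND.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

# Output: OR minus AND gives XOR.
W2 = np.array([1.0, -1.0])
b2 = -0.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(np.array(x) @ W1 + b1)
    y = step(h @ W2 + b2)
    print(x, int(y))
# Prints the XOR truth table -- but only because XOR is small enough that a
# human can see the whole structure. Doing this for a driving policy is the
# "stunningly hard" part.
```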
 
You're just spiraling down into angels-dancing-on-a-pin "what ifs". :/ It isn't a bad prior (I think it was a really reasonable one, in fact; I had it), but if you just can't let it be that, well, that's an issue.

By the way, I'm not knocking the effort; I'm just looking at it for what it is - a single study - and I encourage more of the same. I found the lead author intelligent and engaging, which is quite refreshing compared with much of what we see on many of the Tesla-inspired YouTube channels.

People, at the end of the day, still appreciate and prefer a level of autonomy. There will be a careful balance at play here: where the person begins and ends, and where automation takes over. Some of us drive because we love the freedom; some of us dread it for the same reason if it means we are sitting in the middle of a giant, slow-moving, amorphous mob of "soul-sucking" traffic. Perhaps in the future, our experience in a car will be more like a lounge or theatre, where we have very little exposure to the outside and can create and take part in an entirely separate environment inside. Maybe this is what Elon had in mind when he implemented the Atari emulator -- want an incredible driving experience? Drive a racing simulation while the car takes you down boring and repetitive roads, or race with your friends who may actually be in another timezone or place. If Elon has his way, we may be trying to simulate Earth - while living on Mars.
 
We do indeed have clues. We certainly don't have the bulk of the answers, but we've got some decent working ideas.

The stuff we have to "hit the ground running" is fairly rudimentary. It isn't really anything sentient. The complexity, growth, and changes we go through are actually massive.

The problem here is you're really talking about packing 10 lb of mud into a 5 lb sack. DNA has a relatively limited storage capacity without going way over the edge into re-engineering it into something extremely removed from biology.

The intricate details built up over time, from "learning" but also via physiological development, equate to an extremely ordered physical system you'd be trying to recreate. In turn, that represents a huge amount of information, beyond the normal capacity of DNA as it stands -- potentially beyond it forever, depending on your goals here.

That path likely passes through being able to program NNs without the current "learning" cycle, just being able to compute the matrix of what we want directly. Which is stunningly hard.

They're both massive. Some insects have a very short life-span (a matter of hours) and are born with most of the knowledge and capability they'll ever accumulate. I am often amazed by the capability of the common housefly; if we pause and think about it, they are an incredible engineering marvel.

Regarding the information density of DNA: it is also unparalleled. It has been estimated that we can pack nearly 215 petabytes of data into a single gram of DNA. Today, encoding and decoding this data is very slow, but the rate of improvement in sequencing and reliability (in both reads and writes) is rapidly increasing. Whether we adopt similar mechanisms into storage-system technology or design our own remains to be seen.
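For anyone wondering where a number like 215 PB per gram can even come from, here's the crude ceiling arithmetic. The ~330 g/mol per nucleotide and 2 bits per base are standard textbook values; the note about why practical schemes land far below the ceiling is my own gloss, not something taken from whoever produced that estimate:

```python
# Crude ceiling on DNA information density (illustrative arithmetic only).
AVOGADRO = 6.022e23
nt_molar_mass_g = 330.0      # ~330 g/mol per single-stranded nucleotide (approx.)
bits_per_nt = 2.0            # 4 bases -> at most 2 bits each

nt_per_gram = AVOGADRO / nt_molar_mass_g          # ~1.8e21 nucleotides
bytes_per_gram = nt_per_gram * bits_per_nt / 8    # ~4.6e20 bytes

print(f"theoretical ceiling: ~{bytes_per_gram / 1e18:.0f} exabytes per gram")
# Practical schemes (error correction, primers, many physical copies of each
# sequence for reliable reads) land orders of magnitude below this ceiling,
# which is consistent with estimates in the hundreds-of-petabytes-per-gram range.
```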

I think the overall amount of information required to train the system is far in excess of what actually needs to be present when executing on it. Once that system is designed, it can be packed into a rather small base of code and information processing. The software code base and the neural network itself are actually quite small and seem to easily fit within a few gigabytes, while the data collected to get to this point is likely in the petabytes, gathered over many months and years.
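Rough numbers to show the asymmetry I mean. Every figure here is an assumption invented for the sketch (parameter count, bytes per weight, fleet size, upload rate), not anything Tesla has published:

```python
# Illustrative asymmetry between a deployed network and the data behind it.
# Every figure below is an assumption for the sketch, not a published spec.

params = 100e6              # assume a ~100M-parameter vision network
bytes_per_weight = 2        # assume fp16 weights
deployed_bytes = params * bytes_per_weight
print(f"deployed network: ~{deployed_bytes / 1e9:.1f} GB")   # ~0.2 GB

fleet_cars = 100_000        # assumed number of contributing vehicles
mb_per_car_per_day = 50     # assumed uploaded clips/telemetry per car
days = 365
collected_bytes = fleet_cars * mb_per_car_per_day * 1e6 * days
print(f"collected in a year: ~{collected_bytes / 1e15:.1f} PB")  # ~1.8 PB
```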

This problem of self-driving is quite narrow and well-defined when compared to the massive range of tasks that a non-specific organic system must accomplish during its lifetime. This is why automation and self-driving are inevitable (as hard as it is, it is still considerably easier than the tasks our brain executes subconsciously at every moment).
 
What this strikes me as:

[screenshot attachment]


Doing the math like you did cuts both ways, and it gets dwarfed.
 
Also, it's common knowledge that humans don't like being told what to do - even if they know it's right, and even if they were going to make the same choice without input from another source. Here we have unlimited examples -- how many times have you felt like stopping when someone behind you honked at you to move forward, or balked when your spouse asked you to do something you dislike doing (taking out the garbage, or going shopping)?

Hey! Don't you tell me that I don't like being told what to do! :)
 
Basing any speculation about HW3 capabilities on the existing HW2.x NN makes as much sense as concluding a 12-month-old will never, ever walk because they keep falling over when they try.
You must be new here. The same statement was made when AP1 came out, then when AP2 came out, then when AP2.5 came out, and the same statement will be made again when AP3 comes out. FSD is always "this year", always "next hardware rev", always......

EAP as it stands today is just horrible. It's always reacting (slowly). It drives like a 16-year-old who just got their license (driving right off the end of their hood ornament). It never anticipates. Every control input is herky-jerky. It stabs the brakes when going down the freeway with no one around.

AP3 cameras/processors won't fix those things. But feel free to think that they will.