Welcome to Tesla Motors Club

Summon seems like a silly party trick to me

In no way am I saying that I don't think it's up to the one initiating summon to do so responsibly and to take all precautions to ensure that no one is injured.

That's a lot of double negatives, but I think you're saying it's up to the summoner to use Summon responsibly.

I agree.

But... have you noticed that in probably 75% of the videos people have posted -- whether Summon succeeded or failed -- the summoner was not using it responsibly?

This is called "predictable abuse." It has a formal definition, and there are related legal precedents.
 
That's a lot of double negatives, but I think you're saying it's up to the summoner to use Summon responsibly.

I agree.

But... have you noticed that in probably 75% of the videos people have posted -- whether Summon succeeded or failed -- the summoner was not using it responsibly?

This is called "predictable abuse." It has a formal definition, and there are related legal precedents.
So does the definition of predictable abuse apply to the Bronx Zoo example of the moron jumping into the lion exhibit? Clearly this woman is impaired in some way. There are millions of impaired folks walking the face of the earth. So, to steal one from your playbook and go all the way to the extreme, what's next?

In your world they should eliminate any possibility of a moron doing something stupid. Do they need to shut down the zoo? Do they need to perform psychological evaluations at the gate?

I mean, we have to rid the world of every possible risk and danger right? We can't possibly accept some level of risk in order to better humanity can we?
 
So does the definition of predictable abuse apply to the Bronx Zoo example of the moron jumping into the lion exhibit?

No.

In your world they should eliminate any possibility of a moron doing something stupid. Do they need to shut down the zoo?

No.

Do they need to perform psychological evaluations at the gate?

No.

I mean, we have to rid the world of every possible risk and danger right?

No.

We can't possibly accept some level of risk in order to better humanity can we?

Case-by-case basis. This is why we have a legal system, lawyers, and judges. In this case, it's up to NHTSA. We are a society with laws. I'm afraid you will have to accept that.
 
It's interesting to me that these types of videos never get surfaced here. There are as many of these on YT (if not more) as the "disaster/failure" videos.

If you want to hate Tesla and/or its features without even attempting to be objective then it's easy to do. It takes more guts to truly be objective.


I think the point that people have been making is that the first part of that Smart Summon run was extremely dangerous, because the view of the car's path was blocked by other cars and plants, meaning the responsible driver would have been unable to see obstacles such as a small child or pets being hit or worse. Just seeing part of the car does not mean you see everything it is about to hit or run over.

This indeed is the biggest fundamental problem with Smart Summon: there will always be (unless you hover above the car or run circles around it) blindspots created by the car itself — you can’t see what happens on the other side of it, a child slipping under rear wheels or getting squeezed between your car and another car — and of course other objects in the line of sight.

Mind you I am not personally against Smart Summon being released. I think it is cool at least technically that consumers can buy such a thing. But objectively speaking it has fundamental issues that are not easily solved: blocked view being the biggest one if it is operated from anywhere other than right there next to the car while moving to see around it all the time. The only solution to this flaw would be Level 3-5 car responsible driving and removal of the driver responsibility... which then would place extra reliability requirements on the car of course.

This is not an either-or thing. We can appreciate some progress while also noting fundamental issues.
 
Case-by-case basis. This is why we have a legal system, lawyers, and judges. In this case, it's up to NHTSA. We are a society with laws. I'm afraid you will have to accept that.
Agreed and the same applies to you. You are no more right or wrong than I am on this. We just have different views. The biggest difference is that you assert your views as fact and you go about it in a very aggressive and douchey way.
 
I just hope that Summon can truly help people with handicaps or limited mobility. Hopefully it will not be taken away because of stupid people.

The biggest question for me is: Can Smart Summon be used from a meaningful distance non-stupidly?

Because the way Smart Summon is set up, the driver responsible for the drive is always unable to see — from a distance — part of what is going on around the car. This in itself is a form of stupidity, if performed, because you should always have full awareness of what is happening around your car when it is moving. Yet the way Smart Summon is set up, if you are standing at a distance, you will not see the other side of your car, nor will you see through any obstacles blocking the line of sight...

...and what’s worse, there is no really good solution to this. Vigilance doesn’t help because you simply can’t see from a distance through your car and probably not around all other obstacles either. This setup introduces far more blind spots than any normal driving situation. The only thing that helps is removing the responsible driver from the equation, or being close to the car and moving around to monitor its movement. Perhaps camera views on the phone could help.
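To put rough numbers on why distance makes this worse, here is a back-of-the-envelope similar-triangles sketch. All heights and distances are invented for illustration, and `hidden_until` is just a hypothetical helper name, not anything from Tesla's software:

```python
# Back-of-the-envelope model of the blind spot behind the car.
# Numbers are made up for illustration; real sight lines also depend on
# terrain, car glass, and every other obstacle in the lot.

def hidden_until(eye_h, car_h, d_car, obj_h):
    """Distance from the operator (meters) out to which an object of
    height obj_h stays hidden behind the car's roofline.

    Assumes eye_h > car_h > obj_h and flat ground.
    """
    drop_per_m = (eye_h - car_h) / d_car   # how fast the sight line falls past the roof
    return (eye_h - obj_h) / drop_per_m    # where the sight line drops to obj_h

# Operator eye at 1.7 m, standing 20 m from a 1.45 m tall car,
# toddler 0.9 m tall: hidden from the far side of the car out to ~64 m.
print(round(hidden_until(eye_h=1.7, car_h=1.45, d_car=20.0, obj_h=0.9), 1))
```

The sight line over the roof flattens as the operator backs away, so the farther you stand from the car, the longer the shadow zone behind it where a small child stays completely invisible.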
 
Agreed and the same applies to you. You are no more right or wrong than I am on this. We just have different views.

If NHTSA ends up restricting or recalling Smart Summon, then I will have been much more right than you.

you go about it in a very aggressive and douchey way

Yeah, I get pretty douchey when people threaten the life of my child for a party trick. My bad.
 
meaning the responsible driver would have been unable to see obstacles such as a small child or pets being hit or worse. Just seeing part of the car does not mean you see everything it is about to hit or run over.

This indeed is the biggest fundamental problem with Smart Summon: there will always be (unless you hover above the car or run circles around it) blindspots created by the car itself

objectively speaking it has fundamental issues that are not easily solved: blocked view being the biggest one

The only solution to this flaw would be Level 3-5 car responsible driving and removal of the driver responsibility

Because the way Smart Summon is set up, the driver responsible for the drive is always unable to see — from a distance — part of what is going on around the car.

Vigilance doesn’t help because you simply can’t see from a distance through your car and probably not around all other obstacles either.

You are a wise man. I agree with all of the above.

(I feel like I've said essentially the same things 50 times in this thread already, but I guess I said them in a "douchey way" so they didn't count.)
 
I am not an engineer, software or otherwise, only a lawyer. But being a lawyer I do know my way around the structure of a particular argument.

This entire discussion is about the concept of “predictable abuse”. With smart summon the predictable abuse is not monitoring the car. With AP it’s falling asleep or paying less attention than you would otherwise. I would go so far as to suggest, without even trying to change anyone’s mind, that all discussions about Tesla’s roll out of Self Driving features are “predictable abuse discussions” - in many cases masquerading as a discussion about something else.

So far, and by that I mean up until today, actual data has overcome predictable abuse events. By “overcome” I mean regulators have concluded that there is not enough quantifiable predictable abuse to make the feature “unsafe overall.”

And that’s the key to the analysis. One instance of a feature leading to an accident is one data point. Knowing that an accident occurred in a car is not the same as knowing the CAUSE of the accident was the feature. And overall, the number of safe uses count. You can’t disregard all the safe uses, or safe miles, and argue the instances of predictable abuse outweigh them.

For the most part under US law, manufacturers of products are entitled to the presumption that the product they make will be used as intended. Especially in the case of dangerous products.
 
So far, and by that I mean up until today, actual data has overcome predictable abuse events.

Agreed, I think this is the reality of Autopilot. There are some modes of predictable abuse -- just like L1 cruise control -- but the body of evidence is that it's not so terrible.

I wonder how the data will eventually line up for Smart Summon. I assert that the number of accidents that occurred and were filmed during users' first attempts is statistically exceptional. I expect that most people will just stop using it, which I guess is some comfort.
 
I am not an engineer, software or otherwise, only a lawyer. But being a lawyer I do know my way around the structure of a particular argument.

This entire discussion is about the concept of “predictable abuse”. With smart summon the predictable abuse is not monitoring the car. With AP it’s falling asleep or paying less attention than you would otherwise. I would go so far as to suggest, without even trying to change anyone’s mind, that all discussions about Tesla’s roll out of Self Driving features are “predictable abuse discussions” - in many cases masquerading as a discussion about something else.

So far, and by that I mean up until today, actual data has overcome predictable abuse events. By “overcome” I mean regulators have concluded that there is not enough quantifiable predictable abuse to make the feature “unsafe overall.”

And that’s the key to the analysis. One instance of a feature leading to an accident is one data point. Knowing that an accident occurred in a car is not the same as knowing the CAUSE of the accident was the feature. And overall, the number of safe uses count. You can’t disregard all the safe uses, or safe miles, and argue the instances of predictable abuse outweigh them.

For the most part under US law, manufacturers of products are entitled to the presumption that the product they make will be used as intended. Especially in the case of dangerous products.

While what you say makes sense and I would assume has legal merit, I think my point for example is not a predictable abuse question.

My point is that Smart Summon cannot be used in a way where the responsible driver can see around the car from any meaningful distance. And the manufacturer does intend it to be used from a distance.

Which then means, even without any clear abuse, Smart Summon is operated much of the time partially blindly. In almost all videos of it, this is how it is done: partially blindly. At the very least the "far side" of the car is not seen by the responsible driver, so they cannot see if something gets under the car or squeezed between the car and another obstacle.

This makes Smart Summon very different from any other Autopilot control issue, where the responsible driver always has a normal level of situational awareness available.
 
Agreed, I think this is the reality of Autopilot. There are some modes of predictable abuse -- just like L1 cruise control -- but the body of evidence is that it's not so terrible.

On a tangent: It is interesting how this will translate to Automatic city driving at Level 2. While there the situational awareness issues are not different from Level 2 highway driving, the reaction times are. Compared to the highway, the responsible driver will have much less time to react to anything the car misses when using Level 2 Automatic city driving.

But yeah, Smart Summon is a whole different ballgame in the sense that the responsible driver can never see around the car from a distance so the car drives — from a responsibility point of view — partially blindly.
 
But yeah, Smart Summon is a whole different ballgame in the sense that the responsible driver can never see around the car from a distance so the car drives — from a responsibility point of view — partially blindly.

If I may summarize this view:

Most uses of Smart Summon, even by well-intentioned operators, put the car into an L3+ (car-responsible) ODD, because the operator generally cannot see all around the car.

Simultaneously, we know the car is not capable of L3+ operation, because it cannot reliably detect curbs, trash cans, garage walls, grass, reversing vehicles, stop signs, parking space lines, or semi trucks.
 
I agree that for those who expect final production quality, beta quality is quite disappointing.

Maybe Tesla owners didn't realize that they paid for a beta feature.

Worse yet, they may not realize that they paid for something that no one has ever been able to release commercially!

Ask Waymo: It's still hardly able to figure out a pick-up/drop-off coordinate correctly despite having a safety officer onboard!

Again, these are features that no one has proven could work! These features are just visions and hopes that they will work given enough time.

So, yes, these features can be called "garbage," but one person's garbage is another's treasure.

For those who love beta products, we can learn their limitations and adapt and personalize them to make them work for us.

In summary, I would embrace this "garbage" any time, and it's a joy!

Hey, maybe you purchased some beta features, but I sure as hell didn't. Here are the features that I purchased:

[Attached image: 2016.11.29.ModelSDesignStudio.jpg]

Look over the page and tell me where it says that EAP and FSD are beta features. All I see is the text about how the features are finished and ready to be deployed once the mythical regulators say it's good to go.

Your post just helps to prove my point that it's all about kicking the can down the road. Great, I'm glad that you love beta features. There are normal people just minding their business as they go about their day that didn't opt to be a part of the beta test for your 2 tons of metal. F*ck them though, amirite? Whatever it takes for The Mission, safety be damned!
 
...Look over the page and tell me where it says that EAP and FSD are beta features...

I do admit that it is difficult for an average consumer to realize that it's beta.

I knew Autopilot was beta when it was first announced in 2014 because people would pay for it first with the promise that it would be activated with subsequent releases of the software and not right away.

I then read the blog which reinforced my suspicion "Building on this hardware with future software releases, we will deliver a range of active safety features, using digital control of motors, brakes, and steering to avoid collisions from the front, sides, or from leaving the road."

Lots of people didn't read the word "future"! They bought something in 2014 that didn't work until "future"! That's not final production quality at all!

I then read the owner's manual, and it clearly says "beta" in there!

When AP2 was announced in 2016, it was the same thing. People bought it but couldn't use it. It's nowhere near final production quality at all!

The blog said: "Before activating the features enabled by the new hardware, we will further calibrate the system using millions of miles of real-world driving to ensure significant improvements to safety and convenience. While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control. As these features are robustly validated we will enable them over the air, together with a rapidly expanding set of entirely new features. As always, our over-the-air software updates will keep customers at the forefront of technology and continue to make every Tesla, including those equipped with first-generation Autopilot and earlier cars, more capable over time."

It said clearly that when you bought it, it wouldn't work. You had to have patience, because it says "more capable over time". These are hints that it's not final production quality at all!

When you ordered Enhanced Autopilot, it said it "has begun rolling out". "Begun" is not the same as finished. If it's not finished, it's nowhere near final production quality at all!:

[Attached image: Screen-Shot-2018-08-31-at-11.59.25-AM.jpg]

Photo credit: electrek.co


In summary, people may not know that they just bought a beta feature, but the clues are there. The legal document is there!
 