
A hint into Model Y's future

During today's shareholder meeting, Elon teased a new image of the Model Y and said there may not be a steering wheel.

“Model Y will not have any leather in it, even in the steering wheel… even if it does have a steering wheel.”

Also, he's sticking with the March reveal.


Tesla releases new image of Model Y electric CUV, Musk jokes about it having no steering wheel
Impossible.

It will most certainly have a steering wheel. Government and federal regulators aren't going to vote in FSD before the Model Y comes out.

They have been preoccupied for the past 1.5 years.
 
I’ll go beyond that and say Tesla won’t even have FSD before the Model Y comes out.
It’s not a knock on Tesla; it’s just a difficult technology that requires perfection, which is going to take time, whether Elon likes it or not.
 
I'm not a Fed official, but from a transportation professional's standpoint, if you think FSD is within grasp in two years, I think you will be very disappointed. What I do see happening in the next two years is pre-defined routing (think bus routes) for self-driving. There are way too many variables that software developers haven't even thought about, which we as humans handle subconsciously.
 
I’ll go beyond that and say Tesla won’t even have FSD before the Model Y comes out.
It’s not a knock on Tesla; it’s just a difficult technology that requires perfection, which is going to take time, whether Elon likes it or not.
I agree with you 100%.

I would go a step further and say that FSD will not be possible, because moral issues come into play that HUMANS do a remarkable job at without even having to think.

IMHO, FSD would work if there were no issues concerning driving. However, driving is equally about when things don't go right.

For instance: a dog runs out in front of a car. The car has a choice between hitting the dog and hitting a tree. Who is going to program in that moral decision?
A human runs out into the street. The car has a choice of hitting the human or hitting a dog. Who programs that morality into the car?

For some reason that only God himself knows, humans seem to be able to react appropriately in the majority of these situations.

FSD, even if technically possible, won't be able to be programmed to respond to decisions relating to value and moral correctness.
 
You have a point about moral intuition, but... how many dogs or humans have leapt in front of the average driver? On the other hand, how many errors has the average driver made that have resulted in an accident a machine wouldn’t have made? If an AI is trained by watching human reactions when humans or dogs leap in front of cars, then it doesn’t need to “know” about moral decisions, but it will emulate the human decision anyway. It also isn’t subject to silly distractions the way so many drivers are.

In some number of years (whenever that is), the humans will still have more heart but the machines will crash less.
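
For the curious, here's roughly what "training by watching humans" means in practice: a toy behavior-cloning sketch in Python. Everything here (the features, the synthetic data, the linear model) is made up for illustration; real systems use neural nets and fleet data, not a least-squares fit.

```python
# Hedged sketch of "behavior cloning": learn to copy recorded human
# reactions rather than encoding moral rules. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged features per frame: [obstacle_distance_m, closing_speed_mps]
X = rng.uniform([1.0, 0.0], [50.0, 30.0], size=(1000, 2))

# Hypothetical human labels: brake pressure in [0, 1] -- harder when the
# obstacle is close and closing fast. The model never sees a "moral rule",
# only what the humans did.
human_brake = np.clip(X[:, 1] / (X[:, 0] + 1e-3) / 5.0, 0.0, 1.0)

# Fit a linear imitator with least squares (a stand-in for a neural net).
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, human_brake, rcond=None)

def imitated_brake(distance_m, closing_mps):
    """Predict brake pressure the way the logged humans tended to react."""
    return float(np.clip(w @ [distance_m, closing_mps, 1.0], 0.0, 1.0))

print(imitated_brake(5.0, 20.0))   # close, fast-closing obstacle -> brake hard
print(imitated_brake(45.0, 2.0))   # far, slow -> barely brake
```

The point of the sketch: the model ends up reacting like the humans it watched without anyone ever writing down a moral rule.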
 
I agree with what you are saying; however, I would like to see the first court case over an immoral FSD accident.

Who would the judge question?

Who holds the liability in an FSD situation?

Even if FSD is flawless in its performance, there is absolutely no way its decision making can be trusted in a non-programmed situation. How can you program in ALL situations? It can't happen.
 
You can’t. Even Elon says so:

“The real trick is not how you make it work 99.9% of the time. If a car crashes, say, 1 in 1000 times, then you’re probably still not going to be comfortable falling asleep… it’s never going to be perfect, but if the car is unlikely to crash in 100 lifetimes or 1000 lifetimes… then that’s probably ok.”
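
To put rough numbers on that quote, here's a quick back-of-the-envelope in Python. The 50,000 trips per driving lifetime is an assumption for illustration, not a Tesla figure.

```python
# Rough reading of the Musk quote above, with assumed numbers:
# a "driving lifetime" of ~50 years x ~1,000 trips/year = 50,000 trips.
trips_per_lifetime = 50 * 1_000  # assumption for illustration only

for per_trip_crash_prob in (1e-3, 1e-7, 1e-8):
    # Chance of at least one crash somewhere in a lifetime of trips.
    p_lifetime = 1 - (1 - per_trip_crash_prob) ** trips_per_lifetime
    # Expected number of lifetimes between crashes.
    lifetimes_per_crash = 1 / (per_trip_crash_prob * trips_per_lifetime)
    print(f"p(crash)/trip = {per_trip_crash_prob:.0e}: "
          f"p(crash)/lifetime = {p_lifetime:.1%}, "
          f"about 1 crash per {lifetimes_per_crash:g} lifetimes")
```

With these assumptions, "works 99.9% of the time" per event means dozens of crashes in a lifetime, while a per-trip risk on the order of 1e-8 is what "1000 lifetimes" works out to.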
 
True.

What I believe the hurdle is going to be is determining who is to blame for the 0.1%. I pay insurance for the 0.1% of my driving time where I might have an accident.

You have to account for that 0.1%, because the 99.9% of successful driving isn't where a death or injury can occur. One death, no matter what percentage it is a part of, is too much, whether it's human-caused or, God forbid, Autopilot.
 
I agree with what you are saying; however, I would like to see the first court case over an immoral FSD accident.

Who would the judge question?

Who holds the liability in an FSD situation?

Even if FSD is flawless in its performance, there is absolutely no way its decision making can be trusted in a non-programmed situation. How can you program in ALL situations? It can't happen.

Basically, there needs to be an agency dedicated to monitoring and investigating AV accidents. If they happen too often for one OEM, the agency needs to step in.

Other than that, the damages and so on will all have to be paid by the OEM anyway. Once we go beyond Level 2, there will be a much greater incentive to actually release something that works.
 
No, there doesn't need to be an agency. We have too many agencies. We already have a Department of Transportation; they just need to step up and do their job.

People get covered by insurance, not OEMs. EAP gets used by people, who are insured. My insurance company is not insuring Tesla's EAP at all. They insure me in case MY EAP malfunctions.
 
For instance: a dog runs out in front of a car. The car has a choice between hitting the dog and hitting a tree. Who is going to program in that moral decision?

These sorts of issues are overstated. Unless Musk invents a time machine (announced Q4 2018?), the car doesn't know the dog's future actions. In all these sorts of scenarios, what the car will do is panic brake while staying on the roadway. That choice will be good enough to avoid excessive liability, which is what manufacturers want.
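
Here's what that "panic brake, stay in lane" rule might look like, sketched as plain Python. This is a hypothetical toy policy, not anything Tesla ships; the 2-second threshold and the field names are invented for illustration.

```python
# Hedged sketch of the "panic brake while staying on the roadway" policy
# described above: a plain rule, not an actual planner. Thresholds are made up.
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_in_path: bool      # anything (dog, person, debris) ahead in lane
    time_to_collision_s: float  # estimated; float('inf') if no obstacle

def plan(p: Perception) -> dict:
    """No moral weighing: brake as hard as needed and hold the lane."""
    if p.obstacle_in_path and p.time_to_collision_s < 2.0:
        return {"brake": 1.0, "steer": "hold_lane"}   # panic brake
    if p.obstacle_in_path:
        return {"brake": 0.3, "steer": "hold_lane"}   # precautionary slow-down
    return {"brake": 0.0, "steer": "hold_lane"}

print(plan(Perception(True, 0.8)))   # dog darts out -> full braking, no swerve
```

The design choice the sketch illustrates: the car never chooses between targets, so there is no trolley problem to program, just a conservative rule that is easy to defend.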