Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Deer and autopilot

Moral-dilemma problems are objective and can easily be described mathematically, but their outcomes are subjective at best.

What does that have to do with whether it can be duplicated by a computer program? The point isn't that computers won't always pick the option that every human agrees with when those humans don't actually agree among themselves; that is obviously impossible. However, once humans agree, what stops a computer from matching that agreement?

Thank you kindly.
 
What does that have to do with whether it can be duplicated by a computer program? The point isn't that computers won't always pick the option that every human agrees with when those humans don't actually agree among themselves; that is obviously impossible. However, once humans agree, what stops a computer from matching that agreement?
Thank you kindly.

Exactly, we agree; you are reiterating my point. As humans, we'd have to assign moral problems and their subjective outcomes mathematical weight before they can be duplicated by a computer program. Once that happens, it should be trivial for the program to, at minimum, follow a certain set of rules or a trained neural network.

For a human or a computer, proving correctness is impossible, since even humans can't agree on what "correct" is. Once we pin down "acceptable," we'll all be fine.
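To make the "mathematical weight" idea concrete, here's a toy Python sketch. The harm weights are invented placeholders for illustration, not any agreed-upon standard; the point is only that once the weights exist, picking the least-bad option is a trivial comparison.

```python
# Toy sketch: once humans agree on outcome weights, choosing becomes
# a simple minimisation. The weights below are made up for illustration.
HARM_WEIGHTS = {"human": 1000, "deer": 10, "squirrel": 1, "nothing": 0}

def least_harmful(actions):
    """Pick the (action, outcome) pair with the lowest agreed-upon harm."""
    return min(actions, key=lambda pair: HARM_WEIGHTS[pair[1]])

# Swerving would hit a deer; braking straight would hit a squirrel.
choices = [("swerve", "deer"), ("brake", "squirrel")]
print(least_harmful(choices))  # -> ('brake', 'squirrel')
```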
 
Exactly, we agree; you are reiterating my point. As humans, we'd have to assign moral problems and their subjective outcomes mathematical weight before they can be duplicated by a computer program. Once that happens, it should be trivial for the program to, at minimum, follow a certain set of rules or a trained neural network.

In which case, I am completely at a loss for why you would say something like:

Consider the problem NP-hard.

If the problem is trivial (as you claim), then it can't be NP-hard. If it is an intractable social problem, we have no idea of its bounded algorithmic complexity.

Complexity aside, since we are talking about deep learning, all we really have to do is give it examples. If we generally try to avoid animals, so will the car. If we sometimes hit animals to avoid hitting humans, so will the car. The trouble comes when we teach it bad behavior.

Thank you kindly.
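Since the argument is "just give it examples," here's a minimal sketch of that idea using a 1-nearest-neighbour rule. The features and labels are invented for illustration; a real system would use a trained network, but the principle is the same: the machine copies whatever behavior the examples demonstrate.

```python
# Minimal "learn from examples" sketch: a 1-nearest-neighbour rule
# simply reproduces whatever behaviour the training examples show.
def nearest_label(examples, query):
    """Return the label of the training example closest to the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: dist(ex[0], query))
    return label

# Hypothetical features: (obstacle_size, is_human) -> taught behaviour.
taught = [
    ((0.2, 0), "brake"),   # small animal: we brake for it
    ((0.9, 1), "swerve"),  # person: we swerve to avoid
    ((0.2, 1), "swerve"),
]
print(nearest_label(taught, (0.25, 0)))  # -> 'brake', copying what we taught
```

If the examples encode bad behavior, the rule copies that just as faithfully.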
 
What does that have to do with whether it can be duplicated by a computer program? The point isn't that computers won't always pick the option that every human agrees with when those humans don't actually agree among themselves; that is obviously impossible. However, once humans agree, what stops a computer from matching that agreement?

Thank you kindly.
A fixed algorithm cannot deal with a limited number of possible decisions and an infinite number of situations without the risk of an infinite loop; that is something only learning machines can defeat.
 
A fixed algorithm cannot deal with a limited number of possible decisions and an infinite number of situations without the risk of an infinite loop; that is something only learning machines can defeat.

Actually, that is simple: a program which merely adds one to its input does exactly that. Please stop spouting nonsense about what computers can't do.

Thank you kindly.

p.s. we do have learning machines.
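The add-one counterexample made literal: a fixed algorithm with a single decision that handles an unbounded set of inputs and cannot loop forever, because it contains no loop at all.

```python
def add_one(n: int) -> int:
    """A fixed algorithm: one decision, infinitely many inputs, no loop."""
    return n + 1

# Works for any input, large or small, and terminates every time.
for x in (0, 41, 10**100):
    print(add_one(x))
```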
 
My claim would certainly be the opposite. You might get a consistent answer... That doesn't mean the result wouldn't be consistently incorrect.

You lost me. If you tell me the dilemma, and the resolution of that dilemma, and I model it with a Turing machine, you are saying that the results won't match, or that they will match but that will somehow be incorrect?

The computer doesn't know it is a moral dilemma. It treats it just like a chess problem, which computers are better at than almost all humans. How is sacrificing a rook to save a queen different from sacrificing a squirrel to save a baby?

Thank you kindly.
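The chess analogy can be made literal: both trades reduce to comparing agreed numeric values. The piece values below are the conventional chess ones; the "moral" weights are invented placeholders, since (as this thread shows) no agreed standard exists.

```python
# Standard chess piece values vs. made-up moral weights: the comparison
# logic is identical either way.
CHESS = {"rook": 5, "queen": 9}
MORAL = {"squirrel": 1, "baby": 1000}  # assumed weights, not a real standard

def worth_sacrificing(values, lose, save):
    """Sacrifice `lose` to save `save` iff what we keep is worth more."""
    return values[save] > values[lose]

print(worth_sacrificing(CHESS, "rook", "queen"))     # -> True
print(worth_sacrificing(MORAL, "squirrel", "baby"))  # -> True
```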
 
Actually, that is simple: a program which merely adds one to its input does exactly that. Please stop spouting nonsense about what computers can't do.

Thank you kindly.

p.s. we do have learning machines.

If you have a gas-powered car made in the last 10 years, then yes, you probably do have a simple learning machine that shares what it learns with other systems.

"Thank you" is a term used for acknowledging a kindness. I'm not sure you're using it right (it seems to be in every post you make) when you are trying to talk down to the readers.
 
Actually, that is simple: a program which merely adds one to its input does exactly that. Please stop spouting nonsense about what computers can't do.

Thank you kindly.

p.s. we do have learning machines.

Fifty years later, can your computer, on which millions of hours of development time have been spent, determine that a problem exists when the problem's characteristics are completely undefined? If so, WTF is that blue screen thingy? And why do you reboot your car or cellphone?

Without reflashing, does your car, computer, or cellphone learn to defeat a problem without knowing the problem? Humans can. So can a learning machine programmed correctly.
 
Fifty years later, can your computer, on which millions of hours of development time have been spent, determine that a problem exists when the problem's characteristics are completely undefined? If so, WTF is that blue screen thingy? And why do you reboot your car or cellphone?

Without reflashing, does your car, computer, or cellphone learn to defeat a problem without knowing the problem? Humans can. So can a learning machine programmed correctly.
Machine learning is not magic, and it is still not a general-purpose agent; that's what DeepMind and OpenAI are all about. Windows, Android, and even iOS have several machine-learning algorithms built in, and this doesn't make them perfect.

These algorithms' ability to learn and generalize isn't nearly as flexible as a human brain's, and there's a lot of research happening to try to bridge the gap.

There is a misconception in the general public as to what AI is and is not capable of. When a problem requires a series of steps to be done correctly in order to be solved, AI has a very difficult time with it and does NOT currently generalize well. DeepMind has come pretty far with this issue but has yet to claim "success" on all fronts.
 
I have bumped two deer and had a near miss with another one but no damage on any of these. The Model S brakes are fantastic and will stop you on a dime if you are alert and not afraid to fully deploy them. That is what has saved me from large repair bills so far. I also drive with fog lights on to see what is coming at the car from the side. Have spotted a lot of deer this way too. The high beams are also great for helping you see deer. Here a deer, there a deer, everywhere a deer deer.
 
Thank you is a term used for acknowledging a kindness. Not sure you're using it right (seems to be in every post you make) when you are trying to talk down to the readers.

I am acknowledging a kindness. Namely your participation in this forum. Doesn't make you any less wrong (or me less obligated to point it out). Nor does you being wrong reduce the kindness you are doing by being here.

Thank you kindly.