FSD very far away due to regulations?

How about Rodney Brooks from MIT who says:
"
There are quite a few people out there who’ve said that AI is an existential threat: Stephen Hawking, astronomer Royal Martin Rees, who has written a book about it, and they share a common thread, in that: they don’t work in AI themselves. For those who do work in AI, we know how hard it is to get anything to actually work through product level.

Here’s the reason that people – including Elon – make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning. [But they shouldn’t.]
...
"


// founding OpenAI is great by the way

You think people whose lives and careers are entirely dependent on AI will come out and say it could be an existential threat and that we'd better control it? That day will come when fossil fuel companies admit global warming is a threat and that we need to limit ICE car sales. Not that they are all dishonest, but people tend to want to believe what is beneficial to them and not otherwise. The Zuckerberg story I mentioned is just one example. Facebook is deeply involved in AI, and he did not want people to have a bad impression of it. You don't believe there is a potential for out-of-control danger at Facebook alone?

Elon understands AI. BTW, he's very honest in warning us about AI even when his companies are heavily involved in it. He does not need to actually program machines to know exactly how it works, although he's a pretty decent coder too. All he needed to see is how fast technology has progressed in our lifetime. He mentioned that the Atari Pong game came out forty years ago and is so primitive compared to computer games today. Can you imagine what simulation will be like in another forty years? That's why he's saying there is a likelihood that everything we see is just an ancestor simulation run by future humans (or non-humans). Far out, right? But ALL scientists agree that there is no way we know of at this moment to prove that's not the case.

Going back to machines: our brain has nearly 100 billion neurons. The most advanced chips can now pack that many transistors too. Chips are still following Moore's law, but our brains will not grow bigger. You don't believe that in a few decades, if not a few years, computers will be able to do most everything we humans can do, and more? Try not to be so arrogant about how good we are.
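To make that comparison concrete, here is a toy back-of-the-envelope sketch of the scaling argument. The starting transistor count, the two-year doubling period, and the neuron figure are all illustrative assumptions, not measured data:

```python
# Toy sketch: a fixed neuron count vs. an idealized Moore's-law doubling.
# Assumptions (illustrative only): ~100 billion neurons, a large present-day
# chip with ~50 billion transistors, and a doubling every two years.
NEURONS = 100e9
transistors = 50e9

for years_out in range(0, 41, 10):
    projected = transistors * 2 ** (years_out / 2)   # one doubling every 2 years
    print(f"+{years_out:2d} years: ~{projected:.1e} transistors vs {NEURONS:.1e} neurons")
```

Under those assumptions, the transistor count passes the 100-billion-neuron mark within the first doubling or two and sits orders of magnitude beyond it within a couple of decades, which is the point the paragraph above is gesturing at.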
 
Elon is a founder of OpenAI, which has many products with the goal of benefiting society instead of doing harm. His view on the danger of AI is shared by many great scientific minds, including Stephen Hawking and Neil deGrasse Tyson. Once he responded to Mark Zuckerberg's comment that his view is irresponsible by tweeting "I've talked to Mark about this. His understanding of the subject is limited.", which actually silenced Zuckerberg.

As in my post to R.S below, Elon has made some of the most important contributions to the world, even if he's not perfect. The last paragraph in your post, which is filled with hate, is nothing short of amazing.

No, it didn't. Zuck doesn't have time for petty arguments; no one does. Mark literally called him clueless and laughed at his statement about terminators by 2019 in his FB Live on July 23rd. That's all he needed to do. I'm not surprised by Elon's later tweet. This is coming from a guy who said we would have terminators by 2019. I love how you avoid his AGI-terminators-by-2019 statements. This is what Elon does: when you call him out on the stupid sh*t he says, he tries to insult you. Same thing happened with the "pedo" guy. This is Elon being Elon: self-centered, egotistical, a pathological liar.

But I can't wait for you to come back and say his 2019 statements don't mean anything, he's a visionary, blah blah. Just another excuse to pile onto his other statements.
 
You don't believe that in a few decades, if not a few years, computers will be able to do most everything we humans can do, and more? Try not to be so arrogant about how good we are.
Elon said it would happen this year, 2019: full human-level AGI, sentient singularity robots. So tell me, where are they? Or is it that he's a visionary, so he gets to say stupid *sugar* like this for attention, get all the media articles, and get a pass when it doesn't happen?
 
How about Rodney Brooks from MIT who says:
"
There are quite a few people out there who’ve said that AI is an existential threat: Stephen Hawking, astronomer Royal Martin Rees, who has written a book about it, and they share a common thread, in that: they don’t work in AI themselves. For those who do work in AI, we know how hard it is to get anything to actually work through product level.

Here’s the reason that people – including Elon – make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning. [But they shouldn’t.]
...
"


// founding OpenAI is great by the way

I do agree with you, but it's kind of ironic timing in that OpenAI has decided not to publicly circulate an AI-based text generation tool that enables researchers to spin convincing but fabricated, machine-written articles.

Too scary? Elon Musk's OpenAI company won't release tech that can generate fake news

My own opinion is that Elon was so worried about what AI would do to us that he forgot about the much more immediate threat of what we would do with AI.

He also tends to get his name attached to things that only very loosely involve him. Like years ago he talked about us living in a simulation, but that was water-cooler talk in Silicon Valley for years before Musk ever said anything.

Another example is that the article I linked to refers to OpenAI as if it's his company, but he has recently distanced himself, as there was an obvious conflict of interest.

elon-musk-stepping-down-from-openai-board-artificial-intelligence

Sure he co-founded OpenAI, but a lot more people/companies are involved in it.
 
I do agree with you, but it's kind of ironic timing in that OpenAI has decided not to publicly circulate an AI-based text generation tool that enables researchers to spin convincing but fabricated, machine-written articles.

Too scary? Elon Musk's OpenAI company won't release tech that can generate fake news

My own opinion is that Elon was so worried about what AI would do to us that he forgot about the much more immediate threat of what we would do with AI.

He also tends to get his name attached to things that only very loosely involve him. Like years ago he talked about us living in a simulation, but that was water-cooler talk in Silicon Valley for years before Musk ever said anything.

Another example is that the article I linked to refers to OpenAI as if it's his company, but he has recently distanced himself, as there was an obvious conflict of interest.

elon-musk-stepping-down-from-openai-board-artificial-intelligence

Sure he co-founded OpenAI, but a lot more people/companies are involved in it.

“Elon Musk's OpenAI company won't release tech that can generate fake news”

This is a game changer!!!! Now I know where all of the tweets and voiceovers on the earnings calls promising the undelivered features of our cars have been coming from......
 
I do agree with you, but it's kind of ironic timing in that OpenAI has decided not to publicly circulate an AI-based text generation tool that enables researchers to spin convincing but fabricated, machine-written articles.

Too scary? Elon Musk's OpenAI company won't release tech that can generate fake news

My own opinion is that Elon was so worried about what AI would do to us that he forgot about the much more immediate threat of what we would do with AI.

He also tends to get his name attached to things that only very loosely involve him. Like years ago he talked about us living in a simulation, but that was water-cooler talk in Silicon Valley for years before Musk ever said anything.

Another example is that the article I linked to refers to OpenAI as if it's his company, but he has recently distanced himself, as there was an obvious conflict of interest.

elon-musk-stepping-down-from-openai-board-artificial-intelligence

Sure he co-founded OpenAI, but a lot more people/companies are involved in it.

Here's what researchers actually think about OpenAI's publicity stunt.

[Discussion] Should I release my MNIST model or keep it closed source fearing malicious use? : MachineLearning

[Discussion] OpenAI should now change their name to ClosedAI : MachineLearning
 

Is that what researchers actually think? If I were part of the group I would be joking about it as well, since it came across as way too alarmist. I didn't see anything in the examples that alarmed me to the point where I felt like it should be kept closed. But I'm not a researcher. I simply use/adapt what they come up with in the stuff I experiment with. I think calling the people in a reddit thread "researchers" is stretching it a bit. :)

Sure, some of them might be, but most are probably like me, where we use it as a tool. Of course we're going to hate on people taking the tools away and hiding them.

I'm more fearful of what the Chinese government is doing with AI than of something that writes convincing fake news, as we already have convincing fake news from people who believe in "alternative" facts.

OpenAI also didn't say that they weren't ever going to release everything, but that they were going to hold off and discuss it a bit more. Here is exactly what was said: "OpenAI NLP Team Manager Daniela Amodei meanwhile told Synced their team consulted a number of respected researchers regarding the decision, and received positive responses... OpenAI says it will evaluate the results in six months to determine their next step."

Mostly I thought it was kind of amusing as this was a decision by researchers. This wasn't an Elon Musk decision that I'm aware of.

The "OpenAI should now change their name to ClosedAI" was an interesting thread. My takeaway was that it was a lot more about how OpenAI operates in general than this specific example. Like the idea that this might just be another PR Stunt.

I liked this comment thread a lot because it covered both sides of the issue.

[Discussion] OpenAI should now change their name to ClosedAI : MachineLearning
 
You might think he is bragging, but that's only because you can't see the physics and the path he does see. Don't forget that among all the executives of auto or even tech companies, he's also the one who has the closest contact with, and the most understanding of, AI and machine learning. Give him time and he will lead us there. I'm an optimist, not to mention a physicist, and I see no reason why we could not get there.

Maybe if you -- or Elon -- were a roboticist, then your credentials would be relevant. Also, stating that you are an optimist rather erodes the strength of your argument, don't you think? Optimism by definition sort of indicates that your belief in the outcome is not strongly supported by the evidence. If it were, that would make you simply a realist.
 
Maybe if you -- or Elon -- were a roboticist, then your credentials would be relevant. Also, stating that you are an optimist rather erodes the strength of your argument, don't you think? Optimism by definition sort of indicates that your belief in the outcome is not strongly supported by the evidence. If it were, that would make you simply a realist.

No. Optimists look at positives instead of negatives. Optimists do things. Pessimists only make dire predictions. They may be right in the moment only because things are not easy to do. Without optimists, they would be forever 100% right.

A good example is to look at what optimists and pessimists said a decade ago about EVs, among many other things.

Really, OpenAI has "many products"? Can you please name one?

OpenAI - Wikipedia
 

Releasing a research tool is NOT a PRODUCT! A product is something you can sell, market, and make money or create revenue off of. I'm not surprised that you are also trying to warp the definition of what a product is. The very reason Google sold Boston Dynamics is that they couldn't productize it in the short term.

Publishing research is NOT a product. Releasing a research tool is NOT a product. Every single research team at every university and every private or public company, and even individuals, does that.

DeepMind, unlike OpenAI, is not a publicity stunt. Their research achievements are actually used in Google's products. DeepMind research led to the creation of Google's Night Sight, to WaveNet being applied to Google's Voice Assistant (TTS and V2T), to an evolution of WaveNet being used to create Google Duplex, and to DeepMind's research being applied to Waymo's SDC. I could go on all day and list countless pieces of DeepMind research that have DIRECTLY led to actual PRODUCTS.

Name us one from OpenAI? Just one!
 
No. Optimists look at positives instead of negatives. Optimists do things. Pessimists only make dire predictions. They may be right in the moment only because things are not easy to do. Without optimists, they would be forever 100% right.

A good example is to look at what optimists and pessimists said a decade ago about EVs, among many other things.

First, this is a self-serving definition of "optimist" and a strawman definition of "pessimist". And you left out a rather critical term in all of this, which is "realist" -- a person who is neither optimistic nor pessimistic, but simply makes a sober -- and necessarily probabilistic rather than certain -- assessment of reality based on available evidence.

But even accepting your definition -- the opinion of a person who looks only at positives (confirmatory evidence) and not at negatives (contradictory evidence) cannot be relied upon. That kind of optimist may be useful for motivating people to action, but there is no good reason to believe that the action they're motivating people toward makes any sense whatsoever, or will ultimately have positive outcomes -- unless there is evidence or good argument to support their beliefs when considering both confirmatory and contradictory evidence, in which case optimism is not required because the belief is supported by reason.

Basically your definition of optimist boils down to "fanboy" and/or "charismatic leader", and neither of those categories should be given any credibility in an argument.
 
Wrong again. I'm not talking about blind optimism but optimism based on true understanding. In an airplane flying through severe turbulence, the pilot is always the optimist and the people sitting in the back, especially the first-time flyers, are always the pessimists. This works out fine, though, unless a passenger tries to take over control of the plane.
 
Wrong again. I'm not talking about blind optimism but optimism based on true understanding. In an airplane flying through severe turbulence, the pilot is always the optimist and the people sitting in the back, especially the first-time flyers, are always the pessimists. This works out fine, though, unless a passenger tries to take over control of the plane.

I know what you're talking about, though I wouldn't use the word "optimism" for it. This is that thing you need in your gut and your heart to get you through short-term stressful, scary, and dangerous situations. It's what you need if you're a pilot, a firefighter, a soldier, or a stand-up comedian.

It's completely irrelevant to making decisions and assessments where you can take your time and do it without emotion at all. It's not the way for a leader to make big decisions -- you shouldn't use it when investing or making purchasing decisions either. And it's not the way you should do engineering or science.
 
I know what you're talking about, though I wouldn't use the word "optimism" for it. This is that thing you need in your gut and your heart to get you through short-term stressful, scary, and dangerous situations. It's what you need if you're a pilot, a firefighter, a soldier, or a stand-up comedian.

It's completely irrelevant to making decisions and assessments where you can take your time and do it without emotion at all. It's not the way for a leader to make big decisions -- you shouldn't use it when investing or making purchasing decisions either. And it's not the way you should do engineering or science.

Agreed. "Optimist" and "pessimist" may not be the best words to describe those. "Confidence" and "fear out of ignorance" may be better descriptions. Some accuse Elon of hyping things up, but he says those things because he can see them through and is confident of it. So are pilots, since they are the ones who know how things will work out. Those shouting that the sky will fall do so only out of ignorance. It's easy to think things will fail when you have no idea how they could work.
 
I have been hearing excuses for over 2 years. The bigger question is when people will start the next class action case over taking our money and not delivering the FSD feature. In October 2019 many will be coming up on 3 years since purchasing this feature; I picked up my October-ordered vehicle in December 2016. How many people leased vehicles and will be returning them without ever getting any use out of a feature they were charged for?
 
I have been hearing excuses for over 2 years. The bigger question is when people will start the next class action case over taking our money and not delivering the FSD feature. In October 2019 many will be coming up on 3 years since purchasing this feature; I picked up my October-ordered vehicle in December 2016. How many people leased vehicles and will be returning them without ever getting any use out of a feature they were charged for?


I did not opt out of the EAP class action lawsuit, BUT if one is started for those of us who pre-paid for FSD on our HW2.0 cars, I may very well opt out of that one and make my own claim. I will not accept a couple of hundred dollars in lieu of FSD if that's what it comes down to.