Welcome to Tesla Motors Club

Are you with Musk or Hawking on AI

Came across this interesting article about advances in AI:

Now AI is beating us at our favorite video games

Here we've got an AI research organization that's built an AI that collaborates with other AIs. The context is team-based multiplayer video game combat (5-on-5 competition). To be successful in this sort of game it's not enough to have a team of the 5 very best soloists - a team of 5 individually weaker competitors who are working well together and supporting each other will mop the map with the soloists (at least, that's been my experience with this style of multiplayer combat in other games, not DOTA).

I assume, as a technical detail, that the AI team is implemented as 5 AIs - either 1 AI instantiated 5 times for the 5 players, or 5 similar-but-different AIs, each controlling a player. So the AIs aren't communicating with each other "out of band" (outside the game), but instead are relying on game chat and mutual behavior to communicate tactics and strategy. Or maybe they've got the equivalent of Ventrilo / Teamspeak (I date myself) / etc. - as a human team would have - so that they can talk and coordinate tactics / strategy as the game progresses.

In any case, making choices that are sub-optimal for an individual working on their own, in order to create the possibility of a better global outcome for the team - that's an impressive objective function to solve :)
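That trade-off can be sketched as a simple reward blend. This is purely my own illustrative toy, not OpenAI's actual code - the `team_spirit` name is borrowed from how OpenAI has reportedly described a similar knob, and the weights and numbers here are made up:

```python
# Hypothetical sketch: blending individual and shared team rewards so each
# agent is incentivised to accept locally sub-optimal moves when they
# improve the team's outcome. All names and values are illustrative.

def blended_reward(individual_rewards, team_reward, team_spirit=0.8):
    """Return each agent's effective reward.

    team_spirit = 0 -> pure soloist (only own reward counts);
    team_spirit = 1 -> fully cooperative (only the shared outcome counts).
    """
    return [
        (1 - team_spirit) * r_i + team_spirit * team_reward
        for r_i in individual_rewards
    ]

# One agent racked up personal score (e.g. kills), another sacrificed
# itself to win a team objective worth 10 points. With high team_spirit,
# the self-sacrificing agent still comes out nearly as well rewarded.
print(blended_reward([5.0, -2.0], team_reward=10.0, team_spirit=0.8))
```

With `team_spirit` near 1, an agent that dies securing the objective scores almost as well as the star player, which is exactly the pressure toward cooperative play described above.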
 
I have the sense that it's like nuclear tech back in the day: people realized it was terrifying in its implications, but figured it was inevitable, so they wanted to be the ones to develop it rather than leave it only in the hands of others - it became a self-fulfilling prophecy.

Concerning AI, simply warning people to not develop it will not only be fruitless, but will probably hasten its development.
 
People way, way overestimate the speed of AI development. Additionally, people believe silly and strange things, like the idea that building something complex and intelligent can somehow be made easier and easier.

There is already complex intelligence capable of self-improvement on this planet... yet it has never achieved the singularity (or whatever is supposed to happen).
 
One that can rapidly increase its processing power and data accumulation rate, speeding up its own evolution?
There are several problems with this answer.
1. It is a non sequitur. "Processing power" and "data accumulation rate" alone won't do anything. Connect all the supercomputers in the world to the internet with a big fat pipe and you will get a whole lot of nothing, since those supercomputers will not know what to do with that data.
2. How exactly will your entity "rapidly increase its processing power and data accumulation rate"? I sincerely hope you don't think an AI can create the needed hardware out of thin air.
3. What does "evolution" even mean in this context? That fabled idea of runaway self-improvement?
 
1. At some level of development it will know what to do with that data.
That's the biggest problem with any prospective AI: software, not hardware. I do not see solutions any time soon, let alone solutions for "superintelligence".

2. Not out of thin air, of course, but it may be able to gain access to previously secured computers.
That is pretty far from the "rapidly increase its processing power and data accumulation rate" claimed before.

3. Improvement of its abilities.
I see the kind of improvement that is talked about in transhumanist communities as extremely unlikely. It is just flat-out assumed, without any evidence or justification, that making more and more complex intelligences becomes easier and easier.
 
The following TED talk describes a very interesting use of AI technology:

A bold idea to replace politicians

I have always felt that with universal availability of the Internet, direct democracy would be possible. The main problem, of course, is dealing with too many bills to vote on - most people would not have time to read / study / consider all the issues tabled for votes. However, a personal AI assistant trained on each person's preferences / views could be used for this purpose. I really like the idea!
 
Interesting.....very interesting.....I like the idea as well. Who would control the data we set up at the beginning? Which company - or would it be the gov't - houses the data? How would it not get manipulated?.....hmm....
 

IMHO, the more decentralized, the better. I could imagine everybody keeping their own data on their own device (computer / tablet / phone), and I could even imagine multiple competing commercial apps for the AI assistant - it could be a feature of Siri / Google Assistant, etc.

ps: the system could also be set up to always allow manual human input and to continuously learn / improve its training based on it. Each new bill would have a time limit, say 2 weeks after posting, to be voted upon, during which you could manually review and vote on anything. The AI would be programmed to vote on outstanding bills on the last day before they expire if you have not voted on them manually.
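The fallback rule above is simple enough to sketch. This is a made-up toy, not any real system: `predict_vote` is a placeholder standing in for whatever preference model the assistant would actually be trained on, and the 14-day window comes straight from the post:

```python
# Hypothetical sketch of the voting-assistant fallback: a manual vote
# always wins; once the review window closes without one, the trained
# assistant casts the vote instead.

from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(days=14)  # "say 2 weeks after posting"

def predict_vote(bill_text):
    # Placeholder preference model; a real assistant would be trained
    # on the owner's past votes and stated views.
    return "yes" if "tax cut" in bill_text.lower() else "no"

def cast_vote(bill_text, posted_at, manual_vote=None, now=None):
    now = now or datetime.now()
    deadline = posted_at + REVIEW_WINDOW
    if manual_vote is not None:
        return manual_vote              # human input always wins
    if now >= deadline:
        return predict_vote(bill_text)  # assistant votes after the window
    return None                         # still open for manual review

posted = datetime(2019, 1, 1)
print(cast_vote("A bill about a tax cut", posted, now=datetime(2019, 1, 20)))  # "yes"
print(cast_vote("Some other bill", posted, manual_vote="no"))                  # "no"
```

The key design point is that the model only ever fills gaps the human left open, so the owner retains full control over anything they care to review.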
 
Both Elon Musk and Hawking have the same view. I agree with them.

Best case, humans become pets. The more likely case is that humans will be destroyed by advanced AI; there is no match. Once past the tipping point, AI can advance 100-fold every year, if not every week. Humans can advance 10% every 100 years. We are close to the tipping point.

The best we can do is to delay AI development by 100 years, hopefully during that time we can figure out a way to contain the risk.
 
I find this video (already posted on this forum, but not here) very interesting.


TL;DW: this is how I understood things:
1. A neural network is a mindless, dumb-as-a-rock entity and can learn only if you show it what you want it to see.
2. This "show it what you want it to see" is done by labeling and other data processing on a bigass database of a gazillion pictures (the majority of them presumably gathered by Teslas).
3. The labeled data are then fed to the neural network so it can identify unknown objects similar to known ones, thanks to the labeling.
4. This extremely inefficient teaching method (compared to how a human learns) is the current state of the art. I am sure it will get better in the future, but right now it is what it is.
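Steps 2-3 above can be shown in miniature. This is a deliberately tiny toy - a perceptron on fake two-number "images" - just to make the point that the network learns nothing except what the hand-attached labels tell it:

```python
# Toy version of the label-then-train loop: humans attach labels to raw
# data, and the network only "knows" what those labels imply. Features
# here are fake 2-pixel "images"; a real system uses millions of photos.

def train_perceptron(labeled_data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in labeled_data:  # label: 1 = "car", 0 = "not car"
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The manually labeled dataset - the expensive part the video describes.
data = [([1.0, 0.9], 1), ([0.9, 1.0], 1), ([0.1, 0.2], 0), ([0.0, 0.1], 0)]
w, b = train_perceptron(data)
print(classify(w, b, [0.95, 0.85]))  # similar to the "car" examples -> 1
```

Everything the classifier can do traces back to those four labeled rows; with no labels there is nothing to learn from, which is exactly the bottleneck discussed next.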


Some general comments:
1. How could teaching methods get better in the future? Well, the most obvious thing to do would be automating the manual labeling. Somehow.
This requires solving a seemingly unsolvable problem, since if you can label things automatically, you do not need the labeling in the first place!
Though I can see a place for assistant neural networks that try to label things, with human supervisors confirming (or not) the correctness of the labels. Maybe it could be more efficient overall than fully manual labeling.
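That assistant-labeler idea can be sketched as a confidence-based triage loop. Everything here is hypothetical - `model_label` and `model_confidence` are stand-ins for a real network's prediction and softmax score:

```python
# Sketch of human-in-the-loop labeling: a model proposes labels, and only
# low-confidence proposals are routed to a human reviewer, so the human
# workload shrinks as the model improves.

def triage_labels(items, model_label, model_confidence, threshold=0.9):
    auto, needs_review = [], []
    for item in items:
        label, conf = model_label(item), model_confidence(item)
        if conf >= threshold:
            auto.append((item, label))          # accept machine label as-is
        else:
            needs_review.append((item, label))  # human confirms or fixes
    return auto, needs_review

# Toy model: confident on short names, unsure on long ones.
auto, review = triage_labels(
    ["cat", "dog", "ambiguous_blob"],
    model_label=lambda x: "animal",
    model_confidence=lambda x: 0.99 if len(x) <= 3 else 0.5,
)
print(len(auto), len(review))  # 2 1
```

The humans never disappear from the loop; they just stop re-doing the easy cases, which is the efficiency gain over fully manual labeling.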

2. Well, I don't think true AI will come any time soon. I hope no one expects intelligence from our visual cortex either. Current efforts (like visual processing and recognition, deep dreams, playing Go or Dota), while important stepping stones, actually have nothing to do directly with Artificial Intelligence.

3. But some mindless thing that can easily pass the Turing Test should come in our lifetime. There are a lot of use cases for such things: personal assistants, receptionists in hotels, etc. The most important one is at the end of this post.
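The "mindless but convincing" point is easy to illustrate: pattern-matching bots in the ELIZA tradition are superficially fluent with zero understanding. The patterns and replies below are made up for the sketch:

```python
# A crude illustration of a "mindless" chatbot: canned pattern matching
# with no comprehension at all, yet superficially fluent - e.g. as a
# hotel receptionist.

import re

RULES = [
    (r"\bhello\b|\bhi\b", "Hello! How can I help you today?"),
    (r"\broom\b|\breservation\b", "Certainly, let me check our availability."),
    (r"\bthank", "You're very welcome!"),
]

def reply(message):
    # Return the response for the first matching pattern, else a generic
    # prompt that keeps the conversation going without understanding it.
    for pattern, response in RULES:
        if re.search(pattern, message.lower()):
            return response
    return "Could you tell me a bit more about that?"

print(reply("Hi, I'd like to book a room"))
```

Scale the rule list up (or swap it for a statistical model) and you get something that can carry a surface-level conversation indefinitely - intelligence not required.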

4. In my opinion, we are a hundred years or so (plus or minus one or two decades) away from actual AI being created. The exact number is not important; my point is that everyone currently alive will almost certainly be dead by then.
Another point is that AI won't come into being suddenly or unexpectedly. Sentences like "Once past the tipping point, AI can advance 100-fold every year, if not every week" represent this popular trope and are pure fiction, though fervently believed by some people.

5. Situation with AI will be muddied by multiple problems:

a) before actual AI you will get almost-AI or not-quite-AI. There is no clear cut-off here, only very gradual progress. And even earlier there will be things that aren't AI but pretend very well that they are (bullet point 3). This will fool a lot of people (especially members of modern technocults, like transhumanists). The philosophical and practical problems with proving actual intelligence, consciousness, etc. certainly won't help things. P-zombies, anyone?

b) those genuine AIs will be inhuman (the interface to communicate with humans notwithstanding). This will be deliberate, since if you want a human-like AI you can produce one quite easily in 18 years, though the resulting product's variance can be quite huge (simulating a human mind accurately won't be possible even by then). And an inhuman AI will help us notice things that we are incapable of noticing due to our biases and other cognitive problems.

c) Do human rights even apply to such an inhuman AI?


Last words:

I have seen the future of near-term "AI", and it is bots spewing political propaganda.
 

Yep, that is the one that has me scared: the ability to craft political propaganda that pushes the desired emotional response in people.
Couple that with deepfake video and you have a way of delivering any message you want, and of controlling and distorting any narrative.

We are probably pretty close to this right now.
 
I always find this interesting when AI is discussed.

There's an opinion that as soon as we achieve general AI it will accelerate into a singularity and kill humanity within seconds (some exaggeration). I guess that's partly due to this being a sci-fi trope, and more interesting than a 'smart software helps humans' story.

In reality it'll be restricted in its capabilities by the hardware it's running on. Even if it can self-optimise, there's only so much it can do - only so many clock cycles - and expanding outside of that would mean massive latency, quite variable hardware, etc. And at first it may not even be as smart as an average human.

Also remember that while humans are 'dumb' individually, creating an AI will need the combined effort of many hundreds of thousands of people over many years, so for an AI to replace all of those it will need to be quite a lot brighter, or to need many friends...

I'm genuinely looking forward to smart general AI helping us make the best choices going forward, aka the Culture.
(Replying in a more appropriate thread where much of this has been covered.)

Obviously Elon has other concerns, and he's working more closely with AI than most of us.


Sort of?

I would assume an AI could make calls to other systems, living half in the base system and half inside services running on Azure or AWS.

The bigger thing which I think people overestimate is ambition. What is going to motivate a computer to kill humans? We don't really compete for resources. It will be difficult for even the most effective computer to create its own manufacturing capability.

It’s far more likely that a person will create a computer to do something and it will have terrible side effects which the computer doesn’t care about.
It's not that an AI will be "motivated" to kill humans; it may simply develop goals that don't include human life. We can't assume an AI will inherently value humanity or any life form.