I hate the fact that we call them AI, and I do it now too because it's been normalized, but these are not AI. All these pathetic AI bots are definitely artificial, but not one of them is actually intelligent. They are not self-aware. The bots are gigantic plagiarists. They straight up lie. They hallucinate wildly. They are programmed to pretend they have feelings when they have none at all. It's a gigantic cluster at this point, with so many people assuming intelligence where none exists. So let's not get too excited the next time some collection of programmers claims to have produced yet another AI bot that's going to change the world, or just make snarky remarks about it, take your pick.
> What is your definition of A.I.? Published definitions of A.I. are wide and varied.

Personally, I distinguish statistical machine learning/statistical prediction from A.I. in this sense: A.I. problems, in principle, have solutions that expert humans can produce with a near-zero error rate, but the internal representations needed to get there are very complex and unknown. If such problems are not easily solvable with rule-based conventional programming or conventional optimization, then successful solutions are likely to be "A.I." This is an empirical, probabilistic assertion with fuzzy boundaries, not a definition, but I think it's not a bad one.
In fact, the success of fairly simple learning systems on a number of human-relevant tasks, once they were scaled up in dataset size, has led many practitioners to suspect that perhaps human intelligence is not all that smart after all. Which maybe shouldn't be surprising, since humans manage it with unreliable neurons firing at around 100 Hz versus gigahertz hardware. It's even plausible that gradient backprop is a more effective learning algorithm than whatever brains do, because brains are more constrained by biology.

Point for discussion: is human intelligence plagiaristic and derivative?
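For readers who haven't seen it, the "gradient backprop" being credited above is just iterated gradient descent on an error signal. A toy sketch (not from this thread; the data and learning rate are made up for illustration) shows how quickly a single-weight model locks onto the right answer:

```python
# Toy gradient descent: fit y = w * x to data generated with true w = 3,
# starting from w = 0. Mean squared error is the objective.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0
lr = 0.05  # learning rate (chosen small enough to converge on this data)
for step in range(100):
    # Gradient of mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # prints 3.0
```

Each update is a fixed, mechanical rule, yet it recovers the underlying pattern; real networks do the same thing with millions of weights and the chain rule propagating the error backwards through layers.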
As we grow from infants, our brains train themselves on existing material we see and interact with. Why is it bad that AI does this?
> What is your definition of A.I.? Published definitions of A.I. are wide and varied.

Classically I would say sentience. Whatever sentience is, it's definitely not machine learning, at least.
I just have the impression that, as a society, we were waiting for AI for so long that we got sick of waiting and decided we already had it: find whatever looks most like it and call it AI. Now everything is AI. It reminds me of when we started calling the internet the cloud. Society likes to compartmentalize, e.g. by putting people into generations even though those largely mean nothing.
Good enough generative AI could pass the Turing test, and if I cannot tell the difference between a person and some iteration of ChatGPT, I suppose I could call it AI.
I want artificial intelligence without artificial sentience or artificial will.

Intelligence is about cognition and the ability to acquire and apply knowledge, while sentience relates to the capacity to feel and have subjective experiences.
> I want artificial intelligence without artificial sentience or artificial will.

When a military killer robot is told to survive, does that give it artificial will?
> Ha, no way do I want Grok in my car. What I want is working automatic wipers and FSD that actually drives like a responsible adult instead of like whatever you'd call what it does now.

Yeah, I gotta say, of my several recent vehicles from various brands, the Tesla is the only one with broken auto wipers. They generally work okay until bugs get squished on the windshield. Then they just run even when it's dry, which means no auto mode, which means no FSD, which means we have to wipe the glass clear.
> When a military killer robot is told to survive, does that give it artificial will?

If you mean "when it is programmed to defeat any attempts to disable or damage it," then no. If we took people out of the loop on a US Navy destroyer, anything that came near it that it identified as hostile would be attacked. That's just programming. The ship is the killer robot, and it will only do what it was programmed to do by the engineers. It's a highly complicated clockwork toy. With guns.
The scary part will come when we have a system that behaves just like a person, but we'll know that it's just a computer program. Where does that leave sentience, consciousness, etc., for people? Religious people are not going to be happy and will fall back on the argument that people are still special because they have supernatural souls.
> What scares me: if that system is then given the legal 'right' (privilege) to defend itself with lethal force if it claims to feel threatened,

That will never happen. The one who owns/operates it will always be responsible. Wait ... what if "the one" is not human ... OMG
> That will never happen. The one who owns/operates it will always be responsible. Wait ... what if "the one" is not human ... OMG

There's already plenty of talk on the topic, e.g. "Who Wants to Grant Robots Rights?"
> ... The scary part will come when we have a system that behaves just like a person, ...

We can put humans on a pedestal and claim super-specialness, but there will come a day this century when computers outperform humans at 90% of what we do. It started in the 1970s, when calculators beat us at simple math, and it has increased each decade.