> AI seems to only be as good as the person programming it.

It's only as good as the online database that it crawls.
> I love my Tesla and I purchased FSD, so I'm not sure what to think about this:
>
> Elon Musk and others urge AI pause, citing 'risks to society'
> Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society. (www.reuters.com)

The hypocrisy of conducting an unauthorized public beta of life-and-death software in FSD while calling for others with text and image generators to fear for public safety.
> Elon is right. AI could be very dangerous. Unfortunately, we are not even close to creating real AI, so he is creating fearmongering over nothing.

Why does this have to be debated as an Elon concern? The concern document was signed by thousands of the top minds in AI and at universities around the world as a real concern. This is not some Elon-driven propaganda. This is a massive group of people around the world, much smarter than a forum, showing legitimate concern about letting this AI genie out of the bottle without regulation or understanding of what it can do. Hate or assume what you wish, but this is not a simple switch you can turn off if you get it wrong.
> Why does this have to be debated as an Elon concern? The concern document was signed by thousands of the top minds in AI and at universities around the world as a real concern. This is not some Elon-driven propaganda. This is a massive group of people around the world, much smarter than a forum, showing legitimate concern about letting this AI genie out of the bottle without regulation or understanding of what it can do. Hate or assume what you wish, but this is not a simple switch you can turn off if you get it wrong.

I haven't read the document, but it is puzzling to me that the top minds in AI (if they actually are the "top minds") would be concerned about something that isn't truly intelligent. I have used ChatGPT and it isn't even close to a true AI. Neither is Tesla's attempt at self-driving.
> I haven't read the document, but it is puzzling to me that the top minds in AI (if they actually are the "top minds") would be concerned about something that isn't truly intelligent. I have used ChatGPT and it isn't even close to a true AI. Neither is Tesla's attempt at self-driving.

Consider reading it before making up your mind. This petition of concern wasn't signed by average-Joe consumers; it was led by top minds and professors around the world (including Musk) who understand the implications as this grows exponentially until it can't be stopped.
> Doesn't the dumb AI need a recording of your voice for the scam to work? How does it know what a person's voice sounds like?

Yup, I know you're right about that because I saw an NBC News segment last night. The AI used many recordings of a news reporter to piece together a request to another news reporter for access to a credit card. It was very limited and was not really a two-way conversation. The other news reporter agreed to give the AI bot access to their credit card.
> The hypocrisy of conducting an unauthorized public beta of life-and-death software in FSD while calling for others with text and image generators to fear for public safety.

BTW, I am impressed by the arguments for "AGI = scary" and the imperative to make these large models safe, given that they already have the potential for reactor-style runaway as they mistakenly pursue self-survival goals.
> Doesn't the dumb AI need a recording of your voice for the scam to work? How does it know what a person's voice sounds like?

The idea is that AI can use pattern recognition and user data to find your voice, find your phone number, infiltrate cellular network traffic, find your active, real-time conversations, intercept what you say to the other party, use your voice sample to generate altered dialog, and the other party hears whatever the AI wants you to have said.
> AI algorithms don't seem to be too good at predicting patterns. They couldn't predict housing prices very well: Opendoor, Zillow, and Redfin bet with AI algorithms and lost billions. AI couldn't predict the banks that have crashed over the past few weeks. What is the AI prediction of what the Fed is going to do about inflation?

Those real estate speculation algos were not AI, just pattern forecasting, the most primitive of mathematics. And the available products are so heavily manipulated that they're purpose-built to kill speculators and accumulate gains for the institutions. That's not limited to real estate; it's systemic.
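To make the "pattern forecasting, not AI" distinction concrete, here is a minimal sketch (not any company's actual model, and all numbers are invented): a least-squares trend line simply extrapolates the past, so it keeps forecasting the old trend even after the market turns.

```python
# Toy "pattern forecasting": fit a straight line to past prices and
# extrapolate it. This is the kind of primitive extrapolation the post
# contrasts with real AI. All price data below is made up.

def linear_trend_forecast(prices, steps_ahead=1):
    """Fit an ordinary-least-squares line to past prices and extrapolate."""
    n = len(prices)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(prices) / n
    # OLS slope and intercept.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

# A steady uptrend: the model extrapolates it upward.
history = [300, 310, 320, 330, 340]
print(round(linear_trend_forecast(history)))  # prints 350

# The same model is blind to a reversal it has never seen: after a drop
# back to 300, it still forecasts 318, well above the latest actual price.
turned = [300, 310, 320, 330, 300]
print(round(linear_trend_forecast(turned)))   # prints 318
```

The point of the toy example: trend extrapolation lags any turning point by construction, which is one reason naive price-forecasting models lose money when a market reverses.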
> Let's assume ChatGPT (or insert some other democratized AI here) today somehow got to a level of sophistication such that it's not just the fringe AI experts calling for a stop. Given that many people, businesses, governments, and foreign entities are adopting the engine for their own AI purposes, who exactly do we call to pull the plug? How do we know when we've gotten them all? How do we enforce it?

Right, there's no way to regulate, let alone enforce regulation of, R&D. Short of government "weapons inspections" and sanctions, like the USA over Iran, any attempt to legalize AI development simply puts AI in the hands of the same two entities: government and organized crime. (As with all government doublespeak, the intent is the opposite of the description; in this case, to "legalize" means to make illegal, and to monetize means to put a dollar price on the value.) Just like pharmaceuticals and street drugs, military weapons, and the monopoly on violence.