Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Neural Networks

And now Kripke's skeptical solution (also from the same Wikipedia article):

"Kripke, following David Hume, distinguishes between two types of solution to skeptical paradoxes. Straight solutions dissolve paradoxes by rejecting one (or more) of the premises that lead to them. Skeptical solutions accept the truth of the paradox, but argue that it does not undermine our ordinary beliefs and practices in the way it seems to. Because Kripke thinks that Wittgenstein endorses the skeptical paradox, he is committed to the view that Wittgenstein offers a skeptical, and not a straight, solution.[2]

The rule-following paradox threatens our ordinary beliefs and practices concerning meaning because it implies that there is no such thing as meaning something by an expression or sentence. John McDowell explains this as follows. We are inclined to think of meaning in contractual terms: that is, that meanings commit or oblige us to use words in a certain way. When you grasp the meaning of the word "dog", for example, you know that you ought to use that word to refer to dogs, and not cats. Now, if there cannot be rules governing the uses of words, as the rule-following paradox apparently shows, this intuitive notion of meaning is utterly undermined.

Kripke holds that other commentators on Philosophical Investigations have believed that the private language argument is presented in sections occurring after §243.[3] Kripke reacts against this view, noting that the conclusion to the argument is explicitly stated by §202, which reads “Hence it is not possible to obey a rule ‘privately’: otherwise thinking one was obeying a rule would be the same as obeying it.” Further, in this introductory section, Kripke identifies Wittgenstein’s interests in the philosophy of mind as being related to his interests in the foundations of mathematics, in that both subjects require considerations concerning rules and rule-following.[4]

Kripke's skeptical solution is this: A language-user's following a rule correctly is not justified by any fact that obtains about the relationship between his candidate application of a rule in a particular case, and the putative rule itself (as for Hume the causal link between two events a and b is not determined by any particular fact obtaining between them taken in isolation), but rather the assertion that the rule that is being followed is justified by the fact that the behaviors surrounding the candidate instance of rule-following (by the candidate rule-follower) meet the expectations of other language users. That the solution is not based on a fact about a particular instance of putative rule-following—as it would be if it were based on some mental state of meaning, interpretation, or intention—shows that this solution is skeptical in the sense Kripke specifies."

Wittgenstein on Rules and Private Language - Wikipedia
 
Seems it did remarkably well given that you were going 22 mph over the posted limit. Buttershrimp can't drive 55!
@buttershrimp is risking his license, his car, his liberty and yes even his LIFE, in order to enlighten the rest of the forum on the progress of AP, and this is all the gratitude he gets ???? :p
 
Here you go — neural nets that can learn on the fly are still very much a research project...

DARPA Seeking AI That Learns All the Time
I didn't say any neural nets are learning "on the fly." The discussion, I thought, was whether or not neural nets can learn to be complex decision makers, not just identifiers/classifiers - and the answer is clearly yes - given what we've witnessed AlphaGo Zero accomplish recently.
 
Well, I don't know how I feel about Wittgenstein (or the types of people who tend to throw him in my face ;) ), but I do share the notion that there is no infallible absolute truth. That is why I tend to be an annoyance to both people pro and con alike, because I see the world in infinite grey, instead of black or white. I personally tend to automatically reject anything that reeks of blind love or hate. Even when I'm annoyed by something, the hedging you see in my posting is not some deliberate tactic to sound more reasonable (as has been suggested), but a genuine attempt at reflection of my inner thought.

Definitely a car driven by a black box NN can be made. I'm just not quite convinced it will be the way forward on autonomous. It could be, maybe it will be.
 
You can’t use today’s NN technology to build a FSD capable car if it’s all black box. There are too many unambiguous rules to learn. As in: do not turn right on a red light if there is a "no turn on red" sign, unless an emergency vehicle is behind you, in which case you can turn right and stop to allow the vehicle to pass. Good luck getting a black box NN to learn THAT rule.
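To see why such a rule is trivial as explicit code but hard as a learned behavior, here's a toy sketch of that exact rule. Everything here (the `World` class, the field names) is invented for illustration; no real AV stack represents the world this simply.

```python
from dataclasses import dataclass

@dataclass
class World:
    """Hypothetical perception output relevant to the right-on-red rule."""
    light_is_red: bool
    no_turn_on_red_sign: bool
    emergency_vehicle_behind: bool

def may_turn_right(w: World) -> bool:
    """Encodes: no right turn on red when a 'no turn on red' sign is posted,
    unless an emergency vehicle behind you needs to get past."""
    if not w.light_is_red:
        return True  # light isn't red; the rule doesn't apply
    if not w.no_turn_on_red_sign:
        return True  # plain red light: right on red allowed after stopping
    # Sign posted: turn only to let an emergency vehicle pass.
    return w.emergency_vehicle_behind

print(may_turn_right(World(True, True, False)))  # False
print(may_turn_right(World(True, True, True)))   # True
```

Three lines of logic, written in a minute — versus however many labeled training examples a black-box net would need to infer the same conditional, with no guarantee it generalizes.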

AI needs a quantum leap in capability that allows us to communicate with a NN. We need to be able to tell a NN unambiguous rules that the NN can then incorporate into its nets. To do that requires many different NNs, each with completely different characteristics, linked together. In other words we need something that resembles a human brain with language processing areas, the ability to learn on the fly, the ability to remember rules, the ability to plan. That is a long way from recognizing lane markers.
 
That’s why, in the meantime, we have hybrid NN and traditional programming systems. If you look at the early work of the Stanford AI driverless car team (which got bought by Google), the core piece was a traditional programming system that used statistical methods to determine what to do next. It would take inputs from the vision NN, the Lidar, and other sources and fuse them together in a Bayesian decision engine with temporal smoothing, along with hundreds (now undoubtedly thousands) of hard-coded rules.
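As a flavor of what "Bayesian fusion with temporal smoothing" means, here's a toy sketch: two sensors each report a probability that an obstacle is present, the probabilities get fused with Bayes' rule (assuming sensor independence), and the fused belief is smoothed over time so a single noisy frame can't flip the decision. The numbers and sensor models are made up; the real Stanford/Google system was vastly more elaborate.

```python
def bayes_fuse(prior: float, camera_p: float, lidar_p: float) -> float:
    """Fuse two independent sensor likelihoods for 'obstacle present'."""
    num = prior * camera_p * lidar_p
    den = num + (1 - prior) * (1 - camera_p) * (1 - lidar_p)
    return num / den

def smooth(prev: float, current: float, alpha: float = 0.3) -> float:
    """Exponential temporal smoothing: new belief moves only partway
    toward the latest fused estimate."""
    return alpha * current + (1 - alpha) * prev

belief = 0.5  # no opinion before the first frame
for cam_p, lidar_p in [(0.7, 0.6), (0.8, 0.9), (0.2, 0.3)]:
    fused = bayes_fuse(0.5, cam_p, lidar_p)
    belief = smooth(belief, fused)
    print(round(belief, 3))
```

The point is that the fusion and smoothing are transparent, tunable, and debuggable — which is exactly what you lose with an end-to-end black box.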

...which curiously George Hotz says is the wrong way to make an autonomous car.

He must be relying on NNs more. As must Tesla, to gain on the competition that started so much earlier?
 

Yeah. What has George Hotz actually accomplished? Ever? His car driving system is no better than any other system out there. I’m not seeing any claims of better performance or significant advancement over what he had hacked together 2 years ago now. Hotz is so self-confident, I’m afraid he doesn’t let himself realize that he doesn’t know everything. I’m not saying he’s an idiot - he’s quite smart and hard working, with an agile mind. But closing up shop when the Feds simply sent him an information request letter (with common-sense requirements for such a safety-critical system) wasn’t impressive. Contrast that with Elon, who willingly goes into heavily, heavily regulated industries (rocketry and car manufacturing) and simply complies with the regulators’ sometimes irrational demands. Elon gets things done.

Tesla may get there with FSD (I think Google’s system is still head and shoulders better than Tesla’s when it comes to urban driving), but they have had several false starts in the past 2 years on their own system. They’ve had several different heads of engineering leading the effort. And what has leaked out shows a lack of clear engineering direction to get to FSD. But Elon is not a quitter and he will eventually figure it out, one way or another.
 
@Cosmacelf I see what you're saying, no argument from me there - Hotz has bold claims and that's about it. Sometimes I think what you're saying about Hotz applies to Musk and AP2 FSD too. Is this Elon's bridge too far?

I mean, Elon may well get things done, but Tesla also increasingly has a track record of delivering on spec promises only in future generations - not in products already sold.
 

I don’t disagree. My personal impression (based on my compsci background, years of following Elon and one too brief conversation with him) is that Elon consistently underestimates software projects and doesn’t understand what it takes to build large software systems.

Elon appears to think that you can hack together a software system without much design work ahead of time. Which you can do but good luck building upon it and maintaining it. You will get what we’ve seen in the Model S software. Lots of bug regressions and very slow feature additions. And wasting engineering time on the look and feel was just a bad decision given everything else.

Anyway, on FSD I believe again that Elon doesn't realize the scope of the project and has been taken in by demoware, just like any traditional exec might have been.

I hope I’m wrong.
 
ButterJesus?? Oh no, we wouldn't want to see you quit posting on TMC because you became a martyr of AP, and even less want to see a new sect based on your martyrdom - "@buttershrimp gave his life to help prevent 3,300 fatal car accidents per day in the world".

Stay safe @buttershrimp :)
For he so lovethed the autopilot, that he crashed off a cliff while listening to back that ass up