qdeathstar
Completely Serious
Conflating perfection with progress.

"Sometimes I think you're trolling us. haha. They ran a red light. You're supposed to stop at the crosswalk before turning."
We all want FSD to progress, but more than that I hope no naive beta tester ends up harming others or themselves. I recognise complacency and higher risk-taking in more of the 9.2 videos, but only Tesla knows (maybe) the real operating limits of the system. I haven't heard anyone mention a beta-tester manual that describes those limits.

“yeah but if I come up with the worst case scenario FSD isn’t making progress, see I’m RIGHT!!!!!”
Wut?

“yeah but if I come up with the worst case scenario FSD isn’t making progress, see I’m RIGHT!!!!!”
I've never said FSD isn't making progress. It is making far less progress than I expected, though.

"Conflating perfection with progress"
I didn't like the way the driver let FSD drive through the red without looking to the left at the crosswalk. Or disengage and stop, which is what he should have done. He's not even looking left.

"I've never said FSD isn't making progress. It is making far less progress than I expected though."
The more I watch these videos, the more ridiculously difficult the problem seems. Obviously it's still worth pursuing, because the potential benefit is astronomical.
The car should have stopped at the crosswalk; if there had been a fast-moving pedestrian (or, more likely, a scooter) behind the van, they could easily have been hit. It's a horrible driving policy to stop well behind a crosswalk when it's obscured by another vehicle (the van was stopped way too far forward, but that's hardly an edge case...).
We already know that humans with humans is extremely dangerous, thus the current carnage on the highway. The aim is to improve that with computers. No system is going to be perfect, but is it better than all humans?

"the 2 easy cases are easy: all computers or all humans. its the mix that is dangerous and we'll be with that for decades, I think."
two things should happen, in parallel: continue developing what is being done now, but also start to admit the need for, and consider making, smart or active roads.

I don't think it's a sign of giving up to ask for hints from the world. if active roads can add more nines, why not? it's 'just a networking problem' (semi lol) but it's solvable with today's understanding and does not rely on AI and magic.

we don't have real AI or magic. we do have other things that can add more assist tech and make roads safer, as well as making the driving experience more fun and less stressful.

I like it when companies set realistic goals for products and keep the stretch goals for the labs. when the lab thingies are ready, then they can transition to product, but not until they are ready.

teslas are very, very good level 2 cars, but they suck at higher levels than that for many reasons. a man's got to know his limitations (who said that?)
I don't know about a hairpin bend, but there are lots of dirt-road videos on this YouTube channel: https://www.youtube.com/channel/UCaBOIBVD_f6eyRRRBRrCz8w

"Anyone got video of a hairpin bend on a dirt road?"
What about oncoming cars on a dirt road?
I say again, human and human is not too bad. it's what we mostly have now.

"We already know that humans with humans is extremely dangerous thus the current carnage on the highway. The aim is to improve that with computers. No system is going to be perfect but is it better than all humans?"
You keep saying this, but with no real evidence or clear, coherent arguments to back it up. What, precisely, does the car need to "understand", and in what way does it need to understand it, to be able to drive?

"self driving on real-world roads is such an asymptotic problem. we are very close, but not enough 'nines' to make it safe enough to allow hybrids, let alone all level5's."
you can't argue away corner cases. corner cases can mean lives lost. that's the rub. you can't just 'punt and reboot' if you come to a case you don't understand.

that's the real limit: understanding. we have no clue how to make computers understand. we can have them run numerical algorithms and do stats and probability, but they have not achieved even the slightest level of what we would consider cognition.

(I've been reading lots of Dan Dennett lately; smart guy, interesting thoughts on thought itself, but it's clear that we still have no idea how to software-ize human thought in silicon.)

driving on real roads (not simple highways) means you need to understand.
With the recent NHTSA investigation, I'm afraid it will be a long time before The Button.

"But in order to release The Button, they will have to solve this problem to a large degree."
Everyone who works on the problem for a long period of time seems to disagree with you. There's a reason that Elon says solving real-world AI is necessary for FSD.

"You keep saying this, but with no real evidence or clear coherent arguments to back this up. What, precisely does the car need to 'understand' and in what way does it need to understand it to be able to drive?"
Almost all aspects of driving are essentially a mechanical process. You get some visual input, apply a set of rules to determine a response, and then execute that response. If driving wasn't like this, then even humans would have trouble driving. Where would you be if you had to consciously reason out every single action? For example:
"hmm, the traffic signal is red, but I'm going to turn right .. let's see, as long as there are no cars coming I can do that .. are there cars coming? hmm .. no, ok, I can turn right. Let's see, I put on the turn signal by clicking this stalk, and then .. ah yes, I need to rotate the steering wheel just enough .. ok, the car is turning, lets be careful not to turn the wheel too much, and I'd better move my foot a little to adjust the speed."
Of course that's not the way we drive (or anything else for that matter). What happens is we learn to drive which creates the mechanical ruleset in our minds, which then takes over the day-to-day driving, from the visual recognition of signage and road conditions to the individual muscle actions to trigger responses. Sure, the learning process involves cognition and reasoning, that's why learning takes time, but that process is establishing the ruleset in our minds. And guess what? That's pretty much what Tesla are doing when the engineers there reason about and develop the AI/NN rules and processes around those rules. They are providing the cognition for the car that builds the ruleset.
Now, of course, not all driving is like that. When something unexpected or unusual happens, the ruleset in our brain fails. This triggers an alert response: "Help, what should I do?" .. and the conscious brain takes over (or at least, should take over) and then reasons about the situation. And guess what, this is what the car does! If it gets out of its depth, it shouts to the human driver to take over.
The question, then, is how extensive does this ruleset have to be to handle the day-to-day mundane driving tasks? A ruleset that has to ask the driver for help on every single turn is clearly not enough, but the point is that no matter how large the ruleset has to be, it does not require "understanding" in the sense you keep using it. The car already has that at its disposal; it's called the human driver.
The challenge, then, is not to build a car that can reason its way out of any situation, but to provide sufficient rules that the car knows when to shout for help, without doing it so often that it's more bother for the human than driving manually. You don't want the car to drive off a cliff, or blindly knock a pedestrian over on a crosswalk. But, vital though these are, they are still essentially mechanical. They do not require conscious cognition, only recognition ("X is a pedestrian located at a certain point in space and moving in a certain direction").
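To make the "ruleset plus shout-for-help" idea concrete, here is a toy sketch in Python. This is entirely hypothetical, not Tesla's code or architecture; every rule, field name, and threshold is invented for illustration. The point is just the shape of the idea: routine cases are handled by mechanical rules, and anything the rules can't confidently classify escalates to the human driver.

```python
# Hypothetical illustration of a finite driving ruleset with human
# escalation. None of these rules or numbers come from any real system.

from dataclasses import dataclass

@dataclass
class Observation:
    signal: str               # "red", "green", or "none"
    crosswalk_occluded: bool  # view of the crosswalk blocked by a vehicle
    pedestrian_ahead: bool
    confidence: float         # perception confidence, 0.0 to 1.0

def decide(obs: Observation) -> str:
    # Rule 0: if perception is unsure, don't guess; hand over to the human.
    if obs.confidence < 0.9:
        return "REQUEST_HUMAN_TAKEOVER"
    # Rule 1: never roll past a crosswalk the car can't see into.
    if obs.crosswalk_occluded:
        return "STOP_AT_CROSSWALK"
    # Rule 2: yield to pedestrians (recognition, not cognition).
    if obs.pedestrian_ahead:
        return "STOP"
    # Rule 3: red means stop before any turn is even considered.
    if obs.signal == "red":
        return "STOP_AT_LINE"
    # No special case applies: carry on with mundane mechanical driving.
    return "PROCEED"

# The van-at-the-crosswalk scenario from earlier in the thread:
print(decide(Observation("green", True, False, 0.95)))  # STOP_AT_CROSSWALK
```

Note that nothing in `decide` "understands" anything; each branch is pure pattern-matching, and the one case the rules can't handle is punted to the driver rather than reasoned about.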
You should note that I am not claiming the FSD task isn't a daunting one; it clearly is, and Tesla still have a ways to go. Have they made good progress? Absolutely. Look at how solidly the car builds out its world view and places cars and pedestrians into that view. Even a few years ago that would have blown everyone's minds. Do they have a long way to go? Yep. Will they get there? Well, even humans have a finite ruleset; we allow people to drive when that ruleset has reached a certain level (it's called a driving test). When will the car reach that point? No idea, and I don't think Tesla know either, but the ruleset is finite, otherwise no human would ever pass their driving test.