
FSD Beta Videos (and questions for FSD Beta drivers)

“yeah but if I come up with the worst case scenario FSD isn’t making progress, see I’m RIGHT!!!!!”
We all want FSD to progress, but more than that I hope no naive beta tester ends up harming others or themselves. I recognise complacency and higher risk-taking in more of the 9.2 videos, but only Tesla knows (maybe) the real operating limits of the system. I haven't heard anyone mention a beta-tester manual that describes those limits.
 
“yeah but if I come up with the worst case scenario FSD isn’t making progress, see I’m RIGHT!!!!!”
Wut?

This is something humans do every day: look around the turning vehicle to check for oncoming traffic. Note that you'll often have to move your head to the left to get the visibility, given the number of large pickups on the road today. I just had to do it today. Here:

[Attached photo]
 
Conflating perfection with progress
I've never said FSD isn't making progress. It is making far less progress than I expected, though.
The more I watch these videos, the more ridiculously difficult the problem seems. Obviously it's still worth pursuing, because the potential benefit is astronomical.
The car should have stopped at the crosswalk; if there had been a fast-moving pedestrian (or, more likely, a scooter) behind the van, they could easily have been hit. It's a horrible driving policy to stop way behind a crosswalk when it's obscured by another vehicle (the van was stopped way too far forward, but that's hardly an edge case...).
 
I've never said FSD isn't making progress. It is making far less progress than I expected, though.
The more I watch these videos, the more ridiculously difficult the problem seems. Obviously it's still worth pursuing, because the potential benefit is astronomical.
The car should have stopped at the crosswalk; if there had been a fast-moving pedestrian (or, more likely, a scooter) behind the van, they could easily have been hit. It's a horrible driving policy to stop way behind a crosswalk when it's obscured by another vehicle (the van was stopped way too far forward, but that's hardly an edge case...).
I didn't like the way the driver let FSD drive through the red without looking to the left at the crosswalk, or disengaging and stopping, which is what he should have done. He's not even looking left.

Hypothetical situation:
One of these guys stops in front of the van and goes back instead of continuing. Buddy and FSD don't even look for or consider that as they take the corner at 8 mph without stopping at the crosswalk. That's why you stop: to look first, then turn when you're sure it's clear (a rough sketch of that policy follows below).

But hey, zero disengagement drive!

[Attached screenshots: Back1.png, Back2.png]
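
To make the "stop, look, then turn" policy above concrete, here is a minimal sketch in Python. Every name in it (WorldModel, crosswalk_visible, and so on) is invented for illustration and is not anything from Tesla's actual stack.

```python
# Minimal sketch of a "stop, look, then turn" crosswalk policy.
# Every name here is hypothetical, invented for this illustration.

from dataclasses import dataclass

@dataclass
class WorldModel:
    crosswalk_visible: bool  # can the car actually see the full crosswalk?
    crosswalk_clear: bool    # no pedestrians/scooters detected on it
    oncoming_clear: bool     # no conflicting traffic

def turn_decision(world: WorldModel) -> str:
    # If the crosswalk is occluded (e.g. by a van stopped too far
    # forward), stop and creep for visibility instead of committing blind.
    if not world.crosswalk_visible:
        return "stop_and_creep"
    if not (world.crosswalk_clear and world.oncoming_clear):
        return "wait"
    return "proceed_with_turn"

# The situation in the video: the van blocks the view of the crosswalk.
print(turn_decision(WorldModel(crosswalk_visible=False,
                               crosswalk_clear=True,
                               oncoming_clear=True)))  # -> stop_and_creep
```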
 
self-driving on real-world roads is such an asymptotic problem. we are very close, but there aren't enough 'nines' yet to make it safe enough to allow hybrids, let alone all Level 5s.
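
To put rough numbers on the 'nines' point, here is a quick back-of-the-envelope in Python. The fleet mileage and per-mile failure rates are made up purely for illustration.

```python
# Back-of-the-envelope: why each extra "nine" matters.
# The mileage and failure rates below are invented for illustration.

fleet_miles = 1_000_000_000  # hypothetical: one billion fleet miles

for nines in (3, 4, 5, 6):
    p_ok = 1 - 10 ** -nines  # e.g. 0.999 per mile for three nines
    failures = fleet_miles * (1 - p_ok)
    print(f"{nines} nines: ~{failures:,.0f} safety-relevant failures "
          f"over {fleet_miles:,} miles")
```

Three nines still means about a million failures over a billion miles; even six nines leaves about a thousand.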

you can't argue away corner cases. corner cases can mean lives lost. that's the rub. you can't just 'punt and reboot' if you come to a case you don't understand.

that's the real limit: understanding. we have no clue how to make computers understand. we can have them run numerical algorithms and do stats and probability, but they have not achieved even the slightest level of what we would consider cognition.

(I've been reading lots of Dan Dennett lately; smart guy, interesting thoughts on thought itself, but it's clear that we still have no idea how to software-ize human thought in silicon)

driving on real roads (not simple highways) means you need to understand.

I'm just not convinced that we can get there any time soon.

I wish we would just admit it and 'instrument' roads so that we can use that as a guide for the computers. it's an active method instead of a passive one.

use our infra money to do that kind of thing. again, with V2X this could be a 'thing'.

sooner or later, I think we'll have to admit this, if we want anything more than highway assists (which work great, simply because highways are more consistent and the ruleset -can- be programmed in).
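
To make the "active roads" idea concrete, here is a sketch of the kind of hint an instrumented intersection might broadcast. The message format is invented for this illustration; real V2X deployments use standardized message sets (e.g. SAE J2735 SPaT/MAP), not this.

```python
# Sketch of an "active road" hint from a smart intersection.
# The format is invented; real V2X uses standardized message sets.

import json
from dataclasses import dataclass, asdict

@dataclass
class IntersectionHint:
    intersection_id: str
    signal_phase: str         # "red" | "yellow" | "green"
    seconds_to_change: float
    crosswalk_occupied: bool  # infrastructure cameras see what the car can't

def broadcast(hint: IntersectionHint) -> bytes:
    # in reality this would go out over DSRC or C-V2X radio
    return json.dumps(asdict(hint)).encode()

def car_policy(hint: IntersectionHint) -> str:
    # the car treats the hint as one more sensor, not as ground truth
    if hint.crosswalk_occupied:
        return "yield"
    if hint.signal_phase == "red":
        return "stop"
    return "proceed_with_caution"

hint = IntersectionHint("main_and_5th", "red", 12.0, crosswalk_occupied=True)
print(car_policy(hint))  # -> yield
```

The point of the active approach: the intersection can resolve occlusions (like the van at the crosswalk earlier in the thread) that pure onboard vision cannot.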
 
@linux-works I agree with that thinking. I like to compare a bot car to a bot player in Fortnite: against other bots it does fine, but against real human players it faces a totally different level of unpredictable behaviour, complex maneuvers and changing tactics. (I always die.)
It might be that one bot car among many humans is not going to work at all, and that only bots with no human irrationality mixed in will be fine.
 
the two pure cases are easy: all computers or all humans. it's the mix that is dangerous, and we'll be living with that for decades, I think.

two things should happen in parallel: continue developing what is being done now, but also start to seriously consider making smart or active roads.

I don't think it's a sign of giving up to ask for hints from the world. if active roads can add more nines, why not? it's 'just a networking problem' (semi lol), but it's solvable with today's understanding and doesn't rely on AI and magic.

we don't have real AI or magic. we do have other things that can add more assist tech and make roads safer, as well as making the driving experience more fun and less stressful.

I like it when companies set realistic goals for products and keep the stretch goals for the labs. when the lab thingies are ready, they can transition to product, but not until they are ready.

teslas are very, very good Level 2 cars, but they suck at anything higher than that, for many reasons. a man's got to know his limitations (who said that?)
 
the two pure cases are easy: all computers or all humans. it's the mix that is dangerous, and we'll be living with that for decades, I think.

two things should happen in parallel: continue developing what is being done now, but also start to seriously consider making smart or active roads.

I don't think it's a sign of giving up to ask for hints from the world. if active roads can add more nines, why not? it's 'just a networking problem' (semi lol), but it's solvable with today's understanding and doesn't rely on AI and magic.

we don't have real AI or magic. we do have other things that can add more assist tech and make roads safer, as well as making the driving experience more fun and less stressful.

I like it when companies set realistic goals for products and keep the stretch goals for the labs. when the lab thingies are ready, they can transition to product, but not until they are ready.

teslas are very, very good Level 2 cars, but they suck at anything higher than that, for many reasons. a man's got to know his limitations (who said that?)
We already know that humans driving among humans is extremely dangerous; hence the current carnage on the highway. The aim is to improve on that with computers. No system is going to be perfect, but is it better than all humans?
 
The car should have stopped at the crosswalk; if there had been a fast-moving pedestrian (or, more likely, a scooter) behind the van, they could easily have been hit. It's a horrible driving policy to stop way behind a crosswalk when it's obscured by another vehicle (the van was stopped way too far forward, but that's hardly an edge case...).

From the vantage point of the YouTube camera, you don't see what the driver or the FSD cameras see.
 
Too close to the right curb here, and still lots of disengagements in urban driving:

[Embedded video]
Urban driving is so challenging because every large city center looks different. Will Tesla ever be able to solve all of them in one large NN tree? No one knows. But in order to release The Button, they will have to solve this problem to a large degree.
 
We already know that humans driving among humans is extremely dangerous; hence the current carnage on the highway. The aim is to improve on that with computers. No system is going to be perfect, but is it better than all humans?
I say again: human and human is not too bad. it's what we mostly have now.

machine and machine is ideal. we are decades away from that. we can't just outlaw manually driven cars so soon (economics, not science, limits this).

but the hybrid of man and machine 'thinking' on the same road, that's the worst and the hardest to program for.

that's all I'm saying.
 
self-driving on real-world roads is such an asymptotic problem. we are very close, but there aren't enough 'nines' yet to make it safe enough to allow hybrids, let alone all Level 5s.

you can't argue away corner cases. corner cases can mean lives lost. that's the rub. you can't just 'punt and reboot' if you come to a case you don't understand.

that's the real limit: understanding. we have no clue how to make computers understand. we can have them run numerical algorithms and do stats and probability, but they have not achieved even the slightest level of what we would consider cognition.

(I've been reading lots of Dan Dennett lately; smart guy, interesting thoughts on thought itself, but it's clear that we still have no idea how to software-ize human thought in silicon)

driving on real roads (not simple highways) means you need to understand.
You keep saying this, but with no real evidence or clear, coherent arguments to back it up. What, precisely, does the car need to "understand", and in what way does it need to understand it, to be able to drive?

Almost all aspects of driving are essentially a mechanical process. You get some visual input, apply a set of rules to determine a response, and then execute that response. If driving wasn't like this, then even humans would have trouble driving. Where would you be if you had to consciously reason out every single action? For example:

"hmm, the traffic signal is red, but I'm going to turn right .. let's see, as long as there are no cars coming I can do that .. are there cars coming? hmm .. no, ok, I can turn right. Let's see, I put on the turn signal by clicking this stalk, and then .. ah yes, I need to rotate the steering wheel just enough .. ok, the car is turning, lets be careful not to turn the wheel too much, and I'd better move my foot a little to adjust the speed."

Of course that's not the way we drive (or anything else for that matter). What happens is we learn to drive which creates the mechanical ruleset in our minds, which then takes over the day-to-day driving, from the visual recognition of signage and road conditions to the individual muscle actions to trigger responses. Sure, the learning process involves cognition and reasoning, that's why learning takes time, but that process is establishing the ruleset in our minds. And guess what? That's pretty much what Tesla are doing when the engineers there reason about and develop the AI/NN rules and processes around those rules. They are providing the cognition for the car that builds the ruleset.

Now, of course, not all driving is like that. When something unexpected or unusual happens, the ruleset in our brain fails. This triggers an alert response: "Help, what should I do?" .. and the conscious brain takes over (or at least, should take over) and then reasons about the situation. And guess what, this is what the car does! If it gets out of its depth, it shouts to the human driver to take over.

The question, then, is how extensive does this ruleset have to be to handle the day-to-day mundane driving tasks? A ruleset that has to ask the driver for help on every single turn is clearly not enough, but the point is that no matter how large the ruleset has to be, it does not require "understanding" in the sense you keep using it. The car already has that at its disposal; it's called the human driver.

The challenge, then, is not to build a car that can reason its way out of any situation, but to provide sufficient rules so that the car knows when to shout for help, without doing it so often that it's more bother for the human than manually driving. You don't want the car to drive off a cliff, or blindly knock a pedestrian over on a crosswalk. But, vital though these are, they are still essentially mechanical. They do not require conscious cognition, only recognition ("X is a pedestrian located at a certain point in space and moving in a certain direction").

You should note that I am not claiming the FSD task isn't a daunting one; it clearly is, and Tesla still have a ways to go. Have they made good progress? Absolutely. Look at how solidly the car builds out its world view and places cars and pedestrians into that view. Even a few years ago that would have blown everyone's minds. Do they have a long way to go? Yep. Will they get there? Well, even humans have a finite ruleset; we allow people to drive when that ruleset has reached a certain level (it's called a driving test). When will the car reach that point? No idea, and I don't think Tesla know either, but the ruleset is finite; otherwise no human would ever pass their driving test.
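
To illustrate the "mechanical ruleset plus shout for help" loop described in this post, here is a minimal sketch. All names, the state layout, and the confidence threshold are invented for illustration, not anything from Tesla's software.

```python
# Sketch of the "ruleset + shout for help" loop described above.
# Names, state layout, and the confidence threshold are invented.

from typing import Optional

def perceive() -> dict:
    # stand-in for the vision stack: returns a world state + confidence
    return {"signal": "red", "pedestrian_ahead": False, "confidence": 0.97}

def ruleset(state: dict) -> Optional[str]:
    # the learned/engineered "mechanical" rules
    if state["pedestrian_ahead"]:
        return "brake"
    if state["signal"] in ("red", "yellow"):
        return "stop"
    if state["signal"] == "green":
        return "cruise"
    return None  # no rule matched: out of the ruleset's depth

def drive_step(confidence_floor: float = 0.9) -> str:
    state = perceive()
    action = ruleset(state)
    # out of its depth, or unsure of what it sees? shout for the human
    if action is None or state["confidence"] < confidence_floor:
        return "request_driver_takeover"
    return action

print(drive_step())  # -> stop
```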
 
You keep saying this, but with no real evidence or clear, coherent arguments to back it up. What, precisely, does the car need to "understand", and in what way does it need to understand it, to be able to drive?

Almost all aspects of driving are essentially a mechanical process. You get some visual input, apply a set of rules to determine a response, and then execute that response. If driving wasn't like this, then even humans would have trouble driving. Where would you be if you had to consciously reason out every single action? For example:

"hmm, the traffic signal is red, but I'm going to turn right .. let's see, as long as there are no cars coming I can do that .. are there cars coming? hmm .. no, ok, I can turn right. Let's see, I put on the turn signal by clicking this stalk, and then .. ah yes, I need to rotate the steering wheel just enough .. ok, the car is turning, lets be careful not to turn the wheel too much, and I'd better move my foot a little to adjust the speed."

Of course that's not the way we drive (or anything else for that matter). What happens is we learn to drive which creates the mechanical ruleset in our minds, which then takes over the day-to-day driving, from the visual recognition of signage and road conditions to the individual muscle actions to trigger responses. Sure, the learning process involves cognition and reasoning, that's why learning takes time, but that process is establishing the ruleset in our minds. And guess what? That's pretty much what Tesla are doing when the engineers there reason about and develop the AI/NN rules and processes around those rules. They are providing the cognition for the car that builds the ruleset.

Now, of course, not all driving is like that. When something unexpected or unusual happens, the ruleset in our brain fails. This triggers an alert response: "Help, what should I do?" .. and the conscious brain takes over (or at least, should take over) and then reasons about the situation. And guess what, this is what the car does! If it gets out of its depth, it shouts to the human driver to take over.

The question, then, is how extensive does this ruleset have to be to handle the day-to-day mundane driving tasks? A ruleset that has to ask the driver for help on every single turn is clearly not enough, but the point is that no matter how large the ruleset has to be, it does not require "understanding" in the sense you keep using it. The car already has that at its disposal; it's called the human driver.

The challenge, then, is not to build a car that can reason its way out of any situation, but to provide sufficient rules so that the car knows when to shout for help, without doing it so often that it's more bother for the human than manually driving. You don't want the car to drive off a cliff, or blindly knock a pedestrian over on a crosswalk. But, vital though these are, they are still essentially mechanical. They do not require conscious cognition, only recognition ("X is a pedestrian located at a certain point in space and moving in a certain direction").

You should note that I am not claiming the FSD task isn't a daunting one; it clearly is, and Tesla still have a ways to go. Have they made good progress? Absolutely. Look at how solidly the car builds out its world view and places cars and pedestrians into that view. Even a few years ago that would have blown everyone's minds. Do they have a long way to go? Yep. Will they get there? Well, even humans have a finite ruleset; we allow people to drive when that ruleset has reached a certain level (it's called a driving test). When will the car reach that point? No idea, and I don't think Tesla know either, but the ruleset is finite; otherwise no human would ever pass their driving test.
Everyone who works on the problem for a long period of time seems to disagree with you. There's a reason Elon says solving real-world AI is necessary for FSD.

Why was no one ever able to write a rule set for image recognition that came even close to human performance? Surely there must be a finite set of rules for identifying the relevant objects?
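
The image-recognition point is easy to make concrete: even a trivial hand-written rule falls apart immediately. A toy illustration (NumPy only; the task framing and thresholds are invented):

```python
# Toy: why hand-written rules fail at image recognition.
# Thresholds and task framing are invented for this example.

import numpy as np

def is_stop_sign_by_rule(img: np.ndarray) -> bool:
    """Naive rule: 'a stop sign is mostly red pixels'."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    red_fraction = np.mean((r > 150) & (g < 100) & (b < 100))
    return red_fraction > 0.3

# A bright brake light satisfies the "rule", while a stop sign at dusk
# fails it. Every patch (shape? text? context?) adds rules that interact
# with the old ones, which is why hand-coded vision plateaued and
# learned models took over.
dusk_sign = np.full((64, 64, 3), (90, 40, 40), dtype=np.uint8)
brake_light = np.full((64, 64, 3), (220, 30, 30), dtype=np.uint8)
print(is_stop_sign_by_rule(dusk_sign))    # -> False (misses a real sign)
print(is_stop_sign_by_rule(brake_light))  # -> True (false alarm)
```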
 