It's happening....FSD v9 to be released (to current beta testers)

I'm not sure where I sit on this atm .. at first I always felt the big problem was the basic computer vision .. finding roads and cars etc. This seems close to solved, as the car recognition is damn good so far as I can see from the videos. However, there is still a lot of work on the car understanding what it sees in terms of expected behavior, and all the myriad variations on signage and lane markings. Unlike the vision problem, this seems to me to be a linear problem (expert system), which means we are likely to see slow and steady progress as the rule set grows, rather than any big breakthrough. What this implies for the release schedule I do not know. Possibly it means that FSD is less of an "if" and more of a "when" than before, but it's probably some time away still.

Correct....computer "Vision" has been solved and Tesla's system is probably superior to humans at this point now that true 3D vision is out and working properly. Car has no issue determining what is a road, where it should drive, road signs/traffic lights etc. Just recognizing the world in general.

But that's not the issue, nor has it ever been, IMO. The issue is the true magic of decision making, and the current system is still pretty dumb. I'm pretty sure there are no decision-making neural nets, or at least they are very basic. Most of the decision making is probably thousands of if/then statements. This will be the true problem to solve.

You can really see this happening in one of the recent FSD Beta videos (can't remember exactly which one): the car was stuck behind slow traffic and could not determine whether it should go around it or not (it incorrectly tried to go around it into oncoming traffic).

Small cases like this, which are second nature to us, happen dozens of times per drive, but are almost impossible to solve for a non-sentient computer program. How do you program for this case? When should the car stay in traffic and go with it, and when should it go around a car, as it would in light traffic? This is almost impossible to program with if/then statements, and neural nets are great for vision and very narrow decision making, but suck at the broad AI that decision making in a driving environment requires.

This type of AI is a long way off, and would require new advancements in the field that actually come close to machine sentience. The car will literally need to be AWARE of ITSELF and understand everything else around it and what's happening. For the example above, there is no programming logic that can explain to the car: we are in heavy traffic, there is a long line of cars ahead of us, and we should not attempt to go around it. Right now they are just programming in corner cases one by one, and this will probably get close enough eventually for FSD, but nowhere near Level 5.
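To make the "thousands of if/then statements" point concrete, here is a rough sketch (totally made up by me, nothing to do with Tesla's actual code) of what one hand-written rule for the "go around slow traffic?" decision might look like. Every new situation forces another branch, and the branches start fighting each other:

```python
# Hypothetical sketch of rule-based planning, not Tesla's actual code.
# Each new situation forces another hand-written branch; the rule set
# grows without bound and the rules start to contradict each other.
from dataclasses import dataclass

@dataclass
class Scene:
    lead_car_speed_mph: float   # speed of the car directly ahead
    cars_queued_ahead: int      # how many cars are stacked up ahead
    oncoming_gap_s: float       # seconds until the next oncoming car
    speed_limit_mph: float

def should_overtake(s: Scene) -> bool:
    # Rule 1: don't bother if traffic is moving near the limit.
    if s.lead_car_speed_mph > 0.8 * s.speed_limit_mph:
        return False
    # Rule 2: a long queue means congestion, not one slow car -- stay put.
    if s.cars_queued_ahead >= 3:
        return False
    # Rule 3: never pull into oncoming traffic without a big gap.
    if s.oncoming_gap_s < 10.0:
        return False
    # ...and so on: school zones, double yellows, cyclists, weather,
    # a queue that is actually a turn lane, a "slow car" that is parking...
    return True

print(should_overtake(Scene(15, 5, 30, 45)))  # False: it's a traffic jam
print(should_overtake(Scene(15, 1, 30, 45)))  # True: single slow car
```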
 
that true 3D vision is out and working properly. Car has no issue determining what is a road, where it should drive, road signs/traffic lights etc.
computer "Vision" has been solved and Tesla's system is probably superior to humans at this point

Vision is solved. ;) This scene could confuse any human!
[Attached screenshot]
 
Correct....computer "Vision" has been solved and Tesla's system is probably superior to humans at this point now that true 3D vision is out and working properly. Car has no issue determining what is a road, where it should drive, road signs/traffic lights etc. Just recognizing the world in general.
I'm going to miss Semi Dance when I move to 3D from 2D 😞


Seriously, recognition of buses, trucks, etc. alongside the car is so bad with the production releases. I can't watch any more beta videos, but are there videos showing this behavior actually being fixed?
 
Vision is solved. ;) This scene could confuse any human!
Wow. That is such a great example of where we are with computer vision/machine learning.
Yes, we can detect a car, and estimate pose, and maybe even speed and distance. We're well on our way in object detection, with things that we can easily have a human label in a single image frame.
But we have zero understanding of context, and there's no magic to get computers to learn that.
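To be fair to the detection half: off-the-shelf tooling is genuinely good at it now. A minimal sketch using torchvision's pretrained Faster R-CNN (my example, not Tesla's stack) shows exactly what you get per frame - boxes, class labels, confidence scores - and nothing about whether a detected "car" makes sense in context:

```python
# Per-frame object detection with torchvision (illustrative only).
# Note what the model gives you: boxes, class labels, scores -- and
# nothing about whether the detection is plausible in this scene.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained on COCO
model.eval()

# Stand-in for a real camera frame (use a real image to see detections;
# random noise will usually produce none, but the code runs either way).
frame = torch.rand(3, 480, 640)
with torch.no_grad():
    (pred,) = model([frame])

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.5:
        # COCO class 3 is "car"; the detector can't tell a real car
        # from a car printed on the back of a trailer.
        print(int(label), [round(v, 1) for v in box.tolist()], float(score))
```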
 
Correct....computer "Vision" has been solved and Tesla's system is probably superior to humans at this point now that true 3D vision is out and working properly. Car has no issue determining what is a road, where it should drive, road signs/traffic lights etc. Just recognizing the world in general.

But thats not the issue nor has it ever been IMO. The issue is the true magic of decision making, and the current system is still pretty dumb. Im pretty sure there is no decision neural nets, or at least they are very basic. Most of the decision making is probably thousands of if/then statements. This will be the true problem to solve.

You can really see this happening in one of the recent FSD Beta videos (cant remember exactly which one), but it was essentially behind slow traffic and the car could not determine whether it should go around it or not (it incorrectly tried to go around it into on-coming traffic).

Just this small case happens dozens of times that are second nature to us, but almost impossible to solve for a non-sentient computer program. How do you program for this case? When should the car stay in traffic and go with it, or go around a car like in light traffic? This is almost impossible to program with if/then statements, and Neural nets are great for vision and very narrow case decision making, but suck at broad AI which is what decision making in a driving environment is.

This type of AI is a long way off, and would require some new advancements in the field that actually come close to machine sentience to solve. The car will literally need to be AWARE of ITSELF and understand everything else around its and whats happening. For the example above, there is no programming logic that can explain to the car, oh we are in heavy traffic, there is a long line of cars ahead of us and we should not attempt to go around it. Right now they are just programing in corner cases one by one, and this will probably get it close enough eventually for FSD, but no where near level 5.
Dude you nailed my thoughts exactly! The train of thought that since humans drive with just two eyes, surely a car with 8+ cameras and a powerful computer must be better, is extremely flawed. There are multiple flaws in this thinking, but the biggest is what you said - a machine that can make the decisions needed to actually autonomously drive in even the most benign real-world situations would have to be essentially sentient (replicate a human brain). Pretty sure we are a LONG way off from that - and I'm also positive cars wouldn't even make the top-ten list of optimal use cases for a self-aware, sentient machine.

That's why I scoff at those who believe Level 3 and above is in any way possible in the next 5 to 10 years. It defies common sense.
 
Wow. That is such a great example of where we are with computer vision/machine learning.
Yes, we can detect a car, and estimate pose, and maybe even speed and distance. We're well on our way in object detection, with things that we can easily have a human label in a single image frame.
But we have zero understanding of context, and there's no magic to get computers to learn that.
Credit to @mhan00

If they used their depth map, I suppose they might have a chance of avoiding embarrassment like this, sometimes. Someone could still make a box with a car painted on it though - a less egregious error I suppose.

HOW WILL THEY DO FUSION OF DEPTH MAP AND VISION??? Haha.
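For what it's worth, a crude version of that fusion isn't magic. A toy sketch (my own illustration, all numbers invented): if the depth inside a detected "car" box fits a single flat plane, it's probably a picture of a car rather than a car:

```python
# Toy depth/vision consistency check (illustration, not Tesla's method).
# A real car has meaningful depth structure inside its bounding box;
# a car painted on a flat billboard is one plane.
import numpy as np

def looks_flat(depth_map: np.ndarray, box: tuple, tol_m: float = 0.3) -> bool:
    """box = (x0, y0, x1, y1); tol_m is an assumed planarity threshold."""
    x0, y0, x1, y1 = box
    patch = depth_map[y0:y1, x0:x1]
    # Fit a plane z = ax + by + c to the depth patch via least squares.
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(patch.size)])
    coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    residual = patch.ravel() - A @ coeffs
    return residual.std() < tol_m  # nearly planar => likely a painted car

# A billboard 20 m away: constant depth, perfectly planar.
billboard = np.full((100, 200), 20.0)
print(looks_flat(billboard, (50, 20, 150, 80)))   # True -> suspicious

# A "real car": add bumpy depth structure inside the box.
real = billboard + np.random.rand(100, 200) * 2.0
print(looks_flat(real, (50, 20, 150, 80)))        # False -> solid object
```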
 
Dude you nailed my thoughts exactly! The train of thought that since humans drive with just two eyes, surely a car with 8+ cameras and a powerful computer must be better, is extremely flawed. There are multiple flaws in this thinking, but the biggest is what you said - a machine that can make the decisions needed to actually autonomously drive in even the most benign real-world situations would have to be essentially sentient (replicate a human brain). Pretty sure we are a LONG way off from that - and I'm also positive cars wouldn't even make the top-ten list of optimal use cases for a self-aware, sentient machine.
The problem with that is it doesn't really map to how humans work. Ever driven to work while thinking about what you will do when you get there? Bet you hardly noticed what you were doing on the drive. Sure, if something unusual or unexpected happens your attention snaps back, but during that daily hum-drum drive, your brain is pretty much driving on its own "autopilot".

This is really what Tesla are aiming at .. having the car do that humdrum bit. This is very different from reasoning about novel situations, which is indeed totally beyond anything we can get from computers or AI today (or any time soon).

And, to be fair, a car doing the humdrum bit is probably a good deal safer than a human .. it won't drift out of lane, change lanes into a blind spot, stop watching ahead while texting, or run a red light, etc. etc.

There tends to be a lot of focus on how a car won't be able to do the "smart" things. True. But humans are damn awful at doing the dull boring things .. and in fact the vast majority of the time it's those failures that cause accidents. So yeah, the car is going to act stupid when a human could easily figure stuff out, but it won't fall asleep at the wheel. Sounds like a decent trade-off to me.
 
This is really what Tesla are aiming at .. having the car do that humdrum bit. This is very different from reasoning about novel situations, which is indeed totally beyond anything we can get from computers or AI today (or any time soon).
There tends to be a lot of focus on how a car won't be able to do the "smart" things. True. But humans are damn awful at doing the dull boring things .. and in fact the vast majority of the time it's those failures that cause accidents. So yeah, the car is going to act stupid when a human could easily figure stuff out, but it won't fall asleep at the wheel. Sounds like a decent trade-off to me.
The problem is, if you no longer make the humans do the dull boring things that keep them interfaced with the system, they get even worse at task switching back and doing the smart things. If your autonomous car is great at the humdrum things for 100 hours, but drives head-on into a pole every 101st hour, there's a very good chance the human will not be mentally present enough to fix this. This is a huge topic in aviation, where autopilots really can be used for thousands of hours with no failures, and was a root cause of the Air France 447 crash, where the AP handed a wounded (but perfectly flyable) aircraft off to the human suddenly, and that human had no real context of what was going on.

If you really believe the self-driving is safer than a human, then the fact that Tesla has AP jail, and turns off AP if you stop paying attention, seems totally insane. They are disabling a feature that is safer than a human, just because the human was distracted? I thought this was the exact thing it was supposed to be good at.

But it's not insane when you realize it's only safer with a fully alert, non-distracted human at the wheel. And average humans are pretty safe, actually - 1:2M miles by Tesla's own numbers. The fact that your system can only be used with a fully alert, at-the-ready human means you're nowhere near that on your own, or you'd happily continue driving even with a less-than-alert human, because turning off would actually be a reduction in safety.

And, to be fair, a car doing the humdrum bit is probably a good deal safer than a human .. it won't drift out of lane, change lanes into a blind spot, stop watching ahead while texting, or run a red light, etc. etc.
We don't know yet how often this will happen. We do know that current AP does in fact run red lights and drift out of lanes much more often than average humans. So we have a long way to go before we can claim this.

But humans are damn awful at doing the dull boring things .. and in fact the vast majority of the time it's those failures that cause accidents.
A lot of our simpler technologies have already made a huge difference here. AEB, LDW, blind spot monitors. Even Tesla's numbers show a 2X reduction in accidents due to these. You don't need anything like FSD to make a big dent here, and in fact, it might not be possible to double that again until you can take on the whole driving task, end to end.

Once cars are good enough to drive themselves 100,000 miles without an accident, we're going to see some serious human factors challenges, as a reasonable person could ride in the car for 10K miles, never see an issue, and decide they can completely check out mentally while letting the car drive, but in reality, that car is way less safe than if they operated it fully manually.
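To put a rough number on that intuition (my arithmetic, assuming the 1:100,000 failure rate above and independent failures):

```python
# Rough back-of-envelope on the "10K clean miles" intuition.
# Assumes failures are independent at an assumed 1-per-100K-mile rate.
import math

failure_rate = 1 / 100_000          # assumed failures per mile
miles_ridden = 10_000

p_no_failure = math.exp(-failure_rate * miles_ridden)  # Poisson P(0 events)
print(f"P(zero failures in {miles_ridden} miles) = {p_no_failure:.1%}")
# -> about 90.5%: most riders' lived experience says the car never fails,
#    even though it is far less reliable than an attentive human.
```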
 
I see that my car still considers rain-sensing wiper functionality to be "Beta", several years after this capability was released...

Wonder how long FSD will be "Beta"?
did gmail EVER exit actual beta? I stopped paying attention, but gmail was (maybe still is) in a perma state of beta.

how do you define 'done' for fsd? definition of done (DoD) is challenging for something that needs a bunch of nines in order to not kill people on the road.

'well, it did ok for 10k miles but then, hit this dog on the road'. is that ok to release it and remove the beta moniker?

life and death and DoD. really hard to know when to call it a trustable full public release.
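As a sense of scale for "a bunch of nines" (using the commonly cited ballpark of roughly one fatal crash per 100 million vehicle miles for human drivers - my figure, not from this thread):

```python
# What "a bunch of nines" means per mile (illustrative numbers:
# roughly one fatal crash per 100M vehicle miles for human drivers).
human_fatal_per_mile = 1 / 100_000_000

per_mile_reliability = 1 - human_fatal_per_mile
print(f"{per_mile_reliability:.10f}")   # 0.9999999900 -- eight nines
# Just *matching* the average human on fatalities needs ~eight nines
# per mile; "definition of done" has to decide how many more you want.
```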
 
The problem is, if you no longer make the humans do the dull boring things that keep them interfaced with the system, they get even worse at task switching back and doing the smart things.
These are all good points and I generally agree. However, I wasn't referring to what the car can do now, but what will happen as FSD and similar systems continue to improve .. as they will, though we can debate endlessly about the pace.
 
but what will happen as FSD and similar systems continue to improve .. as they will,
Agree, which is why I ended my discussion with the fact that this could actually increase the challenge, when systems are 1:100,000 miles instead of 1:100 like they are today. For at least highway AP, we're already drifting towards a reliability that lulls humans into a false sense of security and causes less attention to be paid, making them less reliable backups, and allowing the system to rely on them less. We may see a stagnation in overall improvement- as the autonomy gets better, humans may devolve in their capabilities as fast or faster, and there could be a valley where overall safety decreases until the autonomy is very close to manual human performance.
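That valley is easy to caricature with a toy model. All the numbers below are assumptions I made up for illustration: net failures = system failures times the fraction the human fails to catch, with human vigilance decaying as the system gets more reliable:

```python
# Toy model of the "safety valley" -- all numbers are assumptions made
# up for illustration, not measurements.
# net failure rate = system failure rate * P(human misses the failure),
# where assumed human vigilance decays as system failures become rarer.

BASE_RATE = 1 / 100   # today's assumed system failure rate (per mile)
BASE_MISS = 0.001     # assumed miss probability at that failure rate
DECAY = 1.2           # assumed: vigilance decays faster than reliability grows

for sys_rate in [1/100, 1/1_000, 1/10_000, 1/100_000, 1/1_000_000]:
    miss = min(1.0, BASE_MISS * (BASE_RATE / sys_rate) ** DECAY)
    net = sys_rate * miss
    print(f"system 1:{1/sys_rate:>9,.0f}  miss {miss:6.1%}  net 1:{1/net:>9,.0f}")

# Output walks down into a valley and back out: net safety gets *worse*
# as the system improves from 1:100 to 1:10,000, because the assumed
# human backstop degrades faster than the autonomy improves.
```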
 
Agree, which is why I ended my discussion with the fact that this could actually increase the challenge, when systems are 1:100,000 miles instead of 1:100 like they are today. For at least highway AP, we're already drifting towards a reliability that lulls humans into a false sense of security and causes less attention to be paid, making them less reliable backups, and allowing the system to rely on them less. We may see a stagnation in overall improvement- as the autonomy gets better, humans may devolve in their capabilities as fast or faster, and there could be a valley where overall safety decreases until the autonomy is very close to manual human performance.

They can take lessons from X-ray machines used for security screening. Those machines randomly insert an image of a threat to make sure the operator catches it. If the operator catches it, a message comes up saying that it was a test and that the bag needs to be rescreened, just in case the inserted threat image covered up an actual threat. If they don't catch it, it sets off alarms.

I guess for cars they would need to randomly swerve into oncoming traffic to make sure the driver takes manual control. I am only joking about needing to swerve into traffic. I could see them trying to implement such a system; I just don't see how they would do it, considering that if something goes wrong it could result in a wreck.

I do agree this is going to become a major issue until the car can fully drive itself. If they reach 1 issue in 100K miles, that is 8 years of driving for most people; it is very unlikely that after 8 years of the car doing a great job, the driver will be ready to take control when needed.
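A benign version of that X-ray-style spot check is at least imaginable. Here's a sketch of the idea (entirely hypothetical - nothing like this ships in any car I know of): instead of a dangerous fake failure, schedule random low-stakes takeover drills and score the response:

```python
# Hypothetical "threat image projection" analog for driver monitoring
# (the X-ray trick above, minus the danger): issue random low-stakes
# takeover drills and score whether the driver responds in time.
import random
import time
from typing import Callable, Optional

DRILL_PROB = 0.2          # assumed: chance of a drill on any given check
RESPONSE_LIMIT_S = 3.0    # assumed: must take the wheel within 3 seconds

def maybe_run_drill(prompt_driver: Callable[[str], None],
                    driver_took_wheel: Callable[[], bool]) -> Optional[bool]:
    """Returns True/False for pass/fail, or None if no drill was run."""
    if random.random() > DRILL_PROB:
        return None
    prompt_driver("Attention check: please take the wheel now.")
    start = time.monotonic()
    while time.monotonic() - start < RESPONSE_LIMIT_S:
        if driver_took_wheel():
            return True       # passed: keep full AP privileges
        time.sleep(0.05)
    return False              # missed the drill: escalate nags / AP jail

# Demo with stand-in callbacks: an always-attentive driver.
print(maybe_run_drill(print, lambda: True))  # None (no drill) or True
```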
 
We may see a stagnation in overall improvement- as the autonomy gets better, humans may devolve in their capabilities as fast or faster, and there could be a valley where overall safety decreases until the autonomy is very close to manual human performance.
It's very difficult for anyone to predict how this will work itself out. First, imho we already have driver inattention issues even with no driver assist at all. Texting. Fiddling with the audio system. Daydreaming. In fact, it's ironic, but with AP engaged and the car's nag screens, drivers are possibly paying more attention while on AP than while manually driving down the road.

There will of course always be the idiots who game the system .. the guy who watched a DVD and got killed etc. The real question comes down to simple statistics .. on aggregate, does a given system make the roads safer or not? When anti-lock brakes were being tested, some argued (quite seriously) that it would encourage people to drive closer and faster, and make accidents more probable. Didn't happen. Similar arguments have been used against most new safety systems over the years.

Yes, as the car takes a more and more active role in driving, drivers are going to deteriorate in both ability and responsibility. But I think you are way over-estimating the current abilities of most drivers today. How many drivers can really respond well in an emergency? Know exactly how much braking to apply? Know if it's possible to swerve safely given the road conditions and other cars around them? Of course, the answer is almost no one.

The fact is, the bar is pretty low for a car being safer than a human driver. Sure, it will mess up occasionally, sometimes with catastrophic consequences. But each such incident can be analyzed, rectified, and a fix deployed to an entire fleet of cars in a short time. It's leverage like this that will tip the statistics toward semi-autonomous systems, regardless of how fast human abilities (such as they are) might atrophy.

Of course, we are all speculating until we start seeing some serious deployment. The current FSD beta really doesn't tell us much, since the drivers are selected to remain alert at all times, and are disengaging any time something looks risky. But regardless of opinions on pros/cons, it's going to be an interesting time ahead as the technology evolves.
 
The fact is, the bar is pretty low for a car being safer than a human driver.
What are we comparing human drivers to? If the bar were actually low we'd have autonomous vehicles after the billions of dollars that have already been invested.
The real question comes down to simple statistics .. on aggregate, does a given system make the roads safer or not?
Yep. Statistically where do you think unsupervised FSD beta is?
 
Correct....computer "Vision" has been solved and Tesla's system is probably superior to humans at this point now that true 3D vision is out and working properly. Car has no issue determining what is a road, where it should drive, road signs/traffic lights etc. Just recognizing the world in general.

But thats not the issue nor has it ever been IMO. The issue is the true magic of decision making, and the current system is still pretty dumb. Im pretty sure there is no decision neural nets, or at least they are very basic. Most of the decision making is probably thousands of if/then statements. This will be the true problem to solve.

You can really see this happening in one of the recent FSD Beta videos (cant remember exactly which one), but it was essentially behind slow traffic and the car could not determine whether it should go around it or not (it incorrectly tried to go around it into on-coming traffic).

Just this small case happens dozens of times that are second nature to us, but almost impossible to solve for a non-sentient computer program. How do you program for this case? When should the car stay in traffic and go with it, or go around a car like in light traffic? This is almost impossible to program with if/then statements, and Neural nets are great for vision and very narrow case decision making, but suck at broad AI which is what decision making in a driving environment is.

This type of AI is a long way off, and would require some new advancements in the field that actually come close to machine sentience to solve. The car will literally need to be AWARE of ITSELF and understand everything else around its and whats happening. For the example above, there is no programming logic that can explain to the car, oh we are in heavy traffic, there is a long line of cars ahead of us and we should not attempt to go around it. Right now they are just programing in corner cases one by one, and this will probably get it close enough eventually for FSD, but no where near level 5.


It baffles me how this is such a hard problem, when games like GTA, Gran Turismo, and Cities: Skylines can all easily mimic different aspects of driving.
 
But we have zero understanding of context, and there's no magic to get computers to learn that.

Here is another example: Tesla's vision thinks the moon is a yellow traffic light, something human vision would never do.
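It's easy to see how a naive cue gets you there. A toy illustration (mine, nothing like Tesla's actual pipeline): score any bright, roundish, yellow-hued blob as a candidate traffic light, and a low hazy moon scores just as well as an amber signal:

```python
# Toy "yellow light" detector (illustration only, not Tesla's pipeline):
# a bright, yellow-hued, roundish blob scores as a candidate signal.
# A low, hazy moon is bright, yellow-hued, and round -- so it matches.
import colorsys

def yellow_light_score(rgb: tuple, roundness: float) -> float:
    """rgb in 0..255, roundness in 0..1 (1 = perfect circle)."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    is_yellow_hue = 0.10 < h < 0.18      # ~36-65 degrees: yellow band
    return v * roundness if is_yellow_hue else 0.0

print(yellow_light_score((255, 200, 40), 0.98))   # amber signal: high score
print(yellow_light_score((230, 200, 130), 0.95))  # hazy moon: also high!
print(yellow_light_score((90, 90, 200), 0.98))    # blue sign: zero
```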