Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

New AU FSD vs USA FSD post June 21

It's quite old footage, so it seems more probable that radar data caused the alarm in this particular incident. If the beta testers offer a window into the progress thus far, I suspect people are overly optimistic about the capabilities of the cameras and NN. The image resolution is severely degraded at night and in poor weather.
Happy to be proved wrong, but I'm just looking at the current trajectory of development, i.e. the past 2 years: what exists now, not false promises. I have grown weary of the ever-promised next release that "will blow your mind" and in reality delivers only slight improvement in one area and a bunch of bugs in another. I do appreciate the magnitude of the task for the developers, hence my skepticism and caution. I guess we shall have to wait and evaluate similar incidents with non-radar-equipped / radar-disabled cars to see if it turns out to be a better system for the same type of incident in the video clip.
 
I mean, it's fairly simple logic:

Humans seem to drive OK with 2 eyes and a few mirrors.

Your Tesla has 8 cameras, far better positioned than the human eye.
Two eyes that can blink, on a head that can move, inside a car that can wipe snow and dirt from the windscreen, connected to a brain that can operate an arm to pull down the sun visor.
I worry that cameras will fail in direct sunlight or if obscured. How many hours into a robotaxi shift before a bird poos on one of the 8 cameras, or a splash of mud obscures the view?
 
Two eyes that can blink, on a head that can move, inside a car that can wipe snow and dirt from the windscreen, connected to a brain that can operate an arm to pull down the sun visor.
I worry that cameras will fail in direct sunlight or if obscured. How many hours into a robotaxi shift before a bird poos on one of the 8 cameras, or a splash of mud obscures the view?
The human eyes can also see any object, situation, or sign and cause the brain to react to it. Including kangaroos. Those eyes can do that in any country whether driving on the left or right, in a tunnel or a back lane.
Those same eyes can also see sales gimmicks, but the brain has a habit of over-ruling at times.
 
It's quite old footage, so it seems more probable that radar data caused the alarm in this particular incident. If the beta testers offer a window into the progress thus far, I suspect people are overly optimistic about the capabilities of the cameras and NN. The image resolution is severely degraded at night and in poor weather.
Happy to be proved wrong, but I'm just looking at the current trajectory of development, i.e. the past 2 years: what exists now, not false promises. I have grown weary of the ever-promised next release that "will blow your mind" and in reality delivers only slight improvement in one area and a bunch of bugs in another. I do appreciate the magnitude of the task for the developers, hence my skepticism and caution. I guess we shall have to wait and evaluate similar incidents with non-radar-equipped / radar-disabled cars to see if it turns out to be a better system for the same type of incident in the video clip.
Optimistic in timelines, yes, that's absolutely true. People tend to overestimate in the short term and underestimate in the long term. To that extent, the absolute capability limit of the current HW3 camera suite is not even close to being realised. The performance scales over time given better and better training data. This requires exponentially more compute power; for example, to get 2% better performance it may take 10x more compute. That's why Tesla is again thinking far into the future with the Dojo supercomputer, so they don't encounter a compute shortage.

The automated driving task is one of the biggest, most audacious challenges humans have ever tackled. Nobody understands this more than the AP engineers. Ironically, the armchair experts severely underestimate the complexity and yet continue to be bearish on it ever being possible. The engineers actually solving the problem see it as a much larger challenge than even the biggest naysayers do, and yet continue to work on it nonetheless.

The human eyes can also see any object, situation, or sign and cause the brain to react to it. Including kangaroos. Those eyes can do that in any country whether driving on the left or right, in a tunnel or a back lane.
Those same eyes can also see sales gimmicks, but the brain has a habit of over-ruling at times.
Humans on average are actually very bad drivers. Some 50 million people die or are injured on the roads every year, and up to 99% of those injuries are due to human error. Think about that number: not only the 50 million direct victims but their friends and families. That's easily hundreds of millions of people impacted by your perfect-sensing human eyes.

It's not about whether FSD lives up to your expectations. Believe it or not, not everything in this world revolves around boomers' finicky, arbitrary feelings. It's a question of data. Once the Tesla fleet is able to collect x million km driven in shadow mode, where it had a high degree of performance (99.999%), it would be criminally negligent not to release it to that part of the world.
 
Optimistic in timelines, yes, that's absolutely true. People tend to overestimate in the short term and underestimate in the long term. To that extent, the absolute capability limit of the current HW3 camera suite is not even close to being realised. The performance scales over time given better and better training data. This requires exponentially more compute power; for example, to get 2% better performance it may take 10x more compute. That's why Tesla is again thinking far into the future with the Dojo supercomputer, so they don't encounter a compute shortage.

The automated driving task is one of the biggest, most audacious challenges humans have ever tackled. Nobody understands this more than the AP engineers. Ironically, the armchair experts severely underestimate the complexity and yet continue to be bearish on it ever being possible. The engineers actually solving the problem see it as a much larger challenge than even the biggest naysayers do, and yet continue to work on it nonetheless.


Humans on average are actually very bad drivers. Some 50 million people die or are injured on the roads every year, and up to 99% of those injuries are due to human error. Think about that number: not only the 50 million direct victims but their friends and families. That's easily hundreds of millions of people impacted by your perfect-sensing human eyes.

It's not about whether FSD lives up to your expectations. Believe it or not, not everything in this world revolves around boomers' finicky, arbitrary feelings. It's a question of data. Once the Tesla fleet is able to collect x million km driven in shadow mode, where it had a high degree of performance (99.999%), it would be criminally negligent not to release it to that part of the world.
It seems posting without attempted belittlement is beyond you. You may want to research what such behaviour says about you.
 
>>The automated driving task is one of the biggest, most audacious challenges humans have ever tackled. Nobody understands this more than the AP engineers. Ironically, the armchair experts severely underestimate the complexity and yet continue to be bearish on it ever being possible. The engineers actually solving the problem see it as a much larger challenge than even the biggest naysayers do, and yet continue to work on it nonetheless. <<

I would certainly not call myself an armchair "expert", but I would personally assess the difficulty as possibly being greater than the software engineers themselves think.
Any autonomy getting close to "real" FSD has to be able to cope with all the so-called "edge" cases. The phrase suggests they are infrequent, but in actual practice they are encountered almost continually in real life.
The brain is capable of summing up a myriad of situations: a truck ahead beginning to reverse, a car pulling up with the driver looking over his shoulder in preparation for opening his door in front of you, opposing vehicles on a single-lane bridge where one must reverse back towards traffic, coming up to a dead end. It's pointless to go on, because these sorts of "edge" cases are encountered all the time.
They've done an incredible job of getting to the Beta stage, but IMHO it's actually nowhere close to autonomy. There's no point in calling a car autonomous if it will not cope with every edge case the way a human does. It's one thing to be safer than a human, quite another to be able to extract itself from those almost-stationary little "edge" cases.
 
>>The automated driving task is one of the biggest, most audacious challenges humans have ever tackled. Nobody understands this more than the AP engineers. Ironically, the armchair experts severely underestimate the complexity and yet continue to be bearish on it ever being possible. The engineers actually solving the problem see it as a much larger challenge than even the biggest naysayers do, and yet continue to work on it nonetheless. <<

I would certainly not call myself an armchair "expert", but I would personally assess the difficulty as possibly being greater than the software engineers themselves think.
Any autonomy getting close to "real" FSD has to be able to cope with all the so-called "edge" cases. The phrase suggests they are infrequent, but in actual practice they are encountered almost continually in real life.
The brain is capable of summing up a myriad of situations: a truck ahead beginning to reverse, a car pulling up with the driver looking over his shoulder in preparation for opening his door in front of you, opposing vehicles on a single-lane bridge where one must reverse back towards traffic, coming up to a dead end. It's pointless to go on, because these sorts of "edge" cases are encountered all the time.
They've done an incredible job of getting to the Beta stage, but IMHO it's actually nowhere close to autonomy. There's no point in calling a car autonomous if it will not cope with every edge case the way a human does. It's one thing to be safer than a human, quite another to be able to extract itself from those almost-stationary little "edge" cases.
I'm not aware of any evidence or proof that something that doesn't exist yet is safer. It's all theoretical. Consider it like medicine development: the scientists think it's safe, but until the real-world trials occur there is no way they would make the definitive claim.
I have no doubt that the active safety measures now deployed on my Tesla add to my safety. I also have no doubt that if I wasn't holding the wheel in Autopilot (lane keeping and active cruise control in other brands), my car would have totalled itself multiple times.
 
I don't have FSD on my car and have never tried it but I do have AP and have tried some of the features like lane keeping and TACC.

I find that lane keeping is pretty good for straight bits of road, but I feel I can do a much better job on curves. When approaching a curve I can see tens to hundreds of metres ahead and apply just the right amount of steering to ease around it. Autosteer seems to look only a short distance ahead and makes lots of adjustments, so it is not a comfortable experience.

The same goes for TACC. It works fine when the traffic is moving, but in stop-start traffic it seems to lag, so it has to brake heavily and waits too long before moving off again. As a driver, I can see multiple cars ahead and start to decelerate far sooner, so it is a smooth slow-down and a smooth take-off.

And in general driving, there are cues you get from other cars, like the way they slow, head turning, even indicators, that let you know they are about to change lanes, which I don't believe FSD could pick up. I'm not suggesting I'm a perfect driver, but after many years of experience I feel there are things that would be difficult to code into an AI.
 
atj777 (Triple seven?) Agree entirely. I have paid for FSD purely out of interest; I'm not allowed even to engage TACC when the Mrs is with me, which is 95% of the time, because of just what you write.
Hopefully the Beta version, if it ever gets released, will be more comfortable, but I rather suspect the programmers are more interested in keeping the car intact than in the occupants' comfort. As you say, corners at any speed are "interesting" in the Chinese sense, with little forward planning of the smoothest way in and out.
 
But isn't the software programmed by those same humans, and isn't it learning how to drive like humans?
The software is at least faster to react than humans, and will certainly end up much better than the human driver.
My concern is with the input mechanisms, in other words the cameras.
Even the reverse camera is blurry when it's raining. I just can't believe that the current approach of having naked cameras on the outside of the car will ever be robust enough for true self-driving.
It might be fine on a sunny day, but as soon as the weather gets bad I can't imagine it will be long before the car has to stop itself. If there really is no driver at the wheel, as in a robotaxi situation, that is a car stopped in the middle of the road.
 
The software is at least faster to react than humans, and will certainly end up much better than the human driver.
My concern is with the input mechanisms, in other words the cameras.
Even the reverse camera is blurry when it's raining. I just can't believe that the current approach of having naked cameras on the outside of the car will ever be robust enough for true self-driving.
It might be fine on a sunny day, but as soon as the weather gets bad I can't imagine it will be long before the car has to stop itself. If there really is no driver at the wheel, as in a robotaxi situation, that is a car stopped in the middle of the road.

This is the wrong way to think about it. As per my earlier posts, just because the backup camera (which isn't even the same sensor as the main AP cameras) is blurry does not mean the data is not there.

Think about it this way: does an object reflect photons in such a way that they are captured by the camera sensor? The answer in almost all cases is yes; the problem is building an artificial neural net that is able to make a signal out of the noise. With enough training data and a fast enough inference chip, it's only really a matter of time.
 
My take on this:

If Tesla is perceiving depth from the size of an object (as opposed to binocular vision), then it can only assess the object's relative distance if it actually recognises it and has a size comparison in its database. OK for cars, bikes, humans, traffic cones, bins, animals.

But not OK for unexpected road hazards, e.g. a surfboard, a wardrobe, a pile of junk.

Since the car is moving, Tesla Vision can perceive that it is closing on the object, because its size would grow faster than the surroundings, but it would have no idea how close it is, because it would not know the true size of any object not in its database.

That is where radar is indispensable.
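For what it's worth, the depth-from-size idea described above can be sketched with the standard pinhole camera relation. The numbers below are purely illustrative assumptions, not Tesla's actual camera parameters, but they show why a known real-world size is essential to the estimate:

```python
# Sketch of monocular depth-from-size using the pinhole camera model.
# All parameter values here are illustrative assumptions.

def estimate_distance(real_width_m, pixel_width, focal_length_px):
    """Distance to an object of known real-world width.

    Pinhole relation: distance = f * W / w, where f is the focal
    length in pixels, W the object's true width in metres, and
    w the width of the object's image in pixels. Without W (an
    object not in the size database), the distance is unknowable
    from a single frame.
    """
    return focal_length_px * real_width_m / pixel_width

# A car ~1.8 m wide that appears 90 px wide through a lens with a
# 1000 px focal length:
print(estimate_distance(1.8, 90, 1000))  # 20.0 (metres)
```

If the assumed true width is wrong (say, a toy car instead of a real one), the distance estimate is wrong by the same factor, which is the post's point about unrecognised objects.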
 
This is the wrong way to think about it. As per my earlier posts, just because the backup camera (which isn't even the same sensor as the main AP cameras) is blurry does not mean the data is not there.

Think about it this way: does an object reflect photons in such a way that they are captured by the camera sensor? The answer in almost all cases is yes; the problem is building an artificial neural net that is able to make a signal out of the noise. With enough training data and a fast enough inference chip, it's only really a matter of time.
Computing power is not the issue. The cameras as currently released have no protection and no way of self-cleaning. They can be completely obscured, with the photons never reaching the sensor.

One splash of mud, snow, bird droppings, or a bug strike and the camera is offline. The cameras are not fit for purpose, if that purpose is a fully autonomous, unattended vehicle like a robotaxi.

Assuming the robotaxi requires all 8 cameras to be working, there are 8 points of failure. If one camera becomes obscured, surely the vehicle would have to stop operating. It couldn't hand over to its passenger; it might not even have one. Or it might have a child, someone visually impaired, someone intoxicated, or someone too old to keep their licence. Would it stop in its current lane in the middle of the Harbour Bridge? Or keep driving with one eye closed (and now no radar) until it reached a street with parking? Would it be the owner's responsibility to find and rescue the vehicle and its passenger?
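The "8 points of failure" worry can be made concrete with a quick back-of-the-envelope calculation, assuming (purely for illustration) that each camera has some small, independent chance of being obscured in a given hour:

```python
# Probability that at least one of n independent cameras is obscured,
# given a per-camera probability p for the same period. The 1% figure
# below is an illustrative assumption, not a measured rate.

def p_any_obscured(p_per_camera, n_cameras=8):
    # Complement of "all n cameras stay clear": 1 - (1 - p)^n
    return 1 - (1 - p_per_camera) ** n_cameras

# With a 1% per-camera chance per hour, the fleet-level risk is
# nearly 8x worse for a vehicle that needs every camera:
print(round(p_any_obscured(0.01), 4))  # 0.0773
```

The point of the sketch is that requiring all 8 sensors multiplies a small individual risk into a much larger system-level one, which is why graceful degradation (or self-cleaning) matters for an unattended vehicle.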
 
... The cameras as currently released have no protection and no way of self-cleaning.
The three primary forward-facing cameras do: they sit beneath the wiper arc. I think those three cameras were all that was used for NoAP in earlier iterations, if I understand it correctly.

I found this video on FSD vs lidar very informative. It describes how the Bird's Eye View system works, amongst other things.

 