Wol747
Active Member
Why should I be doing so? I was told that I have MCU2 since it’s an Atom processor - that’s all I know.
>>Two eyes that can blink, on a head that can move, inside a car that can wipe snow and dirt from the windscreen, connected to a brain that can operate an arm to pull down the sun visor.<<
I mean, it's fairly simple logic...
Humans seem to drive OK with 2 eyes and a few mirrors.
Your Tesla has 8 cameras, far better positioned than the human eye.
The human eyes can also see any object, situation, or sign and cause the brain to react to it. Including kangaroos. Those eyes can do that in any country, whether driving on the left or right, in a tunnel or a back lane.
Two eyes that can blink, on a head that can move, inside a car that can wipe snow and dirt from the windscreen, connected to a brain that can operate an arm to pull down the sun visor.
I worry that cameras will fail in direct sunlight or if obscured. How many hours into a robotaxi shift before a bird poos on one of the 8 cameras, or a splash of mud obscures the view?
Optimistic in timelines, yes, sure, that's absolutely true. People tend to overestimate in the short term and underestimate in the long term. To that extent, the absolute capability limit of the current HW3 camera suite is not even close to being realised. The performance scales over time given better and better training data. This requires exponentially more compute power - for example, getting 2% better performance may take 10x more compute power. That's why Tesla is again thinking far into the future with the Dojo supercomputer, so they don't run into a compute shortage.
>>It's quite old footage, so it would seem more probable that radar data caused the alarm in this particular incident. If the beta testers offer a window into the progress thus far, I suspect people are overly optimistic about the capabilities of the cameras & NN. The image resolution is severely degraded at night and in poor weather.<<
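Taking the "10x compute for 2% better performance" figure above at face value, it describes power-law scaling with a very small exponent. A toy sketch (every constant here is invented purely for illustration; none of this is from Tesla):

```python
# Toy illustration of diminishing returns: assume error falls as a
# power law in training compute, error = k * compute**(-alpha).
# alpha is chosen so that 10x compute gives roughly a 2% error reduction.
def error_rate(compute, k=1.0, alpha=0.0086):
    return k * compute ** (-alpha)

base = error_rate(1.0)
better = error_rate(10.0)                  # 10x more compute
improvement = (base - better) / base
print(f"10x compute -> {improvement:.1%} lower error")  # ~2% with this alpha
```

The point of the sketch is only the shape of the curve: under such an exponent, each successive 2% gain costs another order of magnitude of compute.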
Happy to be proved wrong, but I'm just looking at the current trajectory of development, i.e. the past 2 years - what exists now, not false promises. I have grown weary of the ever-next release that "will blow your mind" and in reality delivers only a slight improvement in one area and a bunch of bugs in another. I do have an appreciation of the magnitude of the task for the developers, hence my skepticism and caution. I guess we shall have to wait and evaluate similar incidents with non-radar-equipped / radar-disabled cars to see if it turns out to be a better system for the same type of incident in the video clip.
Humans on average are actually very bad drivers. 50 million people die or are injured on the roads every year, and up to 99% of those injuries are due to human error. Think about that number. Not only the 50 million direct victims but their friends and families - that's easily hundreds of millions of people impacted by your perfect-sensing human eyes.
>>The human eyes can also see any object, situation, or sign and cause the brain to react to it. Including kangaroos. Those eyes can do that in any country, whether driving on the left or right, in a tunnel or a back lane.<<
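The "hundreds of millions" figure follows from simple arithmetic on the post's own numbers (the 50 million casualty count is the poster's; the human-error share and the circle-of-contacts multiplier below are assumptions for illustration):

```python
# Rough arithmetic behind the "hundreds of millions impacted" claim.
casualties = 50_000_000      # killed or injured per year (poster's figure)
human_error_share = 0.9      # post says "up to 99%"; 0.9 is a conservative take
close_contacts = 5           # assumed friends/family per victim

impacted = casualties * human_error_share * (1 + close_contacts)
print(f"{impacted:,.0f} people affected")  # ≈ 270 million
```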
Those same eyes can also see sales gimmicks, but the brain has a habit of over-ruling them at times.
Seems posting without attempted belittlement is beyond you. You may choose to research what such behaviour says about you.
>>Optimistic in timelines, yes, sure, that's absolutely true. People tend to overestimate in the short term and underestimate in the long term. To that extent, the absolute capability limit of the current HW3 camera suite is not even close to being realised. The performance scales over time given better and better training data. This requires exponentially more compute power - for example, getting 2% better performance may take 10x more compute power. That's why Tesla is again thinking far into the future with the Dojo supercomputer, so they don't run into a compute shortage.<<
The automated driving task is one of the biggest, most audacious challenges humans have ever tackled. Nobody understands this better than the AP engineers. Ironically, the armchair experts severely underestimate the complexity and yet continue to be bearish on it ever being possible. The engineers actually solving this problem see it as a much larger challenge than even the biggest naysayers do, and yet continue to work on it nonetheless.
It's not about whether FSD lives up to your expectations. Believe it or not, not everything in this world revolves around boomers' finicky, arbitrary feelings. It's a question of data. Once the Tesla fleet is able to collect x million km driven in shadow mode with a high degree of performance (99.999%), it would be criminally negligent not to release it to that part of the world.
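One common back-of-envelope for the shadow-mode argument is the statisticians' "rule of three": after N failure-free trials, the 95% upper confidence bound on the failure rate is roughly 3/N. The sketch below applies it to the 99.999% figure from the post; the mileage numbers are illustrative, not anything Tesla has published:

```python
# "Rule of three": if shadow mode logs N km with zero critical failures,
# the 95% upper confidence bound on the per-km failure rate is about 3/N.
def failure_rate_upper_bound(km_without_failure):
    return 3.0 / km_without_failure

# Failure-free km needed to support 99.999% per-km performance
# (i.e. a failure rate no worse than 1e-5 per km):
target_rate = 1e-5
km_needed = 3.0 / target_rate
print(f"Need about {km_needed:,.0f} failure-free km")   # 300,000 km
print(f"Bound after 1M km: {failure_rate_upper_bound(1e6):.0e}")
```

Note this bounds only the per-km rate; arguing safety relative to human drivers would additionally need the human baseline rate for comparable roads.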
I’m not aware of any evidence or proof that something that doesn't exist yet is safer. It's all theoretical. Consider it like medicine development: the scientists think it's safe, but until the real-world trials occur there is no way they would make the definitive claim.
>>The automated driving task is one of the biggest, most audacious challenges humans have ever tackled. Nobody understands this better than the AP engineers. Ironically, the armchair experts severely underestimate the complexity and yet continue to be bearish on it ever being possible. The engineers actually solving this problem see it as a much larger challenge than even the biggest naysayers do, and yet continue to work on it nonetheless.<<
I would certainly not call myself an armchair "expert" but I would personally assess the difficulty as possibly being more than the software engineers themselves think.
Any autonomy getting close to "real" FSD has got to be able to cope with all the so-called "edge" cases - the phrase suggests they are infrequent, but in practice they are encountered almost continually.
The brain is capable of summing up a myriad of situations - a truck ahead beginning to reverse, a car pulling up and the driver looking over his shoulder in preparation for opening his door in front of you, opposing vehicles on a single-lane bridge where one must reverse back towards traffic, coming up to a dead end - pointless to go on, because these sorts of "edge" cases are encountered all the time.
They've done an incredible job of getting to the Beta stage but IMHO it's actually nowhere close to being autonomy. No point in calling a car autonomous if it will not cope with every edge case the way a human does. It's one thing to be safer than a human, quite another to be able to extract itself from those almost stationary little "edge" cases.
I almost always get errors flashing up on my screen about the left pillar camera when driving along the M4 in the morning, due to the sun shining directly on the camera. Is the fact that the car detects that the camera has "failed" a good thing or a bad thing? I don't know.
>>I worry that cameras will fail with direct sunlight or if obscured.<<
But isn't the software programmed by those same humans, and isn't it learning how to drive like humans? About those edge cases though...
You know there is a whole insurance industry based around those edge cases... because the way most humans react to lots of those edge cases is to do it badly and have an accident.
The software is at least faster to react than humans, and will certainly end up much better than the human driver.
>>But isn't the software programmed by those same humans, and isn't it learning how to drive like humans?<<
>>The software is at least faster to react than humans, and certainly will end up much better than the human driver.<<
My concern is with the input mechanisms, in other words the cameras.
Even the reverse camera is blurry when it’s raining; I just can’t believe that the current approach of naked cameras on the outside of the car will ever be robust enough for true self-driving.
It might be fine on a sunny day, but as soon as the weather gets bad I can’t imagine it will be long before the car has to stop itself. If there really is no driver at the wheel, like in a robotaxi situation, that is a car stopped in the middle of the road.
Computing power is not the issue. The cameras as they are currently released have no protection, no way of self-cleaning. They can be completely obscured, with the photons never reaching the receiver.
>>This is the wrong way to think about it. As per my earlier posts, just because the backup camera (which isn't even the same sensor as the main AP cameras) is blurry, that does not mean that the data is not there.<<
Think about it this way: does an object reflect photons in such a way that they are captured by the camera sensor? The answer in almost all cases is yes - the problem is building an artificial neural net that is able to make a signal out of the noise. With enough training data and a fast enough inference chip it's only really a matter of time.
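The "signal is still in the noise" point can be illustrated with a classic trick that has nothing to do with Tesla's actual pipeline: a weak signal drowned in sensor noise is recoverable, for instance by averaging repeated noisy frames, which shrinks the noise standard deviation by the square root of the frame count. A minimal sketch with synthetic data:

```python
import numpy as np

# Illustration only (not Tesla's pipeline): recover a weak signal from
# heavy noise by averaging frames; noise std shrinks by sqrt(n_frames).
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 100))          # "true" scene
frames = signal + rng.normal(0, 2.0, size=(400, 100))    # 400 noisy captures

single_err = np.abs(frames[0] - signal).mean()           # one blurry frame
averaged_err = np.abs(frames.mean(axis=0) - signal).mean()
print(f"error, single frame: {single_err:.2f}")
print(f"error, 400-frame average: {averaged_err:.2f}")   # order of magnitude smaller
```

Modern learned denoisers exploit far richer structure than simple averaging, but the principle is the same: as long as the photons hit the sensor, the information is not gone.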
The three primary forward-facing cameras do have self-cleaning - they are beneath the wiper arc. I think these three cameras were all that was used for NoAP in earlier iterations, if I understand it correctly.
>>...The cameras as they are currently released have no protection, no way of self cleaning.<<