Ok. I'll start submitting them again and see how it goes.

I just sent them some feedback. Got the auto-response. I've gotten plenty of these in 2022.
Maybe I should have said "keep in mind" instead of "ignore", but it may not be as "far faster" as it may seem. I also didn't mean to imply that AI learns faster than a human, if that is your impression. But I wonder what the difference would be if the AI had the benefit of being on par in regard to many parameters. I have already mentioned the complexity of the neural nets, but the quality of the data a human gets is also much better than that fed to AIs. Probably there are a few more parameters on which AI is still designed to be inferior to humans...
In addition to the complexity of the neural networks, a human driver uses a simultaneous combination of rules-based, conscious driving effort and learned reactions (i.e. neural networks or "muscle memory"), whereas Tesla FSD is layered: mostly neural networks down at the vision and object recognition level, transitioning to procedural code at the higher-level path planning and execution. So humans get the best of both worlds and can resolve problems in one realm with input from the other.
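To make the layered idea concrete, here's a minimal sketch (every name and threshold is made up for illustration; this is not Tesla's actual stack): a stand-in perception layer hands detections to a rules-based speed planner, which can only react to what perception reports.

```python
# Hypothetical sketch of a "layered" driving stack: a learned
# perception stage feeding a rules-based planning stage.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str        # e.g. "car", "stop_line"
    distance_m: float

def perceive(frame) -> list[Detection]:
    # Stand-in for the neural-network vision layer; a real system
    # would run learned models on camera frames here. In this sketch,
    # "frames" are already pre-labeled detections.
    return frame

def plan_speed(detections: list[Detection], cruise_mph: float) -> float:
    # Procedural, rules-based layer: reacts only to what vision reports.
    for d in detections:
        if d.kind == "car" and d.distance_m < 50:
            return min(cruise_mph, 25.0)   # slow for a nearby vehicle
        if d.kind == "stop_line" and d.distance_m < 80:
            return min(cruise_mph, 15.0)   # prepare to stop
    return cruise_mph                      # nothing seen: keep speed

# Around a blind curve, nothing is detected yet, so the planner keeps
# cruising speed -- exactly the behavior contrasted below with a human
# who slows on prior knowledge alone.
frame = []  # empty: stopped cars are hidden behind the curve
print(plan_speed(perceive(frame), 65.0))   # -> 65.0
```

The point of the sketch is that the rules layer has no channel for prior knowledge that perception hasn't confirmed, which is the gap the posts below describe.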
For example, I can't see around the curve that cars are stopping at a light, but I consciously know that at morning rush, when I am approaching an intersection around the curve, there may be 10 or 12 cars stopped waiting on the light. So I will begin to slow down even before I come around the curve and actually see the stopped cars, where FSD would just plan to maintain speed up to the stop line until it saw the cars stopped ahead.

Similarly, I consciously know there's a big difference between driving through a neighborhood with no lines (which is likely to have parked cars on the side of the road) and driving on a curvy street through the countryside with a double yellow line in the middle. I (again, consciously) assume there is nothing but trouble waiting for me on the other side of a double yellow line, even if traffic and/or a curvy road prevents me from seeing it. But Tesla FSD seems to operate as if it can go across the double yellow line on a curvy road in the same fashion it would go into the other lane in a neighborhood without markings to pass a "stopped" car (often without being able to ascertain why it is stopped), as long as it doesn't see any danger coming.

Another example: I understand that getting into the right lane of a multi-lane interstate to exit is a completely different proposition in rush hour traffic, and I can take into account the distances between consecutive exits (with entering car merges), where FSD seems to count on distance to the exit only, regardless of speed, traffic, merging cars, etc.
It seems that no amount of additional data can "train out" these deficiencies: it needs contextual intuition to supplement the video processing and object recognition to have true situational awareness on the level of humans. This is why I think FSD is a long way from being safer/better than humans, and why it will require many generational iterations in design and development before it approaches the human level of driving. One way it could make up for these deficiencies is adding more and different sensors (e.g. multiple 3D radars and high-definition maps with precision GPS) so the car has much more information to go on than a human has. But Tesla seems to have abandoned that path and instead relied on vision only, which the human already possesses with much better acuity and processing.
Yeah, maybe they'll also start investigating why a stock Tesla can reach a speed of 155 mph.

I'm sure NHTSA will unfortunately have something to say on this... just like those stop signs...
You obviously have a different driving experience than I do. Let me explain mine. Generally, local rural interstates have three lanes in each direction. The posted speed limit is 65. The far right lane travels at between 68 and 70, unless it is blocked by either Grampa Snerd in his Prius or a truck struggling with a steep grade. If you attempt to drive in this lane, you will constantly be weaving in and out of the right and center lanes. That's not even considering the merging that occurs at interchanges.

If you're on a rural highway whose speed limit is 65 and you need to drive 85 to pass someone... maybe you don't actually need to pass them.
At least let us have a transient excursion to between 81 and 85 mph in order to pass.
I think they will miss big on this number.

Elon Musk suggesting FSD Beta will increase to 1 million users by the end of this year probably means there will be a lot of new FSD purchases/subscriptions, and probably dropping or significantly lowering Safety Score requirements for some regions. I believe cumulative Tesla vehicle production will be close to 4 million by the end of this year, including worldwide vehicles and older ones without the necessary hardware. Maybe there will be a lot more FSD Beta countries added by the end of this year too, as I would think current FSD is mostly US buyers.
Even an insect already has the brain power to avoid obstacles.

Not really. Give any of the current AI algorithms 16 years of 'learning' and they still won't be at the point of a below-average teenage driver. It's not a matter of time, it's a matter of how capable the systems are.
What if autonomous systems used additional data points to assist them? To your example, what if the NNs queried Google Live Traffic during the drive, noticed that traffic is backing up around the curve, and so began slowing down? The downside to that is the quality of Google's data. If the road shows red, but there isn't really any traffic there, your car will start slowing down for no reason.
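One way to hedge against the bad-data downside (purely a sketch; the feed fields and the 30% cap are hypothetical assumptions, not a real Google API) is to treat the traffic report as a soft prior whose influence scales with confidence, so a wrong report trims speed slightly instead of causing phantom braking:

```python
# Illustrative sketch of consuming a live traffic feed as a *soft*
# prior. "feed_congestion" and "feed_confidence" are hypothetical
# inputs, not fields of any real traffic API.

def target_speed(cruise_mph: float,
                 feed_congestion: float,   # 0.0 = clear, 1.0 = jammed
                 feed_confidence: float,   # 0.0..1.0 trust in the feed
                 vision_clear: bool) -> float:
    # If vision itself sees congestion ahead, defer to vision entirely.
    if not vision_clear:
        return 0.0  # hand control to the vision-based planner
    # Otherwise blend: even a fully confident "jammed" report trims
    # speed by at most 30%, so bad data causes mild slowing, not a
    # hard brake for traffic that isn't there.
    trim = 0.30 * feed_congestion * feed_confidence
    return round(cruise_mph * (1.0 - trim), 1)

print(target_speed(70.0, 1.0, 1.0, True))   # confident jam report -> 49.0
print(target_speed(70.0, 1.0, 0.2, True))   # low-trust report -> 65.8
```

The design choice here is that external data only ever attenuates speed, never commands a stop; stopping stays the vision layer's job.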
These types of inputs could only help, it seems to me, but Elon has made it clear he doesn't want this kind of data in FSD. They already said high-definition maps were not needed and not wanted because they would limit the operating domain, and the release notes for 10.11 mentioned that they were relying even less on conventional map data and more on the vision system. I think Tesla is stuck on this concept of "if humans can do it with two eyes, then Teslas can do it with eight eyes all the better," but that logic (as I explained above) just doesn't ring true to me.

Much of these problems could be solved by greater use of fleet data to understand the dynamics of specific locations. Experienced humans certainly drive better and safer on routes that they have driven before, and that's the majority of driving.
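The fleet-data idea could be sketched roughly like this (segment IDs, speeds, and the simple averaging scheme are all illustrative assumptions): record the speeds human drivers actually used on each road segment, and fall back to a default where no data exists.

```python
# Toy sketch of learning per-location driving priors from fleet data.
# Segment names and numbers are made up for illustration.

from collections import defaultdict

class SegmentPriors:
    def __init__(self):
        self._sum = defaultdict(float)
        self._count = defaultdict(int)

    def record(self, segment_id: str, speed_mph: float) -> None:
        # Called once per fleet traversal of a segment.
        self._sum[segment_id] += speed_mph
        self._count[segment_id] += 1

    def prior(self, segment_id: str, default_mph: float) -> float:
        # Mean observed human speed, or the default where no data
        # exists -- the "do something reasonable without map/drive
        # data" case mentioned further down the thread.
        n = self._count[segment_id]
        return self._sum[segment_id] / n if n else default_mph

priors = SegmentPriors()
# Fleet history: humans slow well below the limit on this blind curve.
for v in (38.0, 35.0, 41.0):
    priors.record("curve_before_intersection", v)

print(priors.prior("curve_before_intersection", 55.0))  # -> 38.0
print(priors.prior("unmapped_segment", 55.0))           # -> 55.0
```

A planner consulting such a prior would slow for the rush-hour curve example above before its cameras ever see the stopped cars, which is the behavior the thread says vision-only FSD lacks.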
Well that's easy: pull up on the right stick to disable AP, then pass the person at whatever speed you require, then double-down on the right stick to re-engage. Voila!
The far left lane travels at between 80 and 85, unless spooked by someone with a radar gun. The result of all this is that the middle lane is the "lane of least resistance". It generally travels at between 72 and 75. So let's say I'm in the middle lane cruising along at 75. If someone else decides that they want to travel in the middle lane at 70, it's very easy in a Tesla traveling 75 mph to pass that vehicle and hit 80 mph. Hah! Caught red-handed! Not by the long arm of the law, reaching for my wallet, but rather the Red Steering Wheel of Satan. If I try to enter the left lane at 75 mph, someone going 85 is going to catch up with me pretty quickly.
Exactly... and the car can do that for ANY curve, not just the ones you happen to know about because you use that route every day. While it's true that humans CAN out-think anything we can build today (or in the foreseeable future), I think many people over-estimate human attentiveness in everyday driving. Almost everyone "spaces out" to an extent when driving, especially on familiar routes. That's when the car will do better than a human. Of course, a human will be needed to handle extraordinary (in the literal sense of the word) situations, such as a police stop or an accident blocking the road.
Mobileye uses their "AV Maps" for localization too (they claim 10 cm accuracy).
This is where Elon's out of his depth, making large scale pronouncements which seem clever but aren't.
There is 'high resolution map data' as used by Waymo (generated from ground-truth lidar) for localization and perception, and then there is the lower-resolution crowdsourced map data used by Mobileye, partly for perception but also for driving policy and semantic understanding. I agree that the first is undesirable, and that the ADAS should be able to do something reasonable if it is missing map/drive data. But it should do something better, and more confidently, when there is map data sourced from human driving performance.
The human eye and visual system also have much better resolution than the current cameras, which are a poor 1280x960 or something. Eyes are double-gimballed in the eyeball and neck, and humans use stereoscopic vision (which, if Tesla had adopted it years ago, would have bypassed all the issues when radar was deleted). And they have a much deeper visual cortex and semantic understanding; plus, humans aren't even permitted to drive until they have 16 years of learning locomotion and vision and watching other drivers.
Furthermore, humans have memories, whether conscious or not, which greatly enhance their performance. Someone driving in their own neighborhood on common routes will do better than someone driving in a foreign country who has never seen the route before. The first is what people are used to and what ADAS systems will be compared to.
You aren't being very clever. We have been discussing this topic to death over the last 5 years.
The problem with all of this is that it's dependent on a ton of interconnected technology and falls apart as soon as any of the links fail. (Just as an example, I was driving to work today and had no connectivity in suburban Minneapolis.) Not to mention the issues faced when you get to less-traveled areas, new obstacles, etc.