Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Not really. Give any of the current AI algorithms 16 years of 'learning' and they still won't be at the level of a below-average teenage driver. It's not a matter of time; it's a matter of how capable the systems are.
Maybe I should have said "keep in mind" instead of "ignore," but it may not be as "far faster" as it seems. I also didn't mean to imply that AI learns faster than a human, if that was your impression. But I wonder what the difference would be if the AI had the benefit of being on par across many parameters. I have already mentioned the complexity of the neural nets, but the quality of the data a human gets is also much better than what is fed to AIs. There are probably a few more parameters on which AI is still designed to be inferior to a human...
 
Maybe I should have said "keep in mind" instead of "ignore," but it may not be as "far faster" as it seems. […]
In addition to the complexity of the neural networks, a human driver uses a simultaneous combination of rules-based, conscious driving effort and learned reactions (i.e. neural networks or "muscle memory"), whereas Tesla FSD is layered: mostly neural networks down at the vision and object-recognition level, transitioning to procedural code at the higher level of path planning and execution. So humans get the best of both worlds and can resolve problems in one realm with input from the other.

For example, I can't see around the curve that cars are stopping at a light, but I consciously know that at morning rush, when I am approaching an intersection around a curve, there may be 10 or 12 cars stopped waiting at the light. So I will begin to slow down even before I come around the curve and actually see the stopped cars, whereas FSD would just plan to maintain speed up to the stop line until it saw the cars stopped ahead.

Similarly, I consciously know there's a big difference between driving through a neighborhood with no lines (which is likely to have cars parked on the side of the road) and driving on a curvy street through the countryside with a double yellow line in the middle. I (again, consciously) assume there is nothing but trouble waiting for me on the other side of a double yellow line, even when traffic and/or a curvy road prevents me from seeing it. But Tesla FSD seems to operate as if it can cross the double yellow line on a curvy road in the same fashion it would move into the other lane of an unmarked neighborhood street to pass a "stopped" car (often without being able to ascertain why it is stopped), as long as it doesn't see any danger coming.

Another example: I understand that getting into the right lane of a multi-lane interstate to exit is a completely different proposition in rush-hour traffic, and I can take into account the distances between consecutive exits (with entering cars merging), whereas FSD seems to go by distance to the exit only, regardless of speed, traffic, merging cars, etc.

It seems that no amount of additional data can "train out" these deficiencies: it needs contextual intuition to supplement the video processing and object recognition to reach true situational awareness on the level of humans. This is why I think FSD is a long way from being safer/better than humans, and why it will take many generational iterations of design and development before it approaches a human level of driving. One way it could make up for these deficiencies is with more and different sensors (e.g. multiple 3D radars and high-definition maps with precision GPS), so the car has much more information to go on than a human has. But Tesla seems to have abandoned that path and instead relies on vision alone, which the human already possesses with much better acuity and processing.
 
For example, I can't see around the curve that cars are stopping at a light […]
What if autonomous systems used additional data points to assist them? To your example, what if the NNs queried Google Live Traffic during the drive, noticed that traffic is backing up around the curve, and began slowing down? The downside is the quality of Google's data: if the road shows red but there isn't really any traffic there, your car will start slowing down for no reason.
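Just to sketch the idea in code: one way to use a live-traffic hint without letting bad data cause phantom braking is to cap how much the hint can reduce the target speed. This is purely illustrative; every function name and threshold below is invented, and nothing here reflects how any real ADAS or the Google traffic API actually works.

```python
# Hypothetical sketch: blend a congestion hint into the planner's
# target speed, but cap its influence so a false "red" segment only
# eases off the accelerator instead of braking hard for phantom traffic.
from typing import Optional

def adjusted_target_speed(cruise_mph: float,
                          congestion: Optional[float],
                          confidence: float) -> float:
    """congestion: 0.0 (clear) .. 1.0 (stopped), from a traffic feed.
    confidence: 0.0 .. 1.0 trust in that feed for this road segment."""
    if congestion is None:                  # no data: vision-only behavior
        return cruise_mph
    max_reduction = 0.3 * confidence        # at most a 30% slowdown
    return cruise_mph * (1.0 - max_reduction * congestion)

# 80 mph cruise, feed reports heavy traffic but only moderate trust:
print(adjusted_target_speed(80.0, 0.9, 0.5))   # eases to ~69 mph
```

The cap is the point: even totally wrong feed data can only shave speed gradually, never trigger a hard stop.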
 
In addition to the complexity of the neural networks, a human driver uses a simultaneous combination of rules-based, conscious driving effort and learned reactions […]

For example, I can't see around the curve that cars are stopping at a light […]

It seems that no amount of additional data can "train out" these deficiencies […]

Many of these problems could be solved by greater use of fleet data to understand the dynamics of specific locations. Experienced humans certainly drive better and more safely on routes they have driven before, and that's the majority of driving. Tesla's current ADAS doesn't seem to do this; it approaches each intersection as if encountering it for the first time.

They need something like Mobileye's low- to medium-resolution fleet-based mapping.

I do think that additional data could "train out" these deficiencies if they start to use significant supervised training data (i.e. humans driving) for their driving policy as well as their perception. They have done a good job on visual perception (though I think the cameras should still be improved in resolution and made stereoscopic), but the driving policy isn't always great. I think part of the reason is that the driving policy is, or has been, more of a rules-based program and less of a neural-network system that can be trained on big data. The difficulty is that the nets are black boxes, and it's not clear whether they might do something really crazy at some point, so there has to be some rules-based backup.
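The "learned policy plus rules-based backup" split can be sketched in a few lines: a trained net proposes a maneuver, and a thin rule layer can veto anything that breaks a hard constraint. This is a toy stand-in to illustrate the architecture being described, not Tesla's actual design; all names and the example constraint are invented.

```python
# Toy sketch: a learned policy proposes, a rule layer disposes.

def learned_policy(observation: dict) -> dict:
    # Stand-in for a neural net trained on human driving data.
    # A black box: it might propose something unsafe.
    return {"action": "cross_double_yellow", "speed_mph": 45}

def rule_backup(proposal: dict, observation: dict) -> dict:
    # Hard constraints the black-box net is never allowed to break:
    # e.g. don't cross a double yellow without clear sight distance.
    if (proposal["action"] == "cross_double_yellow"
            and not observation.get("sight_distance_clear")):
        return {"action": "hold_lane", "speed_mph": proposal["speed_mph"]}
    return proposal

obs = {"sight_distance_clear": False}
print(rule_backup(learned_policy(obs), obs))   # vetoed: holds the lane
```

The appeal of this split is exactly what the post describes: the net can be trained on big data, while the rule layer stays small, auditable, and deterministic.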

There will always be semantic-understanding deficiencies versus humans, but on the upside, a car could conceivably use fleet data to learn in a way that humans can't: if you have never been to an intersection before, the fact that thousands of other people have driven it doesn't inform your own brain.
 
Elon Musk's suggestion that FSD Beta will grow to 1 million users by the end of this year probably means a lot of new FSD purchases/subscriptions, and probably dropping or significantly lowering Safety Score requirements for some regions. I believe cumulative Tesla vehicle production will be close to 4 million by the end of this year, including worldwide vehicles and older ones without the necessary hardware. Maybe a lot more FSD Beta countries will be added by the end of this year too, as I would think current FSD owners are mostly US buyers.

 
Sorry to be replying to an off topic conversation, but just my two cents on the “if you can’t pass at 80 mph…” discussion.

First off, here in Utah we have very long stretches of 80 mph limit interstate.

Keeping that in mind, let’s talk about a stereotype (which I hate to do, as I generally hate stereotypes… ;)) that, unfortunately, is quite accurate when applied to Utah drivers: the vast majority of the time, when you go to pass a Utah driver, they will accelerate to match your speed. Now if you’re persistent, they will eventually back down to their previous speed.

The passing process can be accelerated (for lack of a better term) by increasing your speed substantially. Typically, the Utah driver will accelerate only briefly, then return to their previous speed.

As an example, let’s say you are in the right lane traveling at 80 mph, which is also the speed limit. You come up behind a car going 65, so you move over to the left lane to pass. The car you’re passing will match your speed and remain door to door with you for several miles before slowing back down.

However, if you accelerate to 85 or a bit more, the driver being passed will give up earlier or possibly not even play the Utah passing game at all.

Yes, I know this type of behavior happens everywhere, but after driving in Utah for forty years now, I can tell you it is what you WILL experience here.

This behavior is not only aggravating, but also dangerous, especially if passing the slower car requires you to use the oncoming traffic lane.

Personally, I’m very excited to be getting any faster AP speeds at all.
 
If you're on a rural highway whose speed limit is 65 and you need to drive 85 to pass someone...maybe you don't actually need to pass them.
You obviously have a different driving experience than I do. Let me explain mine. Generally, local rural interstates have three lanes in each direction. The posted speed limit is 65. The far right lane travels at between 68 and 70, unless it is blocked by either Grampa Snerd in his Prius or a truck struggling with a steep grade. If you tried to drive in this lane you would constantly be weaving in and out of the right and center lanes. That's not even considering the merging that occurs at interchanges.

The far left lane travels at between 80 and 85, unless spooked by someone with a radar gun. The result of all this is that the middle lane is the "lane of least resistance"; it generally travels at between 72 and 75. So let's say I'm in the middle lane cruising along at 75. If someone else decides they want to travel in the middle lane at 70, it's very easy in a Tesla traveling 75 mph to pass that vehicle and hit 80 mph. Hah! Caught red-handed! Not by the long arm of the law, reaching for my wallet, but by the Red Steering Wheel of Satan. If I try to enter the left lane at 75 mph, someone going 85 is going to catch up with me pretty quickly.
 
At least let us have a transient excursion to between 81 and 85 mph in order to pass.

It actually appears to lock you out at 82 or 83 mph. You can certainly do 81 with override and not get locked out, at least not immediately.

But yeah, an increase in that limit would certainly be nice. 85 would be fine for my purposes, largely to avoid inadvertent "I'm trying not to get in people's way" passing-lane disablement catastrophes. Hopefully it'll go to 90 mph.
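The "transient excursion" idea from a few posts up could be as simple as a bounded, time-limited overshoot allowance above the Autopilot cap while a pass is in progress. To be clear, the thresholds and names below are invented for illustration; Tesla's actual lockout behavior (reported around 82-83 mph) is not publicly documented logic.

```python
# Sketch of a bounded overspeed allowance for passing maneuvers.
AP_CAP_MPH = 80          # normal Autopilot speed cap (illustrative)
PASS_OVERSHOOT_MPH = 5   # extra speed allowed only while passing
PASS_TIME_LIMIT_S = 20   # excursion must be brief

def speed_allowed(speed_mph, passing, seconds_passing):
    """True if the requested speed is permitted right now."""
    if speed_mph <= AP_CAP_MPH:
        return True
    if passing and seconds_passing <= PASS_TIME_LIMIT_S:
        return speed_mph <= AP_CAP_MPH + PASS_OVERSHOOT_MPH
    return False

print(speed_allowed(84, True, 12))   # True: brief pass at 84 mph
print(speed_allowed(84, False, 0))   # False: no pass in progress
```

Bounding both the magnitude and the duration is what would keep a passing allowance from becoming a general speed-cap bypass.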
 
Elon Musk suggesting FSD Beta increasing to 1 million users by the end of this year […]

I think they will miss big on this number.
 
Not really. Give any of the current AI algorithms 16 years of 'learning' and they still won't be at the level of a below-average teenage driver. It's not a matter of time; it's a matter of how capable the systems are.
Even an insect already has the brain power to avoid obstacles.

So when I say we have the benefit of a billion years of evolution, I'm not kidding.
 
What if autonomous systems used additional data points to assist it? To your example, what if the NN's queried Google Live Traffic during the drive, and noticed that traffic is backing up around the curve, so it begins slowing down?

Much of these problems could be solved by greater use of fleet data to understand dynamics of specific locations. Experienced humans certainly drive better and safer on routes that they have driven before, and that's the majority of the drive.
These types of inputs could only help, it seems to me, but Elon has made it clear he doesn't want this kind of data in FSD. They already said high-definition maps were not needed and not wanted because they would limit the operating domain, and the release notes for 10.11 mentioned that they were relying even less on conventional map data and more on the vision system. I think Tesla is stuck on the concept of "if humans can do it with two eyes, then Teslas can do it all the better with eight," but that logic (as I explained above) just doesn't ring true to me.
 
You obviously have a different driving experience than I do. […]
Well that's easy - pull up on the right stick to disable AP, then pass the person at whatever speed you require - then double-down on the right stick to re-engage. Voila!
 
What if autonomous systems used additional data points to assist them? […]
Exactly .. and the car can do that for ANY curve, not just the ones you happen to know about because you use that route every day. While it's true that humans CAN out-think anything we can build today (or in the foreseeable future), I think many people overestimate human attentiveness in everyday driving. Almost everyone "spaces out" to an extent when driving, especially on familiar routes. That's when the car will do better than a human. Of course, a human will be needed to handle extraordinary (in the literal sense of the word) situations, such as a police stop or an accident blocking the road.
 
These types of inputs could only help, it seems to me, but Elon has made it clear he doesn't want this kind of data in FSD. […]

This is where Elon's out of his depth, making large-scale pronouncements which seem clever but aren't.

There is the 'high-resolution map data' used by Waymo (generated from ground-truth lidar) for localization and perception, and then there is the lower-resolution crowdsourced map data used by Mobileye, partly for perception but also for driving policy and semantic understanding. I agree that the first is undesirable, and that an ADAS should be able to do something reasonable when it is missing map/drive data. But it should do something better, and more confidently, when there is map data sourced from human driving performance.

The human eye and visual system also have much higher resolution than the current cameras, which are a modest 1280x960 or so. Eyes are double-gimballed, in the eyeball and the neck, and humans use stereoscopic vision (which, had Tesla adopted it years ago, would have bypassed all the issues when radar was deleted). Humans also have a much deeper visual cortex and semantic understanding, and aren't even permitted to drive until they have 16 years of learning locomotion and vision and watching other drivers.

Furthermore, humans have memories, whether conscious or not, which greatly enhance their performance. Someone driving in their own neighborhood on common routes will do better than someone driving in a foreign country who has never seen the route before. The first is what people are used to and what ADAS systems will be compared to.
 
This is where Elon's out of his depth, making large-scale pronouncements which seem clever but aren't. […]
Mobileye uses their "AV Maps" for localization too (they claim 10 cm accuracy).
 
This is where Elon's out of his depth, making large-scale pronouncements which seem clever but aren't. […]

They did show at AI Day that crowdsourced fleet mapping is something they aspire to do. The part I'm thinking of is where they overlaid 4 or 5 cars' imagery of the same intersection from different directions to essentially create a very accurate average. If you have enough cars on the road in any locale, you can have essentially real-time access to any changes (construction, accidents, snowbanks, etc.).

Feels like they won't enable this until they have a wider release of FSD Beta. But once it's enabled, the car doesn't have to treat everything as if it's encountering it for the first time. The planner can choose the proper lane before any cameras on the car see the lanes. Speed-limit changes can be much smoother. And the route can be visualized beyond the current fade-out point. Cresting a hill is no longer a reason to slam on the brakes due to lack of visibility.
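The overlay-and-average idea can be made concrete with a toy example: several cars' noisy observations of the same lane-line point get fused into one estimate, with stale observations down-weighted so fresh data (construction, snowbanks) dominates. This is purely illustrative of the concept, not Tesla's pipeline; the decay constant and data format are invented.

```python
# Toy crowdsourced-map fusion: recency-weighted average of several
# cars' noisy (x, y) estimates of the same lane point.
import math

def fuse_observations(obs):
    """obs: list of (x_m, y_m, age_hours) lane-point estimates."""
    weights = [math.exp(-age / 24.0) for _, _, age in obs]  # ~daily decay
    total = sum(weights)
    x = sum(w * o[0] for w, o in zip(weights, obs)) / total
    y = sum(w * o[1] for w, o in zip(weights, obs)) / total
    return x, y

# Three recent cars agree; one week-old outlier fades to irrelevance:
points = [(10.1, 4.9, 1), (9.9, 5.1, 2), (10.0, 5.0, 1), (14.0, 8.0, 200)]
print(fuse_observations(points))   # close to (10.0, 5.0)
```

With enough cars passing a location, this kind of running average is effectively a self-healing map: a changed intersection just out-votes the old data within hours.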
 
This is where Elon's out of his depth, making large-scale pronouncements which seem clever but aren't.
You aren't being very clever. We have been discussing this topic to death over the last 5 years.

Do the cars need to be able to drive when the HD maps are not accurate or out of date? Yes. And if they can drive without the benefit of accurate HD maps, why do they need HD maps to start with?

Everyone in the industry understands (it has been explicitly stated by GM and Mobileye, among others) that HD maps are not scalable.
 
What if autonomous systems used additional data points to assist them? […]

These types of inputs could only help, it seems to me […]

Exactly .. and the car can do that for ANY curve, not just the ones you happen to know about […]
The problem with all of this is that it depends on a ton of interconnected technology and falls apart as soon as any link in the chain breaks. (Just as an example, I was driving to work today and had no connectivity in suburban Minneapolis.) Not to mention the issues you face in less-traveled areas, with new obstacles, etc.
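That's the real design requirement: any live-data hint has to degrade gracefully, so a dead LTE link means vision-only driving, not a failure. A minimal sketch of that fallback, with all names hypothetical and the speed formula reused from the earlier toy example:

```python
# Sketch: live data is strictly optional; losing it reverts to
# vision-only behavior instead of breaking the planner.

def get_traffic_hint(fetch, timeout_s=0.5):
    """fetch: callable returning congestion 0..1; may raise on no link."""
    try:
        return fetch()
    except Exception:
        return None          # no connectivity: planner ignores the hint

def plan_speed(cruise_mph, hint):
    if hint is None:
        return cruise_mph    # vision-only behavior, as today
    return cruise_mph * (1.0 - 0.3 * hint)   # bounded slowdown

print(plan_speed(70, get_traffic_hint(lambda: 0.5)))   # ~59.5: hint applied
print(plan_speed(70, get_traffic_hint(lambda: 1 / 0)))  # 70: link down, ignored
```

The point is that connectivity-dependent features only ever add information; the baseline behavior must never assume the link is up.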