Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Autonomous Car Progress

Almost none of these "complex" situations are related to safety though.

At the end of the day, if the AV drives much more safely than a human, and has all the right conveniences, then it doesn't matter if it can't see that the person at the crosswalk is waving it through. Who cares if 5 seconds are wasted in a rare situation where an AV doesn't understand inefficient human behaviors.

Here's a prime example:


Gotta love that beep. It gives context to the video.

It's minor as a one-off, but when you are following an FSD vehicle making repeated mistakes, or multiple FSD vehicles lined up making the same mistake, things have a chance of being elevated to the evening news.
 
Plus I'm not sure how to solve the problem of AVs being submissive to more assertive drivers.

One idea would be to just keep training the prediction/planning module, hoping the AV can get smarter at reading when it can be assertive. Perhaps use imitation learning to try to give the AV more human-like assertiveness. Another idea would be to simply dial up the assertiveness of the AV a bit, not too much that the AV is reckless but just enough that it can get in more often. The question then becomes whether the manufacturer would be ok with the increased risk. In some cases, if the AV asserts itself first, the human driver will probably yield, no problem. But if the human does not yield, there could be a collision. It might boil down to who is deemed at-fault for the accident. The AV manufacturer might be "ok" with more accidents if the AV is found not at-fault in an accident. Ultimately, there is always some risk in driving. So some risk is unavoidable.
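The "dial up the assertiveness" idea can be made concrete with a toy sketch (not any real AV stack — all names and numbers here are invented for illustration): a single knob that shrinks the minimum gap a merge planner will accept, trading more successful merges against more reliance on the other driver yielding.

```python
# Toy sketch (hypothetical names, not a real planner): an "assertiveness"
# knob that lowers the minimum gap a merge planner will accept.

def min_acceptable_gap_m(base_gap_m: float, assertiveness: float) -> float:
    """assertiveness in [0, 1]; 0 = maximally timid, 1 = maximally bold.
    Shrinks the required gap, but never below half the base value."""
    assert 0.0 <= assertiveness <= 1.0
    return base_gap_m * (1.0 - 0.5 * assertiveness)

def should_merge(gap_m: float, base_gap_m: float, assertiveness: float) -> bool:
    """Accept the merge only if the available gap meets the (scaled) minimum."""
    return gap_m >= min_acceptable_gap_m(base_gap_m, assertiveness)

# A 25 m gap is rejected by a timid planner but accepted by a bolder one.
print(should_merge(25.0, base_gap_m=30.0, assertiveness=0.0))  # False
print(should_merge(25.0, base_gap_m=30.0, assertiveness=0.5))  # True
```

The manufacturer's risk question from the post is then exactly the question of how far this knob can be turned before the assumed-yield cases stop yielding.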
 
One idea would be to just keep training the prediction/planning module, hoping the AV can get smarter at reading when it can be assertive. Perhaps use imitation learning to try to give the AV more human-like assertiveness. Another idea would be to simply dial up the assertiveness of the AV a bit, not too much that the AV is reckless but just enough that it can get in more often. The question then becomes whether the manufacturer would be ok with the increased risk. In some cases, if the AV asserts itself first, the human driver will probably yield, no problem. But if the human does not yield, there could be a collision. It might boil down to who is deemed at-fault for the accident. The AV manufacturer might be "ok" with more accidents if the AV is found not at-fault in an accident. Ultimately, there is always some risk in driving. So some risk is unavoidable.
Agreed that some risk is unavoidable, but I think one problem that Waymo, Cruise, and Tesla are all grappling with is that timidity of the AV may actually increase risk in many situations. Being too slow at multi-way stops, or being too timid to merge when given just enough room, are examples where hesitancy can be worse than some zero-order concept of "safety".

If you remember the Wayve presentation at the CVPR 2021, they demonstrated training the car to do all sorts of maneuvering and right-of-way taking even in many not strictly legal circumstances. The result was one of the more impressive demo reels making an AV look like a human driver. I found the link for those who haven't seen it:
 
One idea would be to just keep training the prediction/planning module, hoping the AV can get smarter at reading when it can be assertive. Perhaps use imitation learning to try to give the AV more human-like assertiveness. Another idea would be to simply dial up the assertiveness of the AV a bit, not too much that the AV is reckless but just enough that it can get in more often. The question then becomes whether the manufacturer would be ok with the increased risk. In some cases, if the AV asserts itself first, the human driver will probably yield, no problem. But if the human does not yield, there could be a collision. It might boil down to who is deemed at-fault for the accident. The AV manufacturer might be "ok" with more accidents if the AV is found not at-fault in an accident. Ultimately, there is always some risk in driving. So some risk is unavoidable.
^^ This

Aside from distracted driving (vast majority due to cell phone use), another source of collisions is Type-A vs Type-A. A Type-A personality starts merging their car and has the attitude of "get out of my way, I'm coming over and if you don't want to get in an accident, you'll move". This works most of the time, as human nature is to avoid an accident, so the game of chicken is resolved with the "weaker willed" person backing off and letting Type-A win. But when another Type-A is involved, "no, you move - I ain't moving!" Then they end up in an accident (and sometimes some hilarious, caught-on-video fights break out).

Personally, I think AVs will always have to adopt Type-B personalities, and allow Type-As to win, or things could get ugly.
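The merge standoff described above is the classic game of "chicken," and a made-up payoff matrix (all utilities invented for illustration) shows why a Type-B policy is the rational default for an AV facing a committed Type-A driver:

```python
# Toy payoff matrix for the "chicken" standoff (invented utilities):
# each driver either yields or asserts; mutual assertion (a crash)
# is far worse for both than backing off.

PAYOFFS = {  # (row_move, col_move) -> (row_payoff, col_payoff)
    ("yield",  "yield"):  (0,    0),
    ("yield",  "assert"): (-1,   1),      # the assertive driver "wins"
    ("assert", "yield"):  (1,   -1),
    ("assert", "assert"): (-100, -100),   # collision
}

def best_response(opponent_move: str) -> str:
    """Pick the move with the higher payoff against a fixed opponent."""
    return max(("yield", "assert"),
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Against a committed asserter, yielding is the rational reply -- which
# is the Type-B behavior suggested above. Against a yielder, asserting wins.
print(best_response("assert"))  # yield
print(best_response("yield"))   # assert
```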
 
I think you guys are overcomplicating this.

The stats will win, not one-off situations.

For AVs, all that matters is comfort (jerk measurements), safety (crash stats), and reliability (summons within 5 minutes, gets to destination within 10% of human driver times). When all these stats are good, with safety much superior to humans, it doesn't matter if AVs aren't assertive enough during a merge or lane change.
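As a sketch of how a comfort stat like "jerk measurements" could be computed (sampling rate and speed traces here are invented for illustration), peak jerk is just the second difference of sampled speed:

```python
# Illustrative sketch: one way a "comfort" stat could be computed --
# peak jerk (rate of change of acceleration) from evenly sampled speeds.

def peak_jerk(speeds_mps: list[float], dt_s: float) -> float:
    """Peak absolute jerk (m/s^3) from speed samples spaced dt_s apart."""
    accel = [(v2 - v1) / dt_s for v1, v2 in zip(speeds_mps, speeds_mps[1:])]
    jerk = [(a2 - a1) / dt_s for a1, a2 in zip(accel, accel[1:])]
    return max((abs(j) for j in jerk), default=0.0)

# Smooth stop vs. an abrupt one, sampled at 4 Hz (dt = 0.25 s).
smooth = [10.0, 9.0, 8.0, 7.0, 6.0]    # constant decel -> zero jerk
abrupt = [10.0, 10.0, 10.0, 7.0, 4.0]  # sudden brake -> high jerk
print(peak_jerk(smooth, 0.25))  # 0.0
print(peak_jerk(abrupt, 0.25))  # 48.0
```

A fleet-level comfort stat would then aggregate this per-ride metric across rides, alongside the crash and reliability stats mentioned above.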
 
Agreed that some risk is unavoidable, but I think one problem that Waymo, Cruise, and Tesla are all grappling with is that timidity of the AV may actually increase risk in many situations. Being too slow at multi-way stops, or being too timid to merge when given just enough room, are examples where hesitancy can be worse than some zero-order concept of "safety".

To be fair, Waymo has improved the assertiveness. It is not as timid as the 4th Gen used to be. Not saying the assertiveness is perfect but it is better.

If you remember the Wayve presentation at the CVPR 2021, they demonstrated training the car to do all sorts of maneuvering and right-of-way taking even in many not strictly legal circumstances. The result was one of the more impressive demo reels making an AV look like a human driver. I found the link for those who haven't seen it:

Yeah, I remember that presentation. The demo reels are quite good. My issue with Wayve is that they are still at the demo stage. They did that recent test drive with Bill Gates and apparently, had quite a few interventions on the short drive. To be fair, it was driving in London which can be challenging. But still, if you are only doing short demos and require interventions every few miles, that does not bode well for actually scaling a consumer product. Certainly, Wayve's vision-only end-to-end approach is intriguing. If it works, it would be a very cheap, generalizable and scalable approach that would work great on consumer cars. But I think Wayve's challenge will be getting beyond the demo stage and actually deploying a legit commercial product. If they can't put it on a consumer car that people can buy, what's the point of demos?
 
Certainly, Wayve's vision-only end-to-end approach is intriguing. If it works, it would be a very cheap, generalizable and scalable approach that would work great on consumer cars. But I think Wayve's challenge will be getting beyond the demo stage and actually deploying a legit commercial product. If they can't put it on a consumer car that people can buy, what's the point of demos?
Yes, I agree with all that. The Wayve talk (Alex Kendall) was very impressive, but they may have a lot more problems with a more complete end-to-end attempt. I think Tesla and probably the others find that they need intermediate, engineer-understandable monitoring points, which preclude a true end-to-end architecture. Also, they may find that it's difficult to manage and partition the task if they are trying to train one giant end-to-end ML stack.

On the other hand, FSD could definitely use some of the planning capabilities that Wayve has demo'ed. I wonder what Wayve's real goal is - is it to make big money as a standalone IP provider, or to get bought by a deep-pockets OEM?
 
Really odd behavior by Cruise. The car veers left like it is aiming at the old lady crossing the street and then corrects right again to get back into its lane.


Also, later in the ride, customer service comes on and cancels the ride due to low charge and tells the customer to exit the vehicle and hail another ride on the app. That is terrible service.
 
Really odd behavior by Cruise. The car veers left like it is aiming at the old lady crossing the street and then corrects right again to get back into its lane.

There's nothing odd about these things if you think about it.

Elon and Karpathy and anyone with any sense of what's important in NN training for autonomy will say again and again and again: the fleet and the data are currently a necessity for generalized FSD.

Companies that either don't have a large fleet in diverse areas or can't gather raw sensor data can't compete with Tesla long-term.

I don't understand why it's so difficult to see that now, with V11 brewing. A year from now, everything from Cruise and Waymo will look elementary compared to Tesla FSD.
 
Really odd behavior by Cruise. The car veers left like it is aiming at the old lady crossing the street and then corrects right again to get back into its lane.


Also, later in the ride, customer service comes on and cancels the ride due to low charge and tells the customer to exist the vehicle and hail another ride on the app. That is terrible service.
Yep, that was me on my first daytime non-employee ride. It was not the best way to start. The rest of the ride to my destination and back on that day was generally pretty good though.



On another day I did notice more problems. Sometimes the car was stopped at an intersection on a red light yet repeatedly let off the brakes to inch forward towards the crosswalk lines before applying them again, over and over. Some humans do that and it’s annoying just like it was for me as a passenger in the Cruise.

On a small number of occasions the Cruise had issues with momentary false turn signaling much like FSD sometimes does. It tends to dynamically change routes based upon traffic conditions which may have been part of the problem.

On another occasion, the car failed to advance into the intersection during an attempt at an unprotected left turn. The cars from the other direction kept going through until the last moment, and the Cruise failed to take the turn through the entire signal cycle. A human would have advanced into the intersection and forced the turn at the very end of the cycle.

Like FSD, there were rare occasions where the steering wheel “thrashed” while the car was not moving. I don’t understand that. It seems really obvious to inhibit steering wheel motion while stationary.
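A minimal sketch of the guard being suggested (hypothetical names; a real stack might deliberately pre-position the wheels before a turn while stopped, so this is an oversimplification):

```python
# Toy sketch of the suggested inhibit: suppress steering actuation
# whenever the vehicle is effectively stationary, so the wheel
# cannot "thrash" at a stop. Names and thresholds are invented.

STOPPED_SPEED_MPS = 0.1  # below this, treat the vehicle as stationary

def steering_command(requested_angle_deg: float, current_angle_deg: float,
                     speed_mps: float) -> float:
    """Hold the wheel at its current angle while stopped; otherwise
    pass the planner's requested angle through."""
    if speed_mps < STOPPED_SPEED_MPS:
        return current_angle_deg  # ignore planner churn at a standstill
    return requested_angle_deg

print(steering_command(15.0, 0.0, speed_mps=0.0))  # 0.0 -- wheel held
print(steering_command(15.0, 0.0, speed_mps=5.0))  # 15.0 -- normal control
```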

Another thing that I noticed is that the car reported consuming 3-4 kW, apparently for the sensors and computer, even while stationary. It seemed to result in a city driving range of around 175 miles in the Bolt EV which would normally be 300 miles or more at such slow speeds. Presumably future versions of the tech will draw much less power.
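A back-of-envelope check makes that range hit plausible (pack size, baseline consumption, and average speed below are rough assumptions, not Cruise specs): a constant parasitic load adds energy per mile in inverse proportion to speed, so it hurts most in slow city driving.

```python
# Rough range estimate with a constant sensor/compute load.
# All figures are ballpark assumptions for illustration.

def range_miles(pack_kwh: float, base_wh_per_mi: float,
                parasitic_kw: float, avg_speed_mph: float) -> float:
    """Range when a constant load (kW) is spread over miles at avg speed."""
    extra_wh_per_mi = parasitic_kw * 1000.0 / avg_speed_mph
    return pack_kwh * 1000.0 / (base_wh_per_mi + extra_wh_per_mi)

# ~65 kWh Bolt pack, ~217 Wh/mi baseline (300 mi city), 3.5 kW load,
# ~18 mph average city speed:
print(round(range_miles(65, 217, 0.0, 18)))  # 300 -- no parasitic load
print(round(range_miles(65, 217, 3.5, 18)))  # 158 -- near the ~175 observed
```

So a 3-4 kW draw at city speeds is enough on its own to explain roughly halving the range.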

Overall, however, the Cruise does some things clearly better than the latest 11.3.6 FSD. It is gently aggressive at asserting itself at intersections with stop signs. It inflicts much lower G-forces on passengers while accelerating, slowing, and stopping as well as when steering in motion.

Yet it is not perfect. It gets stuck and sometimes needs remote intervention. On another trip it appeared to require at least 3 remote interventions to get itself out of trouble. That may be okay temporarily as they scale up service but it’s probably not sustainable over the long run. I assume the need for remote interventions will be reduced as they continue to develop the system and someday the remote interventions may come from a bigger cloud-based compute system rather than from a human.

I have ridden in a 5th gen Waymo in San Francisco recently but not enough to form a solid opinion of its driving behavior versus Cruise or Tesla FSD. Generally, it seemed more like Cruise in the comfort of its driving style.
 
There's nothing odd about these things if you think about it.

Elon and Karpathy and anyone with any sense of what's important in NN training for autonomy will say again and again and again: the fleet and the data are currently a necessity for generalized FSD.

Companies that either don't have a large fleet in diverse areas or can't gather raw sensor data can't compete with Tesla long-term.

I don't understand why it's so difficult to see that now, with V11 brewing. A year from now, everything from Cruise and Waymo will look elementary compared to Tesla FSD.
Didn't you say the same thing but "6 months" back in 2020?
One thing I have learned about some Tesla fans is that they will believe ANYTHING no matter what, and it's always about the future. Never about the past or the present.


You mean this V11? that almost gets into multiple accidents in a single drive? Or do you just ignore the major mistakes by v11?
 
Yep, that was me on my first daytime non-employee ride. It was not the best way to start. The rest of the ride to my destination and back on that day was generally pretty good though.



On another day I did notice more problems. Sometimes the car was stopped at an intersection on a red light yet repeatedly let off the brakes to inch forward towards the crosswalk lines before applying them again, over and over. Some humans do that and it’s annoying just like it was for me as a passenger in the Cruise.

On a small number of occasions the Cruise had issues with momentary false turn signaling much like FSD sometimes does. It tends to dynamically change routes based upon traffic conditions which may have been part of the problem.

On another occasion, the car failed to advance into the intersection during an attempt at an unprotected left turn. The cars from the other direction kept going through until the last moment, and the Cruise failed to take the turn through the entire signal cycle. A human would have advanced into the intersection and forced the turn at the very end of the cycle.

Like FSD, there were rare occasions where the steering wheel “thrashed” while the car was not moving. I don’t understand that. It seems really obvious to inhibit steering wheel motion while stationary.

Another thing that I noticed is that the car reported consuming 3-4 kW, apparently for the sensors and computer, even while stationary. It seemed to result in a city driving range of around 175 miles in the Bolt EV which would normally be 300 miles or more at such slow speeds. Presumably future versions of the tech will draw much less power.

Overall, however, the Cruise does some things clearly better than the latest 11.3.6 FSD. It is gently aggressive at asserting itself at intersections with stop signs. It inflicts much lower G-forces on passengers while accelerating, slowing, and stopping as well as when steering in motion.

Yet it is not perfect. It gets stuck and sometimes needs remote intervention. On another trip it appeared to require at least 3 remote interventions to get itself out of trouble. That may be okay temporarily as they scale up service but it’s probably not sustainable over the long run. I assume the need for remote interventions will be reduced as they continue to develop the system and someday the remote interventions may come from a bigger cloud-based compute system rather than from a human.

I have ridden in a 5th gen Waymo in San Francisco recently but not enough to form a solid opinion of its driving behavior versus Cruise or Tesla FSD. Generally, it seemed more like Cruise in the comfort of its driving style.

Waymo is at least 2 years ahead of Cruise, and it shows in functionality, agility, and reliability.
 
Didn't you say the same thing but "6 months" back in 2020?
One thing I have learned about some Tesla fans is that they will believe ANYTHING no matter what, and it's always about the future. Never about the past or the present.

Wow, 2x-safety FSD will be 2 years later than I expected. Time to put me and Elon in jail! Wooptydoo

No one is saying anything about Mobileye, Cruise, and Waymo progressing slowly over the past 5 years. Mobileye isn't even in the picture at this point.
 
Wow, 2x-safety FSD will be 2 years later than I expected. Time to put me and Elon in jail! Wooptydoo

No one is saying anything about Mobileye, Cruise, and Waymo progressing slowly over the past 5 years. Mobileye isn't even in the picture at this point.
"2 years later than I expected" would be 2022. In 2022 Tesla FSD couldn't even go 25 miles on average without trying to kill the occupants. Despite the wealth of video evidence highlighting the shortcomings of Tesla FSD, many fans like you have chosen to overlook these glaring issues. On the other hand, Waymo has made significant strides in the same period, deploying driverless technology in various cities and expanding operational design domains to accommodate diverse weather conditions and road scenarios.

Waymo has deployed driverless in East Valley Phoenix, Downtown Phoenix, SF, and LA, and expanded its weather ODD from just sunny and light rain to heavy rain and dense fog, and from no construction to handling construction. It is also probably days or weeks away from going driverless on highways. They are doing this while performing the entire DDT.

During these nearly three years, Tesla FSD has yet to support essential Dynamic Driving Tasks:

- Doesn't perform maneuvers that require reversing

- Doesn't perform U-turn maneuvers

- Doesn't perform three-point turns

- Doesn't handle dead-ends

- Doesn't handle traffic hand signals

- Doesn't handle/stop for emergency vehicles

- Doesn't stop for school bus stop signs

- Doesn't park car at destination

As for Mobileye, it would be disingenuous to conceal the disappointment experienced by Chinese customers. Numerous delays have arisen, partly due to the challenges of establishing infrastructure in China. However, responsibility for 51% of these issues lies with Mobileye, as they have faced difficulties and progressed at a slower pace than anticipated. The company has even admitted that competitors like Waymo and Cruise have outperformed them in Level 4 deployment. So it's not just that Tesla is beating them in deploying L2+; even Huawei has beaten them in delivering L2+ and will be in 45 cities by the end of the year.

Mobileye's next SuperVision over-the-air update is scheduled for June, but it remains uncertain whether this target will be met. The company has encountered several obstacles, including having to develop new safety systems (AEB, FCW, BSW, BEV, etc.), OEM launch timing, and creating a base Level 2 system (LCC, ACC, etc.) from scratch in 2021. Consequently, the expected timeline for SuperVision to be available on consumer vehicles outside of China is 2024 in the European Union and 2025 in the United States, since the current launches are late 2023 in the EU (Zeekr) and then late 2024 in the US (Polestar).
 
"2 years later than I expected" would be 2022. In 2022 Tesla FSD couldn't even go 25 miles on average without trying to kill the occupants. Despite the wealth of video evidence highlighting the shortcomings of Tesla FSD, many fans like you have chosen to overlook these glaring issues.

What about the fact that the issues / clips / videos you pointed out a year ago are now fixed?

The problem with you pointing out anything fsd related is that 2 months from now, they're outdated. That's how fast Tesla is moving.

Show us 11.3.6 trying to kill people. That's more interesting right now.
 
Really odd behavior by Cruise. The car veers left like it is aiming at the old lady crossing the street and then corrects right again to get back into its lane.


Also, later in the ride, customer service comes on and cancels the ride due to low charge and tells the customer to exit the vehicle and hail another ride on the app. That is terrible service.
Weren't we taught that old ladies are 10 points? 😂
 
What about the fact that the issues / clips / videos you pointed out a year ago are now fixed?

Yes, it is worth noting that some of the concerns raised since the FSD Beta release in 2020 have been addressed and resolved. This is very good. However, it is essential to recognize that these issues should not have been present in the first place.

The release of the FSD Beta in its initial state suggests that Tesla expedited the launch. For example, Omar reported that when he first activated FSD Beta, the system dove toward parked cars and failed at basic lane keeping. By comparison, a review of 100 Huawei ADS videos from their 2022 release in China shows no instances of similar performance issues.

The problem with you pointing out anything fsd related is that 2 months from now, they're outdated. That's how fast Tesla is moving.
The video clip in question was taken on March 20th, slightly over a month ago. This exemplifies a common tendency to shift focus away from past performance issues and concentrate on future prospects. Regardless of the current state of the FSD beta, there are those who will continue to assert that all problems will be resolved within the next two weeks. However, as time passes and these predictions prove to be inaccurate, these individuals may not reevaluate their stance or acknowledge the need for a reassessment.

It is particularly striking that some individuals who maintain an unwavering optimism about FSD beta's future improvements are also inclined to use dated materials from competing companies to discredit their progress. For example, the cone incident involving Waymo is still cited by some in debates, despite its age and the clear advancements that Waymo has demonstrated since then.

It is crucial to maintain a balanced perspective when assessing FSD beta. Rather than being overly optimistic for future improvements, a more comprehensive and objective analysis should be employed.

I would also like to present a counterargument by suggesting that Waymo is advancing at an even more rapid pace than FSD Beta. When a system's initial performance is significantly subpar, as it was before version 10.69, improvements are relatively easy to identify. On the other hand, when a system is already functioning at a high level, as evidenced by objective and independent statistical data, it becomes more challenging to recognize enhancements.

It is inherently more difficult to improve upon a system that is already performing well, as it likely has fewer obvious shortcomings. Secondly, these enhancements, when they do occur, may be subtler and therefore harder for testers to detect. In the case of an underperforming system, improvements tend to be more pronounced and readily apparent, whereas with a high-performing system, it may require a more extensive evaluation period to identify the areas where progress has been made.

In conclusion, it is essential to consider the baseline performance of a system when evaluating its rate of improvement. A system that starts with a lower performance level may show more noticeable advancements, whereas a system that is already functioning at a high level may exhibit more subtle enhancements that require a deeper analysis to uncover.
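The point about high-performing systems can be made concrete with a rough statistical sketch (the failure rates below are illustrative, not real Waymo or Tesla figures): the rarer the failures, the more test miles are needed before a given relative improvement is distinguishable from noise.

```python
import math  # not strictly needed here; kept for readers extending the sketch

# Approximate test mileage needed to tell two Poisson failure rates
# apart (normal approximation, ~5% significance, ~80% power).
# Rates per mile below are invented for illustration.

def miles_to_detect(rate_old: float, rate_new: float,
                    z_alpha: float = 1.96, z_beta: float = 0.84) -> float:
    """Miles per system needed to detect the rate difference."""
    diff = abs(rate_old - rate_new)
    return ((z_alpha + z_beta) ** 2) * (rate_old + rate_new) / diff ** 2

# Low-baseline system: 1 failure per 25 mi improving to 1 per 50 mi --
# the improvement shows up within roughly a thousand test miles.
print(f"{miles_to_detect(1/25, 1/50):,.0f} miles")

# High-baseline system: 1 per 25,000 mi improving to 1 per 50,000 mi --
# the same relative improvement needs about a thousand times more miles.
print(f"{miles_to_detect(1/25_000, 1/50_000):,.0f} miles")
```

This is why a 2x improvement in an already-good system can be invisible to individual testers while a 2x improvement in a poor one is obvious after a few drives.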



Show us 11.3.6 trying to kill people. That's more interesting right now.
I showed you clips of 11.3.1, and I can also show you clips of 11.3.2, which are below.
I'm sure there are clips of 11.3.6 showing similar incidents, but I don't watch as many beta videos as I used to.



Three Questions

  1. Do you think Tesla will launch FSD Beta in China by year's end, and how will it perform in showdowns vs. Huawei in the 45 supported cities?
  2. If Huawei outperforms Tesla FSD, will you finally admit that Tesla FSD is NOT ahead and that you were mistaken?
  3. You made a new prediction of 1 year. Are you saying that in 1 year Tesla will have L5, or L4 everywhere? What would Tesla's robotaxi rollout look like to you in 1 year?
 
As for Mobileye, it would be disingenuous to conceal the disappointment experienced by Chinese customers. Numerous delays have arisen, partly due to the challenges of establishing infrastructure in China. However, responsibility for 51% of these issues lies with Mobileye, as they have faced difficulties and progressed at a slower pace than anticipated. The company has even admitted that competitors like Waymo and Cruise have outperformed them in Level 4 deployment. So it's not just that Tesla is beating them in deploying L2+; even Huawei has beaten them in delivering L2+ and will be in 45 cities by the end of the year.

Mobileye's next SuperVision over-the-air update is scheduled for June, but it remains uncertain whether this target will be met. The company has encountered several obstacles, including having to develop new safety systems (AEB, FCW, BSW, BEV, etc.), OEM launch timing, and creating a base Level 2 system (LCC, ACC, etc.) from scratch in 2021. Consequently, the expected timeline for SuperVision to be available on consumer vehicles outside of China is 2024 in the European Union and 2025 in the United States, since the current launches are late 2023 in the EU (Zeekr) and then late 2024 in the US (Polestar).

Thanks for the update. The delays are disappointing, but I am not ready to write Mobileye off just yet. I still think their approach of building "eyes on" FSD with vision first, crowdsourcing maps, RSS for safety rules, and then adding a radar/lidar layer to take the system up to "eyes off", is a fundamentally solid approach. The key is execution. If they can fix the execution, I think they will do great. Hopefully, things work better with the US and EU OEMs.