Autonomous Car Progress

What is your metric for one company being ahead of the other? Surely we can't just look at the number of sensors and say that makes one approach more advanced.

Take a look at the feature-set of SuperVision: Mobileye SuperVision™ | The Bridge from ADAS to Consumer AVs

Most of these were available with Navigate on Autopilot. And keep in mind this is still very much Level 2. "Equally important however, is that this is an ADAS system, so it still requires human oversight – meaning eyes on the road at all times, even if Mobileye’s “hands” are on the wheel."

So by what metric does this exceed Tesla? Sensor redundancy alone?

Uh, I never said that Mobileye is ahead of Tesla because they use more sensors. Adding more sensors does not automatically make one system ahead of another. So I am not sure why you are trying to argue that point with me. I simply pointed out a key difference between the two approaches. Having said that, if I had a choice between two FSD systems equal in "features" but one was vision-only and the other was vision+radar+lidar, I would pick the vision+radar+lidar one, because the extra sensor redundancy would give me more peace of mind on safety. But that is my personal preference, not a claim that one system is ahead of the other. My personal metrics are features, level of supervision, and safety.

And yes, most of those SuperVision features were available on NOA. Both the Tesla and Mobileye systems are L2 today. It remains to be seen which system will get to "eyes off" first.
 
Uh, I never said that Mobileye is ahead of Tesla because they use more sensors. Adding more sensors does not automatically make one system ahead of another. So I am not sure why you are trying to argue that point with me.

I'm not arguing with you about who is ahead. I'm arguing with @Bladerskb, who is saying that Mobileye is implicitly superior to FSD Beta because it's being licensed by other automakers.

If Tesla FSD is SO GOOD... Why ain't OEMs flocking to put it on their cars?
 
@willow_hiller @EVNow @powertoold Hmm, I kind of remember telling you all this...


Hmm, this company whose tech is 5 years behind Tesla is somehow being picked by VW and other companies instead of FSD Beta....
Why isn't anyone licensing FSD Beta?
Whether a manufacturer is willing to license tech from a major competitor has little to do with whether it is superior. For example, it's pretty well acknowledged that Superchargers are the superior charging option in the US, yet no major automaker has been willing to license them. There's a multitude of possible reasons brought up through the years:
- automakers balking at upfront infrastructure investments Elon suggested was required for licensing/access to stations
- hubris and branding issues from using a Tesla station
- Tesla's patent terms being unfavorable to larger manufacturers with a larger patent pool than Tesla (something Tesla only recently tried to address by opening up the standard)
- Actively helping Tesla, a major competitor, get better

These issues are avoided by choosing a more neutral third party (CCS being the DC-charging equivalent).
 

"This new effort builds on our strategy of advancing autonomy through evolution, starting from today’s eyes-on, hands-on driver assist systems through SuperVision-based systems that enable hands-off operation for identified use cases, leading to eventual eyes-off, hands-off autonomy."

So they're releasing a level 2 ADAS system, and they believe it has the hardware necessary for greater autonomy, but that will come later in an OTA update. Yep, definitely way ahead of Tesla....
Yes, SuperVision is an L2 ADAS system, just like Tesla's FSD Beta.
Also, that statement is referring to a future product being eyes-off. SuperVision will never be eyes-off; it's meant to be a hands-off, door-to-door system.

I'm not arguing with you about who is ahead. I'm arguing with @Bladerskb, who is saying that Mobileye is implicitly superior to FSD Beta because it's being licensed by other automakers.
I'm simply trying to evaluate @powertoold's immaculate logic that Mobileye is 5 years behind Tesla, yet OEMs are going with either Mobileye or Huawei. What gives?
 
Large Fleet / Data

Now, if we were to judge AV superiority based on fleet size and data, companies like Mobileye with their SuperVision fleet of 100k+ cars, NIO with ~50-100k+ cars, Xpeng with ~100k cars, and Huawei with ~5k cars would all be considered frontrunners ahead of Waymo and Cruise. They boast impressive arrays of sensors, including 8 MP cameras (as opposed to Tesla's 1.2 MP), surround radars and lidars, plus more powerful compute, such as NIO's 1,000+ TOPS.



When it comes to neural networks, training data that includes radar or lidar typically yields superior results compared to camera data alone. Moreover, one cannot simply add radar or lidar data to pre-existing camera data at a later stage. Consequently, the data collected by other companies is far more valuable than Tesla's, as they can ground-truth it more accurately. Tesla, on the other hand, relied on an outdated ACC radar for ground truth, one limited to tracking moving objects ahead (not static ones) with a narrow field of view.

In contrast, other companies train their vision neural networks with rich camera data, seamlessly fused with high-quality HD radar information (and in some cases, even ultra-imaging radar data), as well as high-resolution lidar data. This comprehensive approach to data collection and fusion ultimately leads to more robust and reliable neural network models.
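To make the "grounding" point concrete, here is a minimal sketch of the kind of auto-labeling a lidar-equipped fleet can do (the function name and calibration matrices are illustrative assumptions, not any company's pipeline): project the point cloud into the camera image and use the hit pixels as free, sparse depth targets for a vision network.

```python
import numpy as np

def lidar_depth_targets(points_lidar, T_cam_from_lidar, K_cam, h, w):
    """points_lidar: (N, 3) xyz in the lidar frame; T_cam_from_lidar: 4x4
    extrinsic; K_cam: 3x3 intrinsics. Returns a sparse (h, w) depth map
    usable as a training target for a camera depth network."""
    # Homogenize and move the points into the camera frame
    pts = np.c_[points_lidar, np.ones(len(points_lidar))] @ T_cam_from_lidar.T
    pts = pts[pts[:, 2] > 0.5]                  # keep points in front of the camera
    uvw = pts[:, :3] @ K_cam.T                  # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]               # perspective divide
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)  # 0 = "no label" pixel
    depth[v[ok], u[ok]] = pts[ok, 2]            # supervise only where lidar hit
    return depth
```

An old ACC radar cannot produce targets like this: it reports a handful of moving-object tracks in a narrow forward cone, which is why there was so little for Tesla to "ground" its cameras against.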

ML / Neural Network Architecture

It's crucial to remember that the cutting-edge ML and NN architectures of today didn't materialize out of thin air. Waymo, for example, had been using transformers long before Tesla adopted them. Similarly, other AV companies have been employing multi-modal prediction networks, while Tesla initially relied on their C++ driving policy before eventually making the switch.

Tesla was running an instance of its C++ driving policy (planner) as a prediction of what others would do. They then ditched that and moved to the actual prediction networks others had been using for years. Then they finally caught up and moved to a multi-modal prediction network, which others had also been using. Unlike its fans, Tesla tells you exactly what it is and is not doing in its tech talks, AI conferences and software updates. It's the fans who invent mythical fables and attach them to Tesla.
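For anyone unfamiliar with the term, here is a minimal sketch of what a multi-modal prediction head does (shapes and the winner-takes-all loss are generic textbook choices, not Tesla's or anyone else's actual network): instead of a single future trajectory per agent, it outputs K candidate trajectories plus a probability for each, so "yields" and "cuts in" can coexist as hypotheses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K, T, D = 6, 30, 2          # 6 modes, 30 future steps, (x, y) per step

class MultiModalHead(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.traj = nn.Linear(feat_dim, K * T * D)   # K candidate trajectories
        self.mode = nn.Linear(feat_dim, K)           # one logit per mode

    def forward(self, feats):                        # feats: (B, feat_dim)
        return self.traj(feats).view(-1, K, T, D), self.mode(feats)

def wta_loss(trajs, logits, gt):                     # gt: (B, T, D)
    err = ((trajs - gt[:, None]) ** 2).sum(-1).mean(-1)  # (B, K) error per mode
    best = err.argmin(dim=1)                             # closest mode wins
    reg = err.gather(1, best[:, None]).mean()            # regress only the winner
    cls = F.cross_entropy(logits, best)                  # classify which mode won
    return reg + cls
```

The winner-takes-all trick is what lets the modes specialize on different outcomes instead of collapsing to a single averaged behavior.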

Heck just days ago Elon admitted their pedestrian prediction is rudimentary.


(Compare that to Waymo, who has been doing this for a long time. Paper: [2112.12141] Multi-modal 3D Human Pose Estimation with 2D Weak Supervision in Autonomous Driving. Blog: Waypoint - The official Waymo blog: Utilizing key point and pose estimation for the task of autonomous driving)

When it comes to driving policy, many companies have been incorporating ML into their stacks long before Tesla. In fact, at AI Day 2, Tesla presented a network strikingly similar to Cruise's existing approach. Meanwhile, Waymo had already deployed a next-gen ML planner into their driverless fleet.


Of course, while Tesla was still toying with introducing ML into their planning system, Waymo was already using an ML planner and even releasing a next-gen version for their driverless fleet.


Simulation

As for simulation, Elon Musk dismissed its value back in 2019, calling it "doing your own homework" and basically saying it was useless. This was when Waymo, Cruise and others were knee-deep in simulation, using it for basically every part of the stack.
In one of his tech talks, Andrej Karpathy even said they weren't worried about simulation but focused on their bread and butter (the real world).
In late 2018, The Information released a report saying Tesla's simulation effort was in its infancy.
Fast forward to AI Day 2021, and Musk was singing a different tune, declaring that "all of this would be impossible without simulation." Instead of building their own simulation tech, Tesla opted to modify and use Epic Games' procedural generation system for UE5.

Conclusion

So, if we were to compare Waymo and Tesla in detail, it's clear that Tesla lags behind in several key areas: ML networks, compute power (Waymo's TPU v4 vs. Tesla's GPU-based training), sensors (Waymo's higher-quality cameras and sensor coverage), simulation tech, driving policy, and support for all dynamic driving tasks. However, this doesn't necessarily mean that Tesla is trailing overall; one could still argue that they're 10 years ahead.

But the fact of the matter is that these points still stand.

It's certainly tempting to lean into the data-advantage argument, but I'd rather we examine the details and ask logical questions than rely on buzzwords and vague claims. We should consider how data truly affects the perception, prediction, and planning stacks of AV architectures, and critically assess the extent to which data augmentation and simulation can compensate for any shortcomings.

  • If billions of miles of real-world data are indeed essential, then why aren't the millions of tourists who visit San Francisco / Phoenix annually from all over the world endangered by Waymo vehicles? They are, after all, not part of the perception dataset.
  • What about the countless tourists who drive into San Francisco / Phoenix and are not rear-ended by Waymo? Their presence, too, is absent from the perception dataset.
  • Why doesn't Waymo mispredict pedestrians' actions and collide with them, or mispredict other vehicles' movements and sideswipe or crash head-on into them? Clearly, these tourist behaviors are not in the prediction dataset either.

These concerns pertain to the perception and prediction stack, but let's also examine the planning and driving policy stack. Within the roughly 200,000 miles of divided highways in the United States, how many miles are truly unique? How many miles are not well represented elsewhere?

From my perspective, a mere 1% of these highways are genuinely distinctive—for example, on-ramp interchanges, cloverleafs, short on/off ramps, close on/off ramps, on/off ramps requiring double lane merges, etc. However, what do you believe? Are 90%, 75%, 50%, or 25% of these highways unique?

This will help us figure out how much data is needed and how much can be covered through data augmentation & simulation at scale.
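As a back-of-envelope aid to that question (the percentages are the post's hypotheticals, not measurements), here is what each answer implies in raw mileage:

```python
total_divided_hwy_miles = 200_000     # approximate US figure cited above
for unique in (0.01, 0.25, 0.50, 0.75, 0.90):
    print(f"{unique:>4.0%} unique -> {total_divided_hwy_miles * unique:>9,.0f} "
          "miles that augmentation / simulation cannot trivially stand in for")
```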

Large Fleet / Data

NIO / Mobileye don't have anything near 100k users operating software with a full sensor suite. Mobileye has in fact... a few hundred vehicles?


But they do have more sensors, as does basically everyone vs Tesla. So perception should be a slam dunk for all of those companies.

Except perception isn't the main issue limiting progress to geographically robust L4/L5, it's planning / prediction etc...

Tesla's data advantage is not just in historic data collection. You are right that old data cannot be used for the perception stack of, say, HW4. But the data throughput from new vehicles is quite high when you are producing at an annualized rate of 2 million vehicles at the time you change hardware.
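The throughput claim as rough arithmetic (both numbers are assumptions for illustration, not Tesla figures):

```python
fleet_after_one_year = 2_000_000     # vehicles, at the stated annualized run rate
avg_miles_per_vehicle_per_day = 30   # assumed
print(f"{fleet_after_one_year * avg_miles_per_vehicle_per_day:,} fleet-miles/day")
# -> 60,000,000 fleet-miles/day on the new hardware alone, one year in
```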

ML / Neural Network Architecture

Waymo wasn't using transformers "long before" Tesla, as they were just invented in 2019 :D. You are right that Tesla isn't using as many or as refined NNets for planning / prediction as others... so? Do you somehow think they aren't capable?

Simulation

Wow, you still don't understand how simulation and real-world data are supposed to integrate. Saying you need simulation does not mean you don't need unique real-world data. They do not counter each other. You need as much unique real-world data as you can gather to uncover all the "unknown unknowns", and you use simulation to "fill in the gaps". This is well established in the machine-learning world! Their combined effect is essentially multiplicative. The whole gimmick with Waymo / Cruise touting simulation is to replace unique real-world data, which is just not possible. Simulation does not uncover "unknown unknowns".
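A minimal sketch of that "real data uncovers the unknown unknowns, simulation fills in the gaps" workflow (the dataset stand-ins and the 80/20 mix are placeholders, not anyone's actual recipe): keep real logs dominant and oversample simulated variations of the rare cases they surfaced.

```python
import torch
from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
                              WeightedRandomSampler)

# Stand-ins for the two sources (shapes are arbitrary for the sketch)
real_ds = TensorDataset(torch.randn(9000, 8))   # driving logs: only source of novel events
sim_ds  = TensorDataset(torch.randn(1000, 8))   # simulated variations of known rare cases

mixed = ConcatDataset([real_ds, sim_ds])
# Per-sample weights: real data stays dominant, sim fills the gaps
weights = torch.cat([torch.full((len(real_ds),), 0.8 / len(real_ds)),
                     torch.full((len(sim_ds),), 0.2 / len(sim_ds))])
sampler = WeightedRandomSampler(weights, num_samples=len(mixed))
loader = DataLoader(mixed, batch_size=32, sampler=sampler)
```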

Conclusion

You proclaim that Waymo is doing well in the cities they operate in and that there aren't that many unique conditions in the U.S. percentage-wise, so any issues can be handled through simulation? Nothing could be further from the truth.

If even 0.01% of roads / interactions are novel versus what has been seen in SF & Phoenix, that is still too much for Waymo's stack to perform reliably in those areas.
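To put that 0.01% in perspective (the interaction rate is my assumption, for illustration only):

```python
novel_fraction = 0.0001          # 0.01% of interactions are novel
interactions_per_mile = 1.0      # assumed
print(f"one never-seen situation every "
      f"{1 / (novel_fraction * interactions_per_mile):,.0f} miles")
# -> every 10,000 miles, far more often than a driverless failure budget allows
```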

That is why I theorized Waymo / Cruise would not be able to expand without significant effort: every time they go to a new city, there is some unexpected data, and they must retrain / refine their models. That does not scale quickly.

And what do you know? Waymo / Cruise have barely progressed. If their stack was so geographically robust, why does it take years to expand SF and Phoenix ODDs?

If their software was so robust, why aren't they operating a few vehicles in every city, or at least 5-10 cities as a show of confidence?

Maybe because it isn't as easy as "map 'n play"?

I personally don't think Tesla is 'ahead' of Waymo or Cruise; it's really hard to say, because the methods are apples and oranges.

But here's what I do know:

Perception is mostly a solved problem for AV companies. If Tesla eventually needed better perception, they could add the hardware and the software team relatively easily in a few years.

Planning / prediction / controls development is not hindered by Tesla's perception stack. Tesla can iterate hardware all they want; the learning of how to architect all these NNs is carried forward.

Tesla is attempting to solve geographic robustness right now, Waymo / Cruise are waiting. Other AV companies are waiting.

Tesla's approach is financially conservative. They are around cash-flow neutral, whereas Waymo / Cruise aren't even gross margin positive. Scaling for the latter means more cash burn for a few years. This risks investors turning off the faucet if they don't see enough progress.

Waymo / Cruise need to scale rapidly and cost-effectively the next few years. If it takes longer, it's to Tesla's benefit.

TLDR: Tesla does nothing special compared to other AV companies, except not losing money, which gives them more time. But they do have access to more geographically diverse and unique data, which does matter. And don't take anyone's word on this point if they work for one of these companies or aren't a data scientist.
 
Large Fleet / Data

NIO / Mobileye don't have anything near 100k users operating software with a full sensor suite. Mobileye has in fact... a few hundred vehicles?
But they do. Let's home in on Mobileye for a moment. All cars come equipped with SuperVision hardware and Mobileye software. However, not every car is currently in the beta-testing phase for on-ramp to off-ramp and door-to-door capabilities, much like how not everyone had access to the FSD Beta.

But it's quite simple to activate the hidden visualization on a Zeekr 001 by tweaking a few configurations.


It's amusing to observe how the goalposts are perpetually shifting.

Initially, the claim was that Tesla was 10 years ahead because they had 100k HW2 cars gathering data. Then the number grew to 250k, then 500k, followed by 1 million HW2 cars. Next, the argument shifted to Tesla's lead being due to 1 million HW3 cars, and then 100k FSD Beta Testers. Most recently, the narrative has morphed into Tesla's supposed 10-year advantage being attributed to 400k FSD Beta Testers collecting data.

The conversation has evolved from merely requiring cars with a full sensor suite to collect data, to now insisting that only cars with door-to-door driving software actually count.

But they do have more sensors, as does basically everyone vs Tesla. So perception should be a slam dunk for all of those companies.

Except perception isn't the main issue limiting progress to geographically robust L4/L5, it's planning / prediction etc...

You've made the point that perception systems are generally applicable, but you haven't provided any logical reasoning or evidence to support this. At the same time, you claim that prediction and planning systems are NOT general, but you haven't explained how you arrived at this conclusion either.

Can you see the issue here?

On my end, I can explain why their perception system is general. It's due to something known as sensor fusion! Because LiDAR has far less variability than cameras (as it deals with 3D shapes rather than colors), it can be trained on, say, San Francisco pedestrians and still effectively detect someone from Africa or someone wearing a Halloween costume. A camera-only system might struggle in these cases. When you factor in general object detection and overall accuracy, it's clear that there's no real competition.
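A rough illustration of that redundancy argument (the box format and threshold are illustrative, not Waymo's pipeline): an unusual costume changes pixels, not geometry, so a lidar check can still confirm that something solid occupies the space.

```python
import numpy as np

def lidar_confirms(box_min, box_max, points, min_hits=10):
    """box_min / box_max: (3,) corners of a candidate 3D box;
    points: (N, 3) lidar xyz in the same frame."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return int(inside.sum()) >= min_hits   # shape exists even if appearance is odd
```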

Now it's your turn. Can you elaborate on why Waymo's prediction system is not general and why millions of people visiting San Francisco, Phoenix, and Los Angeles each year aren't being fatally struck by driverless Waymo vehicles due to mis-predicting their behavior?

Perhaps you have a well-reasoned explanation, and I'd love to hear it, rather than simply stating that planning/prediction doesn't scale.
Tesla's data advantage is not just in historic data collection. You are right that old data cannot be used for the perception stack of, say, HW4. But the data throughput from new vehicles is quite high when you are producing at an annualized rate of 2 million vehicles at the time you change hardware.
Tesla definitely has the ability to use HW2/HW3 data in conjunction with HW4 camera data for training purposes. That's not the point I'm disputing.

What I'm emphasizing is that in pretty much any machine learning scenario, neural networks perform better when they have access to LiDAR data. Given that Tesla doesn't have LiDAR or high-res surround radars on their vehicles, they miss out on those advantages. Consequently, their perception and prediction systems will always fall short compared to a model equipped with LiDAR and high-res surround radars.

ML / Neural Network Architecture

Waymo wasn't using transformers "long before" Tesla, as they were just invented in 2019 :D. You are right that Tesla isn't using as many or as refined NNets for planning / prediction as others... so? Do you somehow think they aren't capable?
No. Transformers were invented by Google Brain in 2017. Come on, these are basic ML facts.

Simulation

Wow, you still don't understand how simulation and real-world data are supposed to integrate. Saying you need simulation does not mean you don't need unique real-world data. They do not counter each other. You need as much unique real-world data as you can gather to uncover all the "unknown unknowns", and you use simulation to "fill in the gaps". This is well established in the machine-learning world! Their combined effect is essentially multiplicative. The whole gimmick with Waymo / Cruise touting simulation is to replace unique real-world data, which is just not possible. Simulation does not uncover "unknown unknowns".
That's not quite accurate. It seems like you've consistently sidestepped the main idea and purpose behind my statements regarding simulation. While Tesla was focusing on roughly 1% simulation, Waymo was devoting about 99% of their efforts to it. You appear to be suggesting that all simulation efforts are created equal, which is obviously not the case; no two software systems are identical, as you're well aware. In any case, I've provided evidence that Tesla was indeed putting minimal emphasis on simulation, concentrating mainly on real-world data, and then completely changed their approach at AI Day 1.
 
You proclaim that Waymo is doing well in the cities they operate in and that there aren't that many unique conditions in the U.S. percentage-wise, so any issues can be handled through simulation? Nothing could be further from the truth.

If even 0.01% of roads / interactions are novel versus what has been seen in SF & Phoenix, that is still too much for Waymo's stack to perform reliably in those areas.

That is why I theorized Waymo / Cruise would not be able to expand without significant effort: every time they go to a new city, there is some unexpected data, and they must retrain / refine their models. That does not scale quickly.
Waymo claims that their objective is to select a city, operate within it for a month, make necessary improvements or adjustments, and then roll out their driverless service during the subsequent month.

You, by contrast, assert that this is an unattainable feat: that when Waymo's driverless system is activated in a new city on day one and needs to execute a three-point turn to get out of a dead end, it will inexplicably suffer a lapse in memory and forget how to perform the maneuver, despite the numerous videos of it doing exactly that in San Francisco and Phoenix.
And what do you know? Waymo / Cruise have barely progressed. If their stack was so geographically robust, why does it take years to expand SF and Phoenix ODDs?
You are misunderstanding Waymo's intentions. They are not primarily focused on expanding geographically, but rather on refining the Operational Design Domain (ODD) of their Waymo Driver:

  • In 2020, their driverless operation included: Suburb ODD, Light Rain ODD, Daytime ODD, and Nighttime ODD.
  • By 2023, they've added:
    • City ODD
    • Urban ODD
    • Moderate Rain ODD
    • Heavy Rain ODD
    • Light Fog ODD
    • Heavy Fog ODD
    • Construction ODD
    • Reroute ODD
    • Dense/Unstructured Parking Lot ODD
    • and soon, Highway ODD
Their current goal is to scale driverless ODDs, not cities. This way, when they decide to expand into new cities, there will be fewer limitations and obstacles.

You seem to be suggesting that when they're prepared to scale to a new city, their system will suddenly suffer from amnesia and forget all these capabilities.

If their software was so robust, why aren't they operating a few vehicles in every city, or at least 5-10 cities as a show of confidence?
Even if they did, it wouldn't matter to you. I could see you coming up with another criticism, maybe to the effect that they are just running a single car in each of these 10 cities and that it's an absolute joke, blah blah.
Maybe because it isn't as easy as "map 'n play"?
Waymo is saying they are aiming for a map, drive (for a month) then driverless process.
Tesla's approach is financially conservative. They are around cash-flow neutral, whereas Waymo / Cruise aren't even gross margin positive. Scaling for the latter means more cash burn for a few years. This risks investors turning off the faucet if they don't see enough progress. Waymo / Cruise need to scale rapidly and cost-effectively the next few years...
So you answered your own question about why Waymo wouldn't want to scale a system in 2021 whose driverless ODD consisted of basically suburbs and light rain.
 
What I'm emphasizing is that in pretty much any machine learning scenario, neural networks perform better when they have access to LiDAR data. Given that Tesla doesn't have LiDAR or high-res surround radars on their vehicles, they miss out on those advantages. Consequently, their perception and prediction systems will always fall short compared to a model equipped with LiDAR and high-res surround radars.

Well, actually, I agree with you on the benefits of lidar: its direct relationship to physics means the perception system should scale more robustly than a camera-only system.

Meaning, if someone trained their camera-only perception system in SF and Phoenix, it probably hasn't seen all sorts of weird optical edge cases, like the sunset in some high-latitude city producing a distribution of light wavelengths it hasn't encountered, or some mix of hail precipitation with sunlight, or reflections off of certain signs / buildings constructed with materials different from what was seen before.

Some of those things could be helped by simulation, but some would have been missed and only discovered through widespread data collection to make the system more robust.

All statistically learned models need this, but especially ones that are more heavily dependent on statistical estimation vs. direct derivation from physics.

A Tesla FSD based on 1.2 MP cameras trained only in the Bay Area would have much worse perception than Waymo. A Tesla FSD trained nationwide will get much closer to Waymo's perception (especially with increased camera resolution).

Now, you are asking me why I think the same concept applies to planning / prediction? Because it applies in all statistically learned models; the question is how much, and how often.

You are essentially asking me to list the "unknown unknowns" that could create different problems in different cities that haven't been previously learned. Of course I can't, because they aren't evident; if they were, they would already be modeled and handled in simulation. That is not a cop-out, that is the point.

I can guess at the types of examples. For instance, Waymo probably doesn't have a lot of data on how pedestrians and cars act when there is black ice on the roads and sidewalks while it is raining and people are in a hurry and "act" differently. Or how pedestrians and cars interact with unique intersections in a suburban town in the upper Midwest vs. the Deep South. These examples aren't solved by premapping; the system needs to encounter the actual real-world data to learn better.

These examples may seem extreme, but that's the problem with robust autonomy: the standards of accuracy are so high that these weird cases matter.
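As a toy illustration of that statistical point (synthetic numbers, nothing to do with real driving data): a model fit on one "region" degrades when the underlying distribution shifts, which is exactly the risk with rarely seen conditions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def region(shift, n=2000):                    # 2 features, 2 classes
    X = rng.normal(shift, 1.0, (n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)   # boundary moves with region
    return X, y

Xa, ya = region(0.0)                          # "training city"
Xb, yb = region(1.5)                          # "new city"
clf = LogisticRegression().fit(Xa, ya)
print("same city:", clf.score(Xa, ya))        # near 1.0
print("new city :", clf.score(Xb, yb))        # roughly coin-flip under shift
```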

You like to picture me as some Tesla FSD über-bull, which is laughable if you read my post history. I was always bearish on FSD for robotaxi purposes as being 1) not possible with previous hardware (and even HW3), and 2) subject to a long tail of progression to get to an FSD product even when it 'seems' close. I've always been focused on its L2 benefits to margins.

Waymo has made great progress, of course; my 'criticism' has been consistent in that data diversity matters. Waymo can ultimately get this data diversity, but their trajectory means it will take quite a while. Like, maybe 5 years? This means other companies (like Tesla) have that much leeway for "catching up".

Remember, ultimately no one will care about the state of today's models. Everything is preparation for retraining models 2-5 years from now: hitting "return" on the automated training pipeline, with whatever NNet architecture has been optimized over many years of experience, combined with the dataset chosen to maximize real-world performance.

I don't know what that will look like between companies in a few years. What I do know is that Waymo / Cruise aren't scaling as quickly as I would expect if their models were already robust (even within the same metro area), and that Tesla's progress will likely allow for a highly competent L2 system that boosts margins in, say, a year and a half.
 
New software update release notes from Cruise:

Improved core driving behavior
  • Shipped DLA v3 & KSE VRU v13 which improves tracking of bikes by up to 30%, allowing the AV to behave and respond more safely around these vehicles on higher speed roads
  • Shipped PSeg v5.1.5 that improves tracking by up to 25% for several classes of less common objects, such as animals and small debris in the road.
  • Shipped STA-V v24 that improves prediction of incoming cross traffic vehicles in the right lane when the AV is taking a right turn on major roads with splitting lanes by 15%
  • Shipped Vehicle MTL v3 which improves vehicle open door detections by 49%.
  • Shipped improvements to our ability to predict trajectories for articulated vehicles improving behavior around these vehicles by 20-30%.
  • Further increased safety on higher speed roads and low friction surfaces by giving the AV more lateral maneuverability to make evasive maneuvers when needed.
  • Shipped TREX V2 which improves the positioning of the AV when preparing to go around a double-parked vehicle and improves braking smoothness when traveling at higher speeds.
  • Improved AV performance and reliability when operating at very low speeds.
  • We improved the AV’s ability to maintain lane positioning and speed at lane merges and forks when traveling at higher speeds.
Improved reliability
  • Shipped TSEL v13 and v14 which improved maneuver completion around double parked vehicles on the road by 50%; including situations encountered during challenging high traffic day time environments.
  • Improved remote assistance reroute capabilities to support higher speed roads and better handle unexpected road closures.
  • Increased overall AV stability through bug fixes and improved fault tolerance.
Improved rider experience
  • Improved overall trip time in 12% of trips through improved routing and lane changes.
  • Added audio chime to remote door close scenario for better rider experience.
  • Improved feedback workflow post ride.

 
I know Ghost Autonomy is not a leader or anything (they just have "basic AP" with auto lane changes on highways), but their concept of "collaborative autonomy" is interesting. Basically, the ADAS is always on, guiding the car in the lane and maintaining speed with traffic, but the driver can take over at any time by simply grabbing the wheel. And when the driver takes their hands off the steering wheel, the ADAS automatically resumes lane keeping.
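Here is a hedged sketch of what that hand-off logic might look like (a toy state machine with assumed signals and thresholds, not Ghost's implementation):

```python
from enum import Enum

class Mode(Enum):
    ADAS = 1      # system is lane keeping / speed matching
    DRIVER = 2    # human is steering

TORQUE_OVERRIDE_NM = 2.5    # assumed take-over threshold

def next_mode(mode, driver_torque_nm, hands_on):
    if mode is Mode.ADAS and driver_torque_nm > TORQUE_OVERRIDE_NM:
        return Mode.DRIVER          # driver grabbed the wheel: yield instantly
    if mode is Mode.DRIVER and not hands_on:
        return Mode.ADAS            # hands off: lane keeping resumes
    return mode
```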

Here is a video where they discuss it:

 