Welcome to Tesla Motors Club

Autonomous Car Progress

As mentioned, judging by how people drive around Teslas they know are loaded with cameras (there are entire YouTube channels dedicated to this), I highly doubt many care. I should also note there is zero indication Cruise is providing video footage to law enforcement even when accidents do occur (for example, the report for the recent left-turn accident appears to be based entirely on on-scene reconstruction and witness accounts).
I doubt 1 in 10 drivers know a Tesla is loaded with cameras. Heck, the majority probably don't even realize they're near a Tesla. A Cruise with a giant sensor array stuck on top is a different story.

Have you seen a police report for the left turn accident on Geary & Spruce? I've only seen the DMV report prepared by a Cruise lawyer. I can't imagine an AV company can get a permit to operate driverless in CA without agreeing to share accident footage.
 
I had to dig a bit to find it again, but there was a video years ago of AP doing the same thing in similar conditions. The Tesla was traveling at 45 mph at night when someone turned left in front of it, and the car braked to save the day (the driver claimed he never touched the brakes).


There are plenty of videos if you look up compilations of Teslas saving the day, with the car stopping just in time to avoid crashing into a vehicle rapidly crossing in front (usually red-light runners), although those are typically in daytime. They also look "superhuman".

Quite good, but not nearly as impressive. The Tesla took about 2 seconds to stop after the other driver’s intent became clear. The Cruise took closer to 1.3 seconds.

Unfortunately we don’t know what the speeds were either (appear similar but not clear).

I haven’t seen anything from Tesla as impressive as the Cruise video (and of course we’d need to know whether the driver intervened, which is difficult without cabin footage documenting pedal use - I’d certainly expect this Uber driver used the brake regardless of statements; if not, yikes!!!). Whether the driver intervened is neither here nor there, though - I would just look at reaction time. In this Tesla case the nose dipped at around the 5-second mark, while the other driver’s intent was clear just after 4 seconds (I have not gone through it frame by frame on my desktop). Arguably that nearly 1-second reaction time is worse than a human’s, since a human would likely be covering the brake and already slowing in this situation, or at least primed to do so - the lead vehicle’s brake lights had just flashed on.

Brakes applied approximately here; not superhuman!

[attached screenshot: approximate Tesla braking onset]


Cruise braking onset (approximate), but much less time since intent was clear than in Tesla case:
[attached screenshot: approximate Cruise braking onset]
 
I doubt 1 in 10 drivers know a Tesla is loaded with cameras. Heck, the majority probably don't even realize they're near a Tesla. A Cruise with a giant sensor array stuck on top is a different story.
Well, I'm in an area where Cruise and Waymo vehicles operate all the time, and I don't see drivers attempting to drive any more safely than they usually do. I highly doubt most people care, especially given they don't expect to crash, and any violations of law that do not result in a crash are unlikely to be reported.
Have you seen a police report for the left turn accident on Geary & Spruce? I've only seen the DMV report prepared by a Cruise lawyer.
Looking back, yes it appears to be the DMV report, not an actual police report.
I can't imagine an AV company can get a permit to operate driverless in CA without agreeing to share accident footage.
Looking through the program, there are only requirements to submit reports, no requirement to share footage:
Autonomous Vehicles Testing with a Driver - California DMV
https://www.dmv.ca.gov/portal/file/...program-for-manufacturers-testing-permit-pdf/

I did dig a bit more however, and Cruise did make a statement about video sharing with law enforcement in this article:
"We share footage and other information when we are served with a valid warrant or subpoena, and we may voluntarily share information if public safety is at risk."
Report: SFPD Already Using Surveillance Video From Self-Driving Cars
 
In my opinion Tesla is 90% there. Granted, the last 10% will be difficult, but they are making consistent improvements and haven't even fully incorporated all the technology they plan to use yet. Tesla's goal is much more ambitious than Cruise's or Waymo's because they want a consumer car that can drive anywhere, not just in premapped, limited areas.
I'll bet it is not even 50% there.
 
It is an interesting take on Tesla's "business model": they do state a goal of robotaxis, but unlike every robotaxi company, they are already getting revenue from the unfinished product. If they do get to robotaxis, that's even more revenue.

My uninformed prediction is Tesla won't achieve Level 5 without at least one more hardware revision and without Dojo up and running. There's a possibility a new "FSD" chip prototype was running in Optimus, so that aspect may come sooner than people think.
They do, but I haven't seen them do any real material work toward robotaxis. For one thing, robotaxis need more than just software; they need fleet support infrastructure (that's another reason they get geofenced). Even if the software were "perfect" and never got stuck (and in practice it will get stuck all the time), it still needs people nearby to clean the cars and keep things maintained. It's like the scooter-rental industry: the costs are much larger than one would think. Maybe Tesla can do robotaxi maintenance as an offshoot of its mobile service fleet, but it's not going to suddenly materialize overnight, even if the software were there. It would be years in the making.
 
Waymo endorses the guidelines by the League of American Bicyclists and Argo.AI to enhance cyclist safety on US roads.

Here are the 6 guidelines:

#1: Cyclists Should Be a Distinct Object Class

Due to the unique behaviors of cyclists that distinguish them from scooter users or pedestrians, a self-driving system (or “SDS”) should designate cyclists as a core object representation within its perception system in order to detect cyclists accurately. By treating cyclists as a distinct class and labeling a diverse set of bicycle imagery, a self-driving system detects cyclists in a variety of positions and orientations, from a variety of viewpoints, and at a variety of speeds. It should also account for the different shapes and sizes of bikes—like recumbent bikes, bicycles with trailers, electric bikes, and unicycles—as well as different types of riders.
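Guideline #1 can be sketched as a minimal perception taxonomy. This is a toy illustration, not Waymo's or anyone's actual code; the class names and the helper function are hypothetical, just to show cyclists being treated as a first-class object type rather than a pedestrian subtype:

```python
from enum import Enum, auto

class ObjectClass(Enum):
    """Hypothetical core object taxonomy for a perception system."""
    VEHICLE = auto()
    PEDESTRIAN = auto()
    SCOOTER_RIDER = auto()
    CYCLIST = auto()  # a distinct class, not folded into PEDESTRIAN

def is_vulnerable_road_user(obj: ObjectClass) -> bool:
    """Classes that should trigger wider margins and lower speeds."""
    return obj in {ObjectClass.PEDESTRIAN, ObjectClass.SCOOTER_RIDER,
                   ObjectClass.CYCLIST}

print(is_vulnerable_road_user(ObjectClass.CYCLIST))  # True
```

With a distinct class, downstream logic (margins, speed caps, prediction models) can key off `CYCLIST` directly instead of guessing from a generic "pedestrian-like" label.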

#2: Typical Cyclist Behavior Should Be Expected

An advanced understanding of potential cyclist patterns of movement is necessary to best predict their intentions and prepare the self-driving vehicle’s actions. A cyclist may lane split, yield at stop signs, walk a bicycle, or make quick, deliberate lateral movements to avoid obstacles on the road, like the sudden swinging open of a car door. A SDS should utilize specialized, cyclist-specific motion forecasting models that account for a variety of cyclist behaviors, so when the self-driving vehicle encounters a cyclist, it generates multiple possible trajectories capturing the potential options of a cyclist’s path thus enabling the SDS to better predict and respond to the cyclist’s actions.
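The multi-trajectory idea in guideline #2 can be sketched as follows; all names, probabilities, and thresholds here are invented for illustration. The planner keeps every plausible hypothesis for the cyclist's path and scales speed by the riskiest one:

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    label: str
    probability: float
    min_gap_m: float  # closest predicted approach to the AV's path

def plan_speed(candidates, base_speed_mps=10.0):
    """Slow in proportion to the riskiest plausible cyclist trajectory.

    Any hypothesis above a small probability floor is treated as possible;
    speed is scaled by the smallest predicted gap among them.
    """
    plausible = [t for t in candidates if t.probability >= 0.05]
    worst_gap = min(t.min_gap_m for t in plausible)
    scale = min(1.0, worst_gap / 3.0)  # full speed only with >= 3 m clearance
    return base_speed_mps * scale

hypotheses = [
    Trajectory("continue straight", 0.70, min_gap_m=2.5),
    Trajectory("swerve around obstacle", 0.25, min_gap_m=1.0),
    Trajectory("sudden U-turn", 0.02, min_gap_m=0.3),  # below floor, ignored
]
print(plan_speed(hypotheses))  # slows because of the swerve hypothesis
```

The point of the sketch: the vehicle does not plan against only the most likely trajectory; the 25%-probability swerve dominates the speed decision.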

#3: Cycling Infrastructure and Local Laws Should Be Mapped

A self-driving system should use high definition 3D maps that incorporate details about cycling infrastructure, like where dedicated bike lanes are located, and include all local and state cycling laws to ensure its self-driving system is compliant. Accounting for bike infrastructure enables the SDS to anticipate cyclists and to maintain a safe distance between the self-driving vehicle and the bike lane. When driving alongside a bike lane, the SDS will consider the higher potential for encountering a cyclist and common cyclist behavior, like merging into traffic to avoid parked cars blocking a bike lane, or treating a red light as a stop sign, which is known as an “Idaho Stop” and is legal in some states.

#4: A SDS Should Drive in a Consistent And Understandable Way

Developers of self-driving technology should strive for the technology to operate in a naturalistic way so that the intentions of autonomous vehicles are clearly understood by other road users. In the presence of nearby cyclists or when passing or driving behind cyclists, a SDS should target conservative and appropriate speeds in accordance with local speed limits, and margins that are equal to or greater than local laws, and only pass a cyclist when it can maintain those margins and speeds for the entire maneuver. In situations where a cyclist encroaches on a self-driving vehicle—for example when lane splitting between cars during stopped traffic—the vehicle should minimize the use of actions which further reduce the margin or risk unsettling the cyclist’s expectations. The SDS should also maintain adequate following distances so that if a cyclist happens to fall, the self-driving vehicle has sufficient opportunity to maneuver or brake. Self-driving vehicles should provide clear indications of intentions, including using turn signals and adjusting vehicle position in lane when they are preparing to pass, merge lanes, or turn.

#5: Prepare for Uncertain Situations and Proactively Slow Down

The reality of the road is that sometimes other road users act unpredictably. A self-driving system should account for uncertainty in cyclists’ intent, direction, and speed—for instance reducing vehicle speed when a cyclist is traveling in the opposite direction of the vehicle in the same lane. When there is uncertainty, the self-driving system should lower the vehicle’s speed and, when possible, increase the margin of distance to create more time and space between the self-driving vehicle and the cyclist and drive in a naturalistic way.
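The wrong-way-cyclist example in guideline #5 can be reduced to a small closing-speed calculation. This is a sketch under made-up numbers, not any production logic; the uncertainty term and the 2-second floor are assumptions:

```python
def safe_speed_mps(gap_m: float, cyclist_speed_mps: float,
                   intent_uncertainty: float,
                   min_time_gap_s: float = 2.0) -> float:
    """Choose an AV speed so the time until paths could meet stays above a floor.

    A wrong-way cyclist in the same lane closes the gap at
    (av_speed + cyclist_speed), so time_to_meet = gap / (av + cyclist).
    intent_uncertainty in [0, 1] inflates the required time gap,
    forcing a lower speed when intent is unclear.
    """
    required_s = min_time_gap_s * (1.0 + intent_uncertainty)
    av_speed = gap_m / required_s - cyclist_speed_mps
    return max(av_speed, 0.0)

# 40 m gap, cyclist approaching head-on at 5 m/s:
print(safe_speed_mps(40.0, 5.0, intent_uncertainty=0.0))  # 15.0 (intent clear)
print(safe_speed_mps(40.0, 5.0, intent_uncertainty=1.0))  # 5.0 (fully uncertain)
```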

#6: Cyclist Scenarios Should Be Tested Continuously

The key to developing safe and robust autonomy software is thorough testing. Developers of self-driving technology should be committed to continuous virtual and physical testing of its self-driving system with a specific focus on cyclist safety in all phases of development.


 
They do, but I haven't seen them do any real material work toward robotaxis. For one thing, robotaxis need more than just software; they need fleet support infrastructure (that's another reason they get geofenced). Even if the software were "perfect" and never got stuck (and in practice it will get stuck all the time), it still needs people nearby to clean the cars and keep things maintained. It's like the scooter-rental industry: the costs are much larger than one would think. Maybe Tesla can do robotaxi maintenance as an offshoot of its mobile service fleet, but it's not going to suddenly materialize overnight, even if the software were there. It would be years in the making.
Not that I expect this to happen, but I thought the initial idea of the "Tesla Network" was to have owners run their private cars as robotaxis. If that's the case, I would presume the owners would do the maintenance.
 
Not that I expect this to happen, but I thought the initial idea of the "Tesla Network" was to have owners run their private cars as robotaxis. If that's the case, I would presume the owners would do the maintenance.
That was the gist I got back then: basically a mix of Uber and Turo. Anyway, it seems very far off at this point. I think they still have a ways to go even to release the door-to-door L2 feature first. The players in China may beat them to the punch even on that (they are starting limited testing).
 
An interesting video about Nvidia's Hyperion platform compared to Tesla's FSD approach. Watch the whole video for some perspective, but here is a timestamp discussing auto-labeling and its use in simulator runs:


It also made me think about Tesla laying off labelers, which had people questioning the approach. Replacing those humans with machine auto-labelers may give them an advantage. I wonder if we'll see Nvidia and others follow suit.
 
It also made me think about Tesla laying off labelers, which had people questioning the approach. Replacing those humans with machine auto-labelers may give them an advantage. I wonder if we'll see Nvidia and others follow suit.
One thing people who aren't really paying attention to this field tend to assume is that no one else is working on what Tesla is doing. Everyone recognizes that you can't possibly use humans to label the massive amount of data needed to train a NN.
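The usual auto-labeling loop can be sketched very simply: a large offline model labels fleet data, high-confidence outputs go straight into the training set, and only the ambiguous remainder goes to human annotators. The function, field names, and threshold below are assumptions for illustration, not any company's actual pipeline:

```python
def auto_label(detections, confidence_threshold=0.9):
    """Split model detections into auto-accepted labels and human-review items."""
    accepted, needs_review = [], []
    for det in detections:
        if det["score"] >= confidence_threshold:
            accepted.append(det)       # goes straight into the training set
        else:
            needs_review.append(det)   # queued for a human annotator
    return accepted, needs_review

dets = [
    {"object": "cyclist", "score": 0.97},
    {"object": "pedestrian", "score": 0.62},
    {"object": "vehicle", "score": 0.99},
]
auto, review = auto_label(dets)
print(len(auto), len(review))  # 2 1
```

Even with a high threshold, the economics work because the fraction of frames needing human eyes shrinks as the offline labeling model improves.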

Accelerating AD/ADAS Development with Auto-Machine Learning + Auto-Labeling at Scale

 
I think this video of Huawei's newly released NCA (point-to-point system) in Shenzhen showcases just how different driving in China is from driving in a lot of other places.

I know FSD Beta, even in the 10.69 version series, has had versions that would panic-brake when it merely sees a pedestrian on a sidewalk.
In China, a system like that wouldn't work, because that's simply not something you can do there. You have to blend in with the traffic. Another level of agility is required to drive in China's downtown urban areas, as you are surrounded by actors coming from every direction.

 
Making your own maps in China is a crime, so the only option for foreign ADAS companies like Tesla is to license them from state-run companies. I bet Tesla's multi-trip reconstruction ground-truth stack would be illegal over there, unless they can score some special exemption/bribe/whatever.
 