Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Autonomous Car Progress

Yeah, maybe I am too optimistic. We shall see. I just think that with the experience Waymo has gained from Chandler and the improvements they've made to the FSD since then, they should be able to cut the 3.5 years down by a lot. How long it takes to go public with no NDAs will be a good indicator, IMO, of whether Waymo has really sped up its testing process.
With their previous experience I would hope they could cut down the time significantly, but on the flip side, the SF driving environment is a lot tougher than Chandler's, so that balances things out. The other question is whether Alphabet still has the same patience it had 4 years ago. Alphabet/Google has a propensity for suddenly orphaning projects that don't see a return, so there may be more pressure on the Waymo team to deliver something quickly.

I will say I see a ton of them here in SF recently (way more than Cruise, which previously was quite common too).

And as a side note, I have taken a close in-person look at both the gen 4 (when they first started, there were more of these) and gen 5 (now it's mostly these). There was mention a while back that the gen 4 cars already had 20-something cameras (instead of just the eight in the dome that I was aware of, which are documented in the paper below as 1920x1280 cameras inside the dome), but I don't see any locations below the dome level where they could be installed. Other than the standard rear view camera that came with the Pacifica, the only sensors visible below dome level are the honeycomb lidar sensors, plus sections of housing that likely house radar sensors. There are no visible windows/cutouts below dome level that could house a camera, and if that's true, they basically had no vision coverage of the perimeter of the vehicle (other than the standard rear view camera).

https://arxiv.org/pdf/1912.04838.pdf

In gen 5, however, it's easy to see the cameras. The cameras on top are no longer in a small dome and are now exposed in a much larger circular housing (it looks like 8 pairs of cameras, so 16 total), there are 3 additional cameras visible in the box underneath the dome, and it's plain to see the 8 cameras around the perimeter (3 on each front corner and 1 on each rear corner), plus 1 on the grill and 1 on the trunk. This accounts for all 29 cameras that are reported for gen 5.
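The gen 5 camera count above can be sanity-checked with a quick tally; the grouping below is my own reading of the in-person observations, so treat the counts as estimates rather than Waymo specs.

```python
# Tally of visible cameras on a gen 5 Waymo vehicle, grouped as described
# above. Counts come from in-person observation, so they are estimates.
cameras = {
    "top ring (8 pairs)": 8 * 2,     # 16 cameras in the circular housing
    "box under the dome": 3,
    "front corners (3 each)": 2 * 3, # 6
    "rear corners (1 each)": 2 * 1,  # 2
    "grill": 1,
    "trunk": 1,
}

total = sum(cameras.values())
print(total)  # 29, matching the camera count reported for gen 5
```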
 
  • Like
Reactions: diplomat33
Some of us called it. Solving for the vision-only stack unblocked Tesla so they can move forward rapidly.

- Tesla goes vision-only on high-volume M3/MY, drops radar
- Tesla unveils their new $$$ training supercomputer, making rapid retraining of massive NNs possible
- Tesla releases FSD v9.x firmwares in quick succession, testers see significant improvement
- Tesla unveils new chips for their $$$$ home-grown Dojo supercomputer
- FSD v10.x imminent

So everyone saw Musk's tweet about the public-release FSD Beta button coming in 4 weeks (I know... I know). Version 10 is coming next; no 9.3.
 
  • Funny
Reactions: Doggydogworld
Some of us called it. Solving for the vision-only stack unblocked Tesla so they can move forward rapidly.

- Tesla goes vision-only on high-volume M3/MY, drops radar
- Tesla unveils their new $$$ training supercomputer, making rapid retraining of massive NNs possible
- Tesla releases FSD v9.x firmwares in quick succession, testers see significant improvement
- Tesla unveils new chips for their $$$$ home-grown Dojo supercomputer
- FSD v10.x imminent
Dojo supercomputer will not be operational until next year at the earliest.
 
With their previous experience I would hope they would be able to cut down the time significantly, but on the flip side, the SF driving environment is a lot tougher than Chandler, so that balances things out.

And as a side note, I have taken a close in person look at both the gen 4 (when they first started there were were more of these) and gen 5 (now it's mostly these). There was mention by a while back that the gen 4 cars already had 20-something cameras (instead of just the eight in the dome that I was aware of, which are documented in the paper below as 1920x1280 cameras inside the dome), but I don't see any locations below the dome level where they could be installed. Other than the standard rear view camera that came with the Pacifica, the only sensors below dome level visible are the honeycomb lidar sensors, plus sections of housing that likely houses radar sensors. There are no visible windows/cutouts below dome level that could house a camera and if that's true they basically had no vision coverage of the perimeter of the vehicle (other than standard rear view camera).

https://arxiv.org/pdf/1912.04838.pdf
According to the paper, those specs are after down-sampling, not the original raw image spec, and are meant for the Open Dataset shared with the ML community. They are not representative of the raw resolution of their cameras or of the image resolution their ML system processes.

"The image sizes reflect the results of both cropping and downsampling the original sensor data."

Also from Waymo Communication Manager:
"But McGoldrick said its trajectory here to fully autonomous paid rides will be quicker than that in Phoenix."
After riding in several driverless cars, how does Waymo's latest compare?
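The crop-then-downsample pipeline the paper quote describes can be sketched in a few lines. The "raw" resolution below is purely hypothetical (the paper only gives the post-processing 1920x1280 size), but it shows how a larger sensor frame would be reduced to the published dataset dimensions.

```python
# Toy illustration of the crop-then-downsample processing the Waymo Open
# Dataset paper describes. The raw resolution here is a made-up example;
# the paper only states the final 1920x1280 image size.

def crop(img, top, left, height, width):
    """Return the height x width window of img starting at (top, left)."""
    return [row[left:left + width] for row in img[top:top + height]]

def downsample(img, factor):
    """Keep every factor-th pixel in both dimensions (nearest-neighbor)."""
    return [row[::factor] for row in img[::factor]]

# Hypothetical 3840x2560 raw frame, stored as rows of pixel values.
raw = [[0] * 3840 for _ in range(2560)]

cropped = crop(raw, top=0, left=0, height=2560, width=3840)
small = downsample(cropped, factor=2)
print(len(small[0]), len(small))  # 1920 1280 -- the published dataset size
```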
 
  • Like
Reactions: Microterf
That's wildly optimistic, IMHO. Chandler public beta started in April 2017. Driverless (mostly) service available to everyone started 3.5 years later, in October 2020.

Waymo One (or Waymo Two? ha) service in SF will be much harder to open up. SF has many more riders. They'll need hundreds of active cars to achieve acceptable wait times, instead of "5-10" as in Chandler. Make that thousands of cars if they include the NW quadrant. And orders of magnitude more cars means orders of magnitude bigger support infrastructure.

They could cherry-pick a couple of high-traffic routes and build out from there. But that's not their style, and it's bad optics to take riders away from the mass transit that typically serves those routes. They could also slipstream into Uber's or Lyft's fleet, eliminating the logistical burdens and letting them focus 100% on improving the Waymo Driver. There's a lot of bad blood with Uber, though. Lyft might work; they recently sold their own self-driving unit to Toyota, but they still have ties with GM, which owns Cruise. And though Waymo announced some kind of Lyft trial in Chandler, it doesn't seem anything ever actually happened.

I would say "they'll figure it out as they go along". But their track record argues against that.
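The fleet-size argument above (many more riders means hundreds of cars to keep wait times acceptable) can be roughed out with Little's law. Every number below is an illustrative assumption, not a Waymo figure.

```python
# Back-of-envelope fleet sizing via Little's law:
# concurrently busy cars = ride arrival rate x average ride duration.
# All demand numbers below are illustrative assumptions, not Waymo data.

def cars_needed(rides_per_hour, avg_ride_minutes, utilization=0.7):
    """Cars required to serve a demand rate at a target utilization.

    Little's law gives the average number of concurrently busy cars;
    dividing by a utilization target leaves headroom so wait times
    stay acceptable during demand spikes.
    """
    busy_cars = rides_per_hour * (avg_ride_minutes / 60.0)
    return int(busy_cars / utilization + 0.999)  # round up

# A Chandler-like trickle of demand vs. a hypothetical SF demand level.
print(cars_needed(rides_per_hour=10, avg_ride_minutes=20))   # 5
print(cars_needed(rides_per_hour=600, avg_ride_minutes=20))  # 286
```

Even with generous assumptions, an order-of-magnitude jump in demand forces an order-of-magnitude jump in fleet size, which is the crux of the scaling argument.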
I think 12-15 months is right on the money.

Especially with Waymo's Comm Manager comments:
"But McGoldrick said its trajectory here to fully autonomous paid rides will be quicker than that in Phoenix."
 
  • Like
Reactions: diplomat33
I think 12-15 months is right on the money.

Especially with Waymo's Comm Manager comments:
"But McGoldrick said its trajectory here to fully autonomous paid rides will be quicker than that in Phoenix."
I say 18-24 months. At least for an open service in something close to the current area. They could do a very limited paid service sooner, for bragging rights.

On the other hand, they might open the whole area and use their inept marketing to avoid scaling problems. Morning Brew asked a couple dozen Chandler residents about Waymo. All had seen the vans and knew they were self-driving. Not one was aware they could sign up and hail rides!
 
Toyota Motor Corporation has announced an immediate halt to all of its e-Palette self-driving transportation pods operating at the Tokyo Paralympic Games. The decision comes on the heels of an accident that took place in the Paralympic Village yesterday, when a Toyota e-Palette collided with a visually impaired athlete, injuring them.
According to Toyota, the self-driving vehicle had stopped at a T-junction and was about to make a turn under manual control, with the operator using a joystick. The vehicle, traveling at around 1-2 km/h, then collided with the athlete.


This is a terrible incident. But if the vehicle was in manual mode, how is it the fault of the autonomous driving?
 
Mobileye, an Intel company, is expanding its global influence in the advanced driver-assistance systems (ADAS) industry with a new partnership with ZEEKR, the global premium electric mobility technology brand from Geely Holding Group. Together, Mobileye and ZEEKR will introduce the world’s most highly advanced safety technology available in the market for advanced, intelligent vehicles.

As part of the long-term agreement, Mobileye will work with ZEEKR to create advanced ADAS systems with increasingly sophisticated capabilities for a variety of ZEEKR models. The collaboration will begin with the launch of ZEEKR vehicles in the fourth quarter of 2021 featuring Mobileye® SuperVision™, a full-stack ADAS solution powered by two EyeQ5® system-on-chip (SoC) devices processing data from 11 cameras. The two companies also plan to collaborate further on a next-generation system powered by six EyeQ5 SoCs to deliver a new standard for a comprehensive ADAS experience. It is expected to make its global debut as soon as 2023.

“ZEEKR’s powerful vision for the future of driving make them an ideal partner to Mobileye,” said Prof. Amnon Shashua, co-founder and CEO of Mobileye and senior vice president of Intel. “By working closely together, we have an exciting opportunity to reach a new level of excellence in ADAS, bringing to market what will be the industry’s most state-of-the-art, full-feature system.”

The collaboration follows an equity investment in ZEEKR by Intel Capital.

 

This is a terrible incident. But if the vehicle was in manual mode, how is it the fault of the autonomous driving?

I see it more as a cautionary move to investigate why it happened, and also a PR move. You and I will see a distinction between auto control and manual control, but the average person in that village likely won't. All they'll know is that one of them injured someone.

That being said, I do wonder what safety elements are in place under manual control. Obviously it's intended for very crowded areas with lots of unpredictable pedestrians, so I'd keep some active safety features enabled even in manual mode.

It's possible the driver overrode a safety feature because they didn't see what the safety feature saw. So maybe they need time to retrain the safety drivers.
 
  • Helpful
Reactions: diplomat33
I see it more as a cautionary move to investigate why it happened, and also a PR move. You and I will see a distinction between auto control and manual control, but the average person in that village likely won't. All they'll know is that one of them injured someone.

That being said, I do wonder what safety elements are in place under manual control. Obviously it's intended for very crowded areas with lots of unpredictable pedestrians, so I'd keep some active safety features enabled even in manual mode.

It's possible the driver overrode a safety feature because they didn't see what the safety feature saw. So maybe they need time to retrain the safety drivers.

It is looking like human error:

 
  • Informative
Reactions: S4WRXTTCS
Game over for Tesla, IMO.
That 2023 car will be a replica of their vision-and-lidar car, as it has 6x EyeQ5, and with funding and a partnership at this level,
it means that when ZEEKR eventually comes to the EU and US, it will definitely come with SuperVision.
I see EyeQ6 is scheduled for 2023.
 
  • Like
Reactions: Terminator857

This is a terrible incident. But if the vehicle was in manual mode, how is it the fault of the autonomous driving?
Perhaps not the autonomous driving directly, but it could still be the fault of the vehicle design if the interface was poor for the handoff or for manual driving.
 
  • Helpful
Reactions: diplomat33
According to the paper those specs are after down-sampling and not the original raw image spec and are meant for the Open Dataset to share with the ML community. Not representative of the raw resolution of their cameras or the image resolution their ML system processes.

"The image sizes reflect the results of both cropping and downsampling the original sensor data."

Also from Waymo Communication Manager:
"But McGoldrick said its trajectory here to fully autonomous paid rides will be quicker than that in Phoenix."
After riding in several driverless cars, how does Waymo's latest compare?
Yeah, it does look like the resolution might be different when raw, but the point was more about the location and number of cameras. From the paper (which only provides access to the 5 cameras on the front and sides, not the 3 back cameras), extrapolating from the FOV of each camera, it seems like there are 8 cameras in the dome. But anyway, regardless of how the dome cameras are laid out, I didn't see any cameras on the body of the gen 4 cars, and I'm not sure how they would fit 20+ cameras on the car (unless they were all in the dome). The gen 5 is completely different in this regard.
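The FOV extrapolation above can be made concrete: given each dome camera's horizontal FOV, you can estimate how many are needed to ring a full 360 degrees. The per-camera FOV and overlap below are assumed values for illustration; the paper doesn't state them.

```python
import math

# Rough check of the dome-camera extrapolation above: if each camera
# covers roughly 50 degrees horizontally (an assumed figure), how many
# are needed for full 360-degree coverage with some overlap?

def cameras_for_full_coverage(fov_degrees, overlap_degrees=5.0):
    """Cameras needed to cover 360 degrees, with overlap between FOVs."""
    effective = fov_degrees - overlap_degrees  # each camera's net new coverage
    return math.ceil(360.0 / effective)

print(cameras_for_full_coverage(50.0))  # 8, consistent with 8 dome cameras
```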
 
  • Like
Reactions: Bladerskb
Waymo is dying. They've been attacking fsd with a small pickaxe for years. The SF service is their final push in hopes that they can believe in their approach again. You all should be wondering why they've yet to expand their 5-10 cars and/or geofence in Phoenix and are now trying to market SF with a sad promo video showing no actual fsd performance.

In a year or two, we can all remember Waymo by this short video representing its meaningless drive to nowhere, doing nothing special, and offering not much of anything:

 
Waymo is dying. They've been attacking fsd with a small pickaxe for years. The SF service is their final push in hopes that they can believe in their approach again. You all should be wondering why they've yet to expand their 5-10 cars and/or geofence in Phoenix and are now trying to market SF with a sad promo video showing no actual fsd performance.

In a year or two, we can all remember Waymo by this short video representing its meaningless drive to nowhere, doing nothing special, and offering not much of anything:

That seems like a more moderate fog for SF (you can still see the bus fairly far in the distance). I've been in fog where visibility is more like a block.
Something more like this:

The visualization is not done very well in this case. The bounding boxes of the cars are too thin, and it's hard to map them onto the objects in the scene. It would be much better if they did an overlay on the video input. They should also have a legend showing what the colors mean if it's meant for the public. For example, what does the yellow cloud around the car mean? Is that what their system has rejected as just fog?
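The overlay suggestion above amounts to drawing each detection's bounding box directly onto the camera frame rather than into a separate abstract view. Here is a minimal sketch using a plain pixel buffer; a real implementation would use OpenCV or similar on actual video frames.

```python
# Minimal sketch of a bounding-box overlay: draw a detection's rectangle
# directly onto a frame buffer. Pure Python; frames are lists of pixel rows.

def draw_box(frame, top, left, bottom, right, color):
    """Draw a 1-pixel rectangle outline onto a 2D frame buffer in place."""
    for x in range(left, right + 1):
        frame[top][x] = color     # top edge
        frame[bottom][x] = color  # bottom edge
    for y in range(top, bottom + 1):
        frame[y][left] = color    # left edge
        frame[y][right] = color   # right edge

# Tiny 8x12 "frame": 0 = background, 1 = box color.
frame = [[0] * 12 for _ in range(8)]
draw_box(frame, top=1, left=2, bottom=5, right=9, color=1)
for row in frame:
    print("".join("#" if p else "." for p in row))
```

A legend would then just map each color value to a label (vehicle, pedestrian, rejected-as-fog, etc.) so viewers can read the scene.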