
Waymo

I doubt Waymo will expand in Phoenix any time soon. Their original plan was to scale in easy, low-regulation areas like the Phoenix suburbs while saving more difficult urban and bad-weather areas for later. That's why they hired Krafcik in 2015, designed Gen 4 sensors for volume production, ordered 82k cars in 2018 and did fleet maintenance deals with Avis and AutoNation.

It wasn't a terrible plan, but their Robotaxi business model failed in Phoenix. Startups pivot quickly when this happens, but the non-entrepreneurial Krafcik wasn't wired that way. Waymo has now moved on to slam-dunk business models like urban Robotaxi and long-distance trucking, even though those present tougher technical and regulatory hurdles. I think Phoenix could have worked for them with the right leadership and focus. But that's water under the bridge now.
 
CVPR 2021 is starting this weekend.

Waymo will be hosting and presenting at the conference:



Drago Anguelov will speak on Long Term Prediction in Complex Interactive Environments.
 
Very impressive lidar. Can read signs.

This is at the 3:13:20 mark in the video.

Yep. I shared this earlier in the thread from a different Anguelov presentation. The 5th gen lidar is a huge improvement over the 4th gen lidar; the resolution is much, much higher, and it appears to be on par with camera vision. It is one of the reasons I am so looking forward to seeing videos of the Waymo 5th Gen I-Pace in SF. With the better hardware and software, I suspect we will see better autonomous driving than what we see in Chandler.
 
Are the fuel stops for the semi mapped out? With an autonomous semi, who is responsible for the safety of the load, i.e. making sure it is tied down and secured? Normally the driver is responsible for ensuring that the load doesn't come loose. Will State Police departments have control over the semis at weigh stations?
 


@Bladerskb I grabbed this screenshot from the Waymo presentation at CVPR.

It is impressive that Waymo has a method that is 36-40% better than the current UFlow. It shows that Waymo has state-of-the-art ML.

But I am trying to understand more about this subject. Would you be able to explain this slide in layman's terms please?

I looked up "optical flow" and I found this definition:
"Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. Optical flow can also be defined as the distribution of apparent velocities of movement of brightness pattern in an image."

So my rudimentary understanding is that Waymo is doing unsupervised machine learning with camera vision, where the NN looks at several frames and can auto-label an object based on its motion. Do I have that right?
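
To check that I have the right mental model, here is a toy sketch of how I picture the training signal (purely my own illustration in NumPy, nothing to do with Waymo's actual code): a network predicts a flow field, the flow is used to warp one frame onto the other, and the photometric difference between the warped frame and the real frame is the loss, so no human labels are needed.

Code:
import numpy as np

H, W = 4, 6
frame1 = np.random.rand(H, W)               # grayscale frame at time t
frame2 = np.roll(frame1, shift=1, axis=1)   # frame at t+1: everything moved 1 px right

flow = np.zeros((H, W, 2))                  # hypothetical "predicted" flow (dx, dy) per pixel
flow[..., 0] = 1.0                          # predict "1 pixel to the right" everywhere

# Warp frame 2 back toward frame 1 using the predicted flow (nearest-neighbor lookup)
ys, xs = np.mgrid[0:H, 0:W]
src_x = np.clip((xs + flow[..., 0]).astype(int), 0, W - 1)
src_y = np.clip((ys + flow[..., 1]).astype(int), 0, H - 1)
warped = frame2[src_y, src_x]

# Photometric loss: if the flow is right, the warped frame matches frame 1
loss = np.mean((warped[:, :-1] - frame1[:, :-1]) ** 2)  # skip the boundary column
print(loss)  # 0.0 here, because the toy flow matches the toy motion -- no labels used

If that is the gist, then the "auto labeling" is really just photometric consistency doing the supervising.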

Thanks.
 

Thanks for sharing.

IMO, Simulation City is a total game changer.

Short version: Simulation City will allow Waymo to train their FSD much faster than with real-world driving alone and will allow them to deploy faster in new areas.

Some key quotes:

"The company has also been using Simulation City to run tests in new operating domains to better prepare its vehicles to launch in new cities. "

= Waymo can use Simulation City to test their FSD before deploying to a new city, which will speed up expansion to new cities.

"Simulation is a critical piece of the puzzle for autonomous vehicles. These programs allow Waymo’s engineers to test — at scale — common driving scenarios and safety-critical edge cases, the learnings from which it then feeds into its real-world fleet. The key word is “scale” because these simulators allow Waymo to far exceed the distances its vehicles travel on public roads. "

= Simulation City will allow Waymo to solve many more edge cases and scenarios faster than they could with real world driving.

"In Simulation City, those real-world miles are now informing the miles driven in simulation, meaning the company has more confidence in the validity and reliability of the virtual situations it constructs for its vehicles. Once that relationship is established in an increasingly strong way, we need fewer additional miles driven in the real world to basically say what we learned in simulation is correct,” Frankel said."

= Once Simulation City is proven to be as accurate as real world driving, Waymo won't need as much real world driving to achieve the same results.

"Simulation City is also computationally more advanced than Waymo’s previous virtual world testing in the level of detail it can create. For example, Waymo’s engineers can simulate something as small as raindrops or as complex as late afternoon solar glare. In the past, these situations have been known to confuse an autonomous vehicle’s perception hardware, which can make it difficult to read critical signage like traffic lights."

= Simulation is incredibly realistic.
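
To make the "scale" point concrete, here is a tiny toy sketch of why simulation helps (my own illustration only, not how Waymo's Simulation City actually works): you can sweep combinations of conditions like rain intensity and sun-glare angle over thousands of cheap simulated trials and immediately flag the ones where a stand-in perception function fails, instead of waiting to encounter those conditions on real roads.

Code:
import itertools
import random

def perception_detects_traffic_light(rain_mm_per_hr, glare_deg):
    """Stand-in for a perception stack; degrades under heavy rain or low-angle glare."""
    failure_chance = min(1.0, rain_mm_per_hr / 50.0) + (0.5 if glare_deg < 15 else 0.0)
    return random.random() > failure_chance

random.seed(0)
rain_levels = [0, 5, 10, 20, 40]     # mm/hr
glare_angles = [5, 15, 30, 60, 90]   # degrees above the horizon

flagged = []
for rain, glare in itertools.product(rain_levels, glare_angles):
    trials = 200                     # cheap to repeat in simulation, expensive on real roads
    misses = sum(not perception_detects_traffic_light(rain, glare) for _ in range(trials))
    if misses / trials > 0.10:       # flag any scenario with a >10% miss rate
        flagged.append((rain, glare, misses / trials))

for rain, glare, rate in flagged:
    print(f"rain={rain} mm/hr, glare={glare} deg -> miss rate {rate:.0%}")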
 

The paper explains optical flow and much more about all of your questions. The trick to reading these white papers is to just skip over the equations, read the overviews, and look at the pictures and the impressive results: https://arxiv.org/pdf/2105.07014.pdf

Optical flow describes a dense pixel-wise correspondence between two images, specifying for each pixel in the first image, where that pixel is in the second image. The resulting vector field of relative pixel locations represents apparent motion or “flow” between the two images. Estimating this flow field is a fundamental problem in computer vision and any advances in flow estimation benefit many downstream tasks such as visual odometry, multiview depth estimation, and video object tracking.
After having covered the foundation that our method builds on, we will now explain our three major improvements: 1) enabling the RAFT architecture [29] to work with unsupervised learning, 2) performing full-image warping while training on image crops, and 3) introducing a new method for multi-frame self-supervision.
...
RAFT works by first generating convolutional features for the two input images and then compiling a 4D cost volume C ∈ ℝ^(H×W×H×W) that contains feature-similarities for all pixel pairs between both images. This cost volume is then repeatedly queried and fed into a recurrent network that iteratively builds and refines a flow field prediction. The only architectural modification we make to RAFT is to replace batch normalization with instance normalization [31] to enable training with very small batch sizes. Reducing the batch size was necessary to fit the model and the more involved unsupervised training steps into memory. But more importantly, we found that leveraging RAFT's potential for unsupervised learning requires key modifications to the unsupervised learning method, which we will discuss next.
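
To make the cost-volume idea concrete, here is a toy sketch (my own illustration, not the paper's code): every pixel in image 1 and image 2 gets a feature vector, and the cost volume stores the dot-product similarity between every pixel pair, giving a tensor of shape H×W×H×W that the recurrent refinement network can then query.

Code:
import numpy as np

H, W, D = 4, 6, 8                                 # tiny image size and feature dimension
feat1 = np.random.rand(H, W, D)                   # convolutional features of image 1
feat2 = np.random.rand(H, W, D)                   # convolutional features of image 2

# cost[i, j, k, l] = similarity between pixel (i, j) in image 1 and pixel (k, l) in image 2
cost = np.einsum('ijd,kld->ijkl', feat1, feat2)
print(cost.shape)                                 # (4, 6, 4, 6), i.e. H x W x H x W

# For one pixel in image 1, the best-matching location in image 2 is the argmax of its slice;
# roughly speaking, this similarity map is what the recurrent network keeps querying and refining.
i, j = 2, 3
best = np.unravel_index(np.argmax(cost[i, j]), (H, W))
print(best)                                       # most similar (row, col) in image 2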