Profound progress towards FSD

So, the delivery company will have a say in what routes an L4 system covers?
What happens if the production facility is 100+ miles from the main distribution center, and only one highway along the route is mapped and geofenced?

I think you are making a lot of assumptions and stating them as fact.
What is practical for a particular delivery driver or route is not guaranteed to be the focus of the self-driving solution provider.

I am not making assumptions; I've looked into this. Yes, of course the delivery company will have a say in what routes the L4 system covers. The delivery company is the client that pays the self-driving solution provider for an L4 system that works for what they need, so the provider customizes the system for that client. That is why Waymo has partnered with trucking companies. It will be Waymo's job to provide the delivery company with an L4 system that meets their needs and works on the routes that they need. So Waymo would make sure that everything that needs to be mapped is mapped and that the FSD is reliable in whatever geofenced area the delivery company needs.
 
I've looked into this.
Tell me, what other gems have you gleaned from your looking?
It will be Waymo's job to provide the delivery company with an L4 system that meets their needs and works on the routes that they need. So Waymo would make sure that everything that needs to be mapped is mapped and that the FSD is reliable in whatever geofenced area the delivery company needs.
In a couple of sentences you just proved why Waymo is going to fail. THAT is not a business plan; that is a maintenance/administrative nightmare.
Oh, I am sure they will try to pass the costs on to the "client," but the costs will soon ensure that there are no clients for such nonsense.
 
In a couple of sentences you just proved why Waymo is going to fail. THAT is not a business plan; that is a maintenance/administrative nightmare. Oh, I am sure they will try to pass the costs on to the "client," but the costs will soon ensure that there are no clients for such nonsense.

Wrong. It's a perfectly good business model. Companies build custom solutions for clients all the time.

And what do you propose as a better business model, since you are apparently such an all-knowing genius when it comes to all things full self-driving?

I'll grab the popcorn. This is going to be good.

 
I'll grab the popcorn.
Don't choke!
And what do you propose as a better business model
I do not have a business model, as I am not an FSD supplier.

You are taking this very personally again, as if you are directly invested in this field. Don't worry, lidar will die out soon enough for the consumer car business; you can then pivot to whatever new shiny thing the "experts" tell you is a must-have.

But then again, Tesla has been selling EVs for over a decade and we still barely see anything compelling from others, so there is hope yet that lidar will linger for another decade.
 
Is it fair to compare miles driven on AP between Tesla and others? Tesla AP can be used virtually anywhere there are road lines, whereas others are restricted to specific places.
Right now it could be argued either way, but that's a tough one going forward.
As the feature set for Tesla FSD expands, and since, like you said, it is available on any car whose owner paid for FSD, the comparison would be as silly as comparing a personal car to a train service.
 
Is cut-in detection and handling active? At Autonomy Day, Karpathy said that it was in shadow mode.
From my own anecdotal experience, it seems like it's still in shadow mode.

It triggers only under some pretty specific conditions; the normal case of cars merging while going the same speed as you doesn't really trigger it, and that case is handled using more conventional methods, resulting in lag.
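
For anyone wondering what "shadow mode" actually means in practice, here's a minimal sketch of the idea (all names, thresholds, and numbers here are my own invention, not Tesla's): the network's prediction is computed and logged on every frame so disagreements can be mined later, but the car only ever acts on the conventional heuristic.

```python
# Minimal sketch of shadow-mode gating for cut-in detection.
# Illustrative only: the model, thresholds, and signals are invented.
import random

def nn_cut_in_prob(frame_id):
    # Stand-in for the real network: probability the neighbor cuts in soon.
    return random.random()

def heuristic_cut_in(lateral_offset_m):
    # Conventional trigger: react only once the other car has drifted
    # within half a lane width (~1.8 m) of our lane center.
    return lateral_offset_m < 1.8

shadow_log = []
for frame_id in range(100):
    lateral_offset = random.uniform(0.0, 3.6)      # fake perception output
    nn_vote = nn_cut_in_prob(frame_id) > 0.5       # shadow prediction
    rule_vote = heuristic_cut_in(lateral_offset)   # what actually drives
    shadow_log.append((frame_id, nn_vote, rule_vote))
    action = "brake" if rule_vote else "hold"      # NN never touches control

disagreements = [e for e in shadow_log if e[1] != e[2]]
print(f"{len(disagreements)}/100 frames where NN and heuristic disagree")
```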
 
Is cut-in detection and handling active? At Autonomy Day, Karpathy said that it was in shadow mode.

Seems to be active in the UK. Did a 200-mile trip yesterday on 28.6 and had cut-in detection trigger once.

Can't wait for the rewrite.
 
I think we're all going to be surprised when Tesla comes out with the AP rewrite in full. A new patent has been made public, titled "Predicting three-dimensional features for autonomous driving." It became public on August 6 but was filed on February 1, 2019: Tesla Has Published A Patent 'Predicting Three-Dimensional Features For Autonomous Driving'

I imagine this means Tesla may have been working on writing and training the rewrite since before Autonomy Day.
 
I think we're all going to be surprised when Tesla comes out with the AP rewrite in full.

I certainly hope we are pleasantly surprised.
 
I think we're all going to be surprised when Tesla comes out with the AP rewrite in full.
I've been waiting for this and agree with your assessment.
But I have a feeling the usual suspects will continue to pontificate.
Oh well! I cannot wait to get the rewrite!
 
But I have a feeling the usual suspects will continue to pontificate.

I hope the rewrite brings a lot of progress. I paid for FSD, so I certainly have every reason to hope for good things. Having said that, I also try to be realistic. We've seen missed timelines before: I got really excited and carried away during Autonomy Day and ended up really disappointed. So I am going to wait and see, and when the rewrite is released, I will report honestly what I see in my car.
 
Managed to find the PDF; I don't know why articles these days don't link to the original source anymore: http://www.freepatentsonline.com/20200249685.pdf

As we've been hoping, the keyword I'm happy to see in the abstract is "trajectory."

"The image data is used as a basis of an input to train a machine learning model trained to predict a three-dimensional trajectory of a machine learning feature. The three-dimensional trajectory of the machine learning feature is provided for automatically controlling the vehicle."

And interestingly enough, even though it's not mentioned in the abstract, a lot of the text of the patent relates to streamlining the labeling of training data. It sounds like they're manually labeling one frame and then letting the network label the rest of the video.

"In some embodiments, a three-dimensional representation of a feature, such as a lane line, is created from the group of time series elements that corresponds to the ground truth. This ground truth is then associated with a subset of the time series elements, such as a single image frame of the group of captured image data. For example, the first image of a group of images is associated with the ground truth for a lane line represented in three-dimensional space. Although the ground truth is determined based on the group of images, the selected first frame and the ground truth are used to create a training data. As an example, training data is created for predicting a three-dimensional representation of a vehicle lane using only a single image. In some embodiments, any element or group of elements of a group of time series elements is associated with the ground truth and used to create training data. For example, the ground truth may be applied to an entire video sequence for creating training data."
 
As we've been hoping, the keyword I'm happy to see in the abstract is "trajectory."

It seems the patent relates to using machine learning to train a NN to extrapolate 3D paths from sensor data. Put simply, the "4D rewrite" will be able to extrapolate a lane or the path of a vehicle in both space and time. I expect the rewrite to solve the problem of braking for cars that pass in front of us, since a 4D stack will know that the car's path is not a threat. It should also help with things like unprotected left turns, as the car will be able to anticipate the paths of other vehicles at the intersection. I think Elon also mentioned that the rewrite will allow Smart Summon to work on hills, since it will understand the parking lot or driveway in 3D. Understanding trajectories in 4D is essential to FSD, so this rewrite is really key if Tesla wants to do FSD the right way.
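
Just to make the "path of the car is not a threat" point concrete, here's a toy example of the kind of reasoning a time-aware stack can do that single-frame logic can't. This is entirely my own sketch, not anything from the patent: fit the other car's recent track, extrapolate it forward, and only treat it as a threat if its predicted position still overlaps our lane when we actually get there.

```python
# Toy 4D (space + time) reasoning: extrapolate another car's trajectory and
# brake only if it will still occupy our lane when we arrive. Entirely my
# own sketch; all numbers and names are made up.
import numpy as np

def extrapolate(track, horizon_s, dt=0.1):
    # track: (T, 3) array of [t, x, y] observations of the other car.
    t, x, y = track[:, 0], track[:, 1], track[:, 2]
    vx = np.polyfit(t, x, 1)[0]                  # constant-velocity fit
    vy = np.polyfit(t, y, 1)[0]
    future_t = np.arange(t[-1] + dt, t[-1] + horizon_s, dt)
    return np.stack([future_t,
                     x[-1] + vx * (future_t - t[-1]),
                     y[-1] + vy * (future_t - t[-1])], axis=1)

# A car crossing our lane (y = 0) 40 m ahead: frame-by-frame logic sees an
# object in our path right now and brakes; trajectory logic sees it leaving.
track = np.array([[0.0, 40.0, -6.0],             # [t (s), x (m), y (m)]
                  [0.5, 40.0, -3.0],
                  [1.0, 40.0,  0.0]])
future = extrapolate(track, horizon_s=3.0)

ego_speed = 15.0                                 # m/s; ego at x = 0 at t = 1.0 s
t_reach = track[-1, 0] + 40.0 / ego_speed        # when we reach x = 40 m
at_arrival = future[np.isclose(future[:, 0], t_reach, atol=0.05)]
threat = len(at_arrival) > 0 and abs(at_arrival[0, 2]) < 1.8
print("brake" if threat else "no threat, keep going")
```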

Interestingly, on page 2 the patent mentions a lot of possible sensors, including lidar. Tesla seems to be covering all their bases with this patent in case they do add more sensors in the future.

 
Not quite how I see it. They are covering their bases in the patent so that the patent cannot be bypassed just because someone is using a different type of sensor.

Yeah, that is what I am getting at. The patent covers all the bases so that if Tesla adds a new sensor, the patent still applies.

It is always smart to make patents as broad as possible so that the competition can't design around them.
 
Given the timing of this patent, I wonder whether the test drives given to investors at Autonomy Day were in fact running early iterations of the AP rewrite, and Tesla has been improving it ever since. We still haven't seen UI graphics similar to those shown on the screens during that demo (e.g., showing non-drivable space on the UI).