There’s been a lot of discussion and speculation about the progress of Tesla Autopilot 2.
As of August 2017, it still lags far behind Autopilot 1. However, it has been making steady, albeit frustratingly slow, progress.
So where do you think it will be by end of 2017? How about end of next year?
I want to link you guys to this: Patents by Assignee Mobileye Vision Technologies Ltd. - Justia Patents Search
It’s a list of Mobileye patents, many of which were granted this summer. As an AI developer, I have wondered how much of the slow progress has been due to these patents and the need to find a different, possibly more difficult, way to implement vision-based systems. How hard, for instance, can it be to do pattern recognition for different types of vehicles in order to display them? I’d say the answer depends on the method used, and on whether that method has been patented.
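To illustrate the point that the difficulty lives in the method, not the task: here’s a deliberately naive sketch of vehicle-type classification. The feature names and centroid values are entirely hypothetical stand-ins for whatever a real vision pipeline would extract; the idea is just that a simple method (nearest-centroid) and a sophisticated one (a trained neural net, or a patented approach) solve the "same" problem very differently.

```python
import math

# Hypothetical centroids in a toy feature space:
# (bounding-box aspect ratio, height relative to a sedan).
# Real systems would learn features, not hand-pick them.
CENTROIDS = {
    "car":        (1.8, 1.0),
    "suv":        (1.6, 1.3),
    "truck":      (2.5, 2.0),
    "motorcycle": (0.6, 0.9),
}

def classify(aspect_ratio: float, relative_height: float) -> str:
    """Nearest-centroid classification: return the vehicle type
    whose centroid is closest to the observed features."""
    def dist(label):
        ar, h = CENTROIDS[label]
        return math.hypot(aspect_ratio - ar, relative_height - h)
    return min(CENTROIDS, key=dist)

print(classify(2.4, 1.9))   # a wide, tall detection
print(classify(0.7, 0.8))   # a narrow, short detection
```

A toy like this is trivially free of any patent concerns; the engineering (and legal) difficulty only appears once you need the accuracy of the specific, possibly patented, methods that work in the real world.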
Here are my predictions:
I think they’ll achieve near AP1 parity by the end of this year. I don’t think we’ll get rain sensing, speed limit sign detection, vehicle type identification, or showing cars in other lanes, but I think general behavior will be pretty close. I think they’ll have to work out a licensing arrangement on the patents to get the missing features, and patent licensing can be slow.
I think we will need more specialized hardware (AP2.5) to get to real Enhanced Autopilot. Not because of raw computing power, but because there will be a need to hardware-accelerate patent-bypassing methods.
I think by this time next year we will have the Enhanced Autopilot behavior we are hoping for. I don’t think we will see FSD for at least a year after that (2019), but it really depends on patents, not government regulation, in my opinion.
What do you think?