
Why AP 2.0 Won't Be Here Soon, and It Won't Be What You Think It Is

No hands hovering near steering wheel?
Big ass box with a few more than 4 wheels?
Few more gears to shift?
Slight risk of jackknifing?
Bigger horn? :D
Spilling crappy Bud all over the highway? :eek:

Gears is an interesting point. I thought nearly all semis were still manual transmission - often with eighteen gears.

Teaching Otto to shift that might be an adventure all its own. Or did they find/make an automatic semi to convey?
 
A new type of transmission for trucks has been coming online over the last decade called the Automated Manual Transmission. From the context of a couple of articles I scanned, it appears the gears make physical contact like a manual transmission (instead of using a torque converter), but some kind of controller decides which gear is best for any given speed.

Automated Manual Transmissions | Trucking Efficiency
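
Roughly, the "controller decides which gear" part comes down to picking whichever ratio keeps the engine closest to its efficient speed for the current road speed. Here is a toy sketch of that idea; every ratio and number below is invented for illustration and has nothing to do with any real AMT's logic:

```python
# Toy illustration of automated gear selection; all numbers are made up.
GEAR_RATIOS = {1: 11.1, 2: 8.2, 3: 6.1, 4: 4.5, 5: 3.3,
               6: 2.5, 7: 1.8, 8: 1.35, 9: 1.0, 10: 0.74}  # hypothetical 10-speed
FINAL_DRIVE = 3.55            # hypothetical rear-axle ratio
WHEEL_RPM_PER_MPH = 8.4       # hypothetical, depends on tire size
TARGET_RPM = 1300             # hypothetical fuel-efficient engine speed

def engine_rpm(speed_mph: float, gear: int) -> float:
    """Engine speed implied by a given road speed in a given gear."""
    return speed_mph * WHEEL_RPM_PER_MPH * FINAL_DRIVE * GEAR_RATIOS[gear]

def best_gear(speed_mph: float) -> int:
    """Pick the gear whose implied engine speed is closest to the target."""
    return min(GEAR_RATIOS, key=lambda g: abs(engine_rpm(speed_mph, g) - TARGET_RPM))

print(best_gear(5), best_gear(30), best_gear(60))  # climbs through the gears: 2 8 10
```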
 
Sounds like the double-clutch "automatic" manual transmissions in BMWs (and I think now other cars, too).

Borg Warner and VW teamed up to develop those originally, which have been on some VW products for a decade.

I think VW had a five-year exclusive deal, based on when they started showing up in the rest of the market.

This year, dual clutch transmissions are in everything from my Ford Focus rental in July to the Bugatti Veyron - though I haven't heard of a heavy truck version.
 
Those accidents were all due to documented limitations of the early hw/sw. The system ignored stationary returns from the radar, since there are many such returns, such as posts, concrete blocks, road surface changes, etc., which would have caused braking if not ignored. This was documented specifically in the cars' manuals. In addition, the drivers were and are responsible for the driving. These accidents were caused by people, not AP. One can argue about 'loss of attention', and we have had those arguments repeatedly. Tesla, which knows more about its own requirements, thinks the AP2 hardware is sufficient. While they may be wrong, their opinion is much better than the peanut gallery's, and I include myself in that category. o_O
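
To make that documented limitation concrete, here is a rough, purely speculative sketch of what "ignore stationary returns unless something else confirms them" could look like; the threshold, field names, and vision_confirms callback are all my own invention, not Tesla's actual logic:

```python
STATIONARY_THRESHOLD_MPS = 1.0   # invented cutoff for "effectively not moving"

def filter_radar_targets(targets, ego_speed_mps, vision_confirms):
    """Keep moving targets; keep stationary ones only if another sensor confirms them."""
    kept = []
    for t in targets:  # each target: {'id', 'range_m', 'closing_speed_mps'}
        ground_speed = ego_speed_mps - t["closing_speed_mps"]
        if abs(ground_speed) > STATIONARY_THRESHOLD_MPS:
            kept.append(t)            # clearly a moving vehicle
        elif vision_confirms(t["id"]):
            kept.append(t)            # stationary, but confirmed as a real obstacle
        # else: treated as roadside clutter (signs, posts, barriers) and ignored
    return kept

targets = [{"id": 1, "range_m": 80, "closing_speed_mps": 10},   # slower car ahead
           {"id": 2, "range_m": 120, "closing_speed_mps": 27}]  # stationary object
print(filter_radar_targets(targets, ego_speed_mps=27, vision_confirms=lambda _: False))
```
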
I understand that this was documented, etc. I know this is off topic, but did AP under v8 address the bad behavior with stationary vehicles? Elon Musk claimed it would, but I never really saw any user feedback on that.

Call me short-term minded, but I am far more interested in having these kinds of issues resolved soon than in imagining myself 10 years older riding in an autonomous ride-sharing vehicle :)
 
 
You think it will take longer to modify the existing AP1 software to adapt to AP2 hardware than it took to create it in the first place? LOL is right.

Thank you kindly.

What isn't known is how much of AP1 came from mobileye and how much came from Tesla. The delayed and disappointing "launch" of AP2 so far indicates to me not much was Tesla after all.

Kindly stop thanking us.
 
If I had to guess, Tesla wrote much of the "front end" software while Mobileye did the high-level object detection and vector/lane/path processing. I believe what Mobileye was doing was more difficult. For example, the EyeQ3 would process (overly simplified example) "right lane detected, curving left at 30 degrees" while Tesla would write the software telling the car what to do in that situation. Or, Mobileye would report "Vehicle detected at 1 degree ahead, 200 ft away, moving at 45 mph" and the Tesla software would use that to (1) update the dash GUI, (2) adjust the speed if necessary, and (3) help position Autosteer more accurately. Tesla was working on making the car drive comfortably and safely using that data.

So, I believe Tesla has a very good handle on what to do with the world it sees, when it knows what it sees, but it's now building the neural network to accurately turn the pixels from a camera (or 8 cameras) into categorized objects and vectors. I think we're seeing trouble in THIS part of the equation, since this was being developed by Mobileye for years on multiple generations of hardware. Until recently, Tesla's experience with Autopilot (again, my speculation) was limited to what to do and how the car should react when those objects/lanes/vectors were/were not detected by 3rd party hardware and software.

To be clear, I'm sure they'll get there, but any hiccups are likely from this development process.
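
To picture that split, here is a toy sketch of the hand-off; every class, field, and number is my own invention to illustrate the idea, not anything from Tesla's or Mobileye's actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class LaneReport:            # the kind of summary a perception chip might emit
    side: str                # "left" / "right"
    curvature_deg: float     # sign convention invented: + means curving left

@dataclass
class VehicleReport:
    bearing_deg: float
    range_ft: float
    speed_mph: float

def control_layer(lanes, vehicles, set_speed_mph):
    """The "front end": turn perception reports into display and driving decisions."""
    dash = [f"{v.range_ft:.0f} ft ahead @ {v.speed_mph:.0f} mph" for v in vehicles]
    lead = min(vehicles, key=lambda v: v.range_ft, default=None)
    target = min(set_speed_mph, lead.speed_mph) if lead and lead.range_ft < 250 else set_speed_mph
    steer_bias = sum(l.curvature_deg for l in lanes) / max(len(lanes), 1)
    return {"dash": dash, "target_speed_mph": target, "steer_toward_deg": steer_bias}

print(control_layer([LaneReport("right", 30.0)],
                    [VehicleReport(1.0, 200.0, 45.0)], set_speed_mph=65))
```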
 
Ummm, lots to comment on. AP1 uses a layered approach, i.e., Mobileye does the object recognition and Tesla does the software integration, planning, and driving execution. AP2 uses deep AI to process from camera to execution in a more integrated way. AP1 functionality will be duplicated by AP2 very shortly (1-3 months), and in parallel it will extend its capabilities. Tesla will have been working on this for longer than we are aware. The OP missed on his hardware predictions, and I predict he will miss on his software predictions, too. Still, the situation is pretty fluid and predicting accurately is hard, so we shouldn't be too critical of anyone's predictions -- but it is all coming.
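
As a purely schematic contrast of the two approaches (every function below is a stub I made up; nothing here resembles real Autopilot code):

```python
def layered_ap1(frame, perception_chip, planner, actuator):
    """AP1-style: a vendor perception box feeds a separately written planner."""
    objects = perception_chip(frame)          # e.g., the Mobileye EyeQ3's role
    return actuator(planner(objects))

def integrated_ap2(frames, end_to_end_net, actuator):
    """AP2-style: one learned model covers more of the pixels-to-trajectory path."""
    return actuator(end_to_end_net(frames))

# toy stand-ins just to show the data flow
print(layered_ap1("frame", perception_chip=lambda f: ["car@200ft"],
                  planner=lambda objs: "slow to 45 mph",
                  actuator=lambda cmd: f"executing: {cmd}"))
print(integrated_ap2(["f1", "f2"],
                     end_to_end_net=lambda fs: "keep lane at 62 mph",
                     actuator=lambda cmd: f"executing: {cmd}"))
```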
 
The delayed and disappointing "launch" of AP2 so far indicates to me not much was Tesla after all.

From the fact that Tesla decided to release the hardware before the software, you are projecting many years of development delay? They did exactly the same thing with AP1, and that didn't take years to come on line. And they aren't starting from scratch.

Thank you kindly.
 
If I had to guess, Tesla wrote much of the "front end" software while Mobileye did the high-level object detection and vector/lane/path processing. I believe what Mobileye was doing was more difficult. For example, the EyeQ3 would process (overly simplified example) "right lane detected, curving left at 30 degrees" while Tesla would write the software telling the car what to do in that situation. Or, Mobileye would report "Vehicle detected at 1 degree ahead, 200 ft away, moving at 45 mph" and the Tesla software would use that to (1) update the dash GUI, (2) adjust the speed if necessary, and (3) help position Autosteer more accurately. Tesla was working on making the car drive comfortably and safely using that data.

So, I believe Tesla has a very good handle on what to do with the world it sees, when it knows what it sees, but it's now building the neural network to accurately turn the pixels from a camera (or 8 cameras) into categorized objects and vectors. I think we're seeing trouble in THIS part of the equation, since this was being developed by Mobileye for years on multiple generations of hardware. Until recently, Tesla's experience with Autopilot (again, my speculation) was limited to what to do and how the car should react when those objects/lanes/vectors were/were not detected by 3rd party hardware and software.

To be clear, I'm sure they'll get there, but any hiccups are likely from this development process.
I completely agree. Tesla didn't have object/path detection algorithms and data. Otherwise they could've easily adapted them to the new system by just enabling one camera.

Mobileye didn't like the fact that Tesla was using its technology to autosteer the car for long periods of time, and after the fatal accident, they wanted to end the deal. It looks like the AP2 solution by Tesla was rushed because of that. Otherwise Tesla would have migrated AP1 algorithms and data to AP2 before releasing it.
I wonder what will happen if AP2 cars start to crash and Nvidia wants to end the deal because of bad PR.
 
Otherwise Tesla would have migrated AP1 algorithms and data to AP2 before releasing it.

Not if they were planning on changing the hardware. Once they decided the hardware, it makes the most sense to get it into as many cars as possible, even if software had to wait. Which is why they did exactly that last time too.

Thank you kindly.
 
I completely agree. Tesla didn't have object/path detection algorithms and data. Otherwise they could've easily adapted them to the new system by just enabling one camera.

Mobileye didn't like the fact that Tesla was using its technology to autosteer the car for long periods of time, and after the fatal accident, they wanted to end the deal. It looks like the AP2 solution by Tesla was rushed because of that. Otherwise Tesla would have migrated AP1 algorithms and data to AP2 before releasing it.
I wonder what will happen if AP2 cars start to crash and Nvidia wants to end the deal because of bad PR.

Unless I'm very confused, Tesla is now developing all of the software. Even if NVidia felt the need to end the relationship, Tesla should be able to port the DNNs to another hardware base without anywhere near as much difficulty as they are having now.

Keep in mind, though, Mobileye didn't end the relationship over crashes, despite the sour grapes comments about Tesla taking chances with safety.

They notified Tesla that after the contract ended, they wanted a bunch more money for the next lot of the same chips, and they also insisted they would have to get access to Tesla's fleet learning data pool.

Tesla said no, so Mobileye told them it was over, and then tried to blacken Tesla's eye with news stories, which is why we've heard the whole falling-out story. It was never about the fatal accident.
 
I think Tesla had been working on this for a while, and I think they thought the approach of using Mobileye for object detection was limiting, they did not have full control over it, and in their estimation Mobileye was moving too slowly. Tesla has the basic object recognition layers done, as we have seen in the demo videos. I expect that things will improve greatly over the next 3 months, and AP1 functionality will be exceeded.
 
I'm pretty sure any new software they develop will not be tied directly to DrivePX2 or any hardware they don't control. I wouldn't be surprised to see their own chipset eventually.

When they started, Tesla selected Mobileye. So, at some point, Mobileye was ahead of Tesla's internal object detection capabilities. Mobileye has 600 employees and 17 years of experience. It's a $9B (market cap) company that released its first generation of EyeQ over 10 years ago. (In fact, the original DrivePX even used an EyeQ3 processor on it.) There is a significant amount of work on the object detection/computer vision side for Tesla to catch up on. Again, I think they can do this... and they may have been working on this for a while, but we're talking approximately 2-3 years versus all of Mobileye's experience.