
Buyer Beware: AP2 Extremely Dangerous

1. Hacking software = DMCA violation. Hacking anything like this would be criminal to begin with, so any results are further crimes.
...
It's perfectly legal, at least for now.

The hack proposed was illegal, and it is still illegal. Read the article you linked.

Yes, read the article. The article quoted (New DMCA Exemptions Mean You Can Now Hack Your Own Car) is from 2015, and the Library of Congress DID allow hacking and modification.

Here:
https://www.gpo.gov/fdsys/pkg/FR-2015-10-28/pdf/2015-27212.pdf#page=10
"Proposed Class 21: This proposed class would allow circumvention of TPMs protecting computer programs that control the functioning of a motorized land vehicle, including personal automobiles, commercial motor vehicles, and agricultural machinery, for purposes of lawful diagnosis and repair, or aftermarket personalization, modification, or other improvement. Under the exemption as proposed, circumvention would be allowed when undertaken by or on behalf of the lawful owner of the vehicle."
 
  • Helpful
Reactions: NerdUno
Untrue. The 24 DL TOPS would require two cards. Tesla is only using one, like that Nvidia demo buggy.
No, not at all.
Nvidia Drive PX 2 Uses Integrated and Discrete Pascal GPU Cores - 24 DL TOPS, 8 TFLOPs and Up To 4GB GDDR5 [Updated]

Each Drive PX 2 board contains 2 GPUs and 2 Tegra chips. You can see the demo unit in the picture.

[Image: img_1908.jpg]


This is in the trunk of the NVIDIA demo car BB8 (a single board):
[Image: nvidia-car-drive-px-2-2.jpg]


First picture of Tesla’s new NVIDIA onboard supercomputer for Autopilot installed in a car
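
To make the one-board-vs-two-cards point concrete, here's a minimal Python sketch tallying the published figures for a single AutoChauffeur board (numbers as reported in the article linked above, not independent measurements):

```python
# Published DRIVE PX 2 "AutoChauffeur" figures for a SINGLE board,
# per the WccfTech article linked above (reported specs, not measurements).
autochauffeur_board = {
    "tegra_socs": 2,      # two Tegra SoCs on the board
    "discrete_gpus": 2,   # two discrete Pascal GPUs on the board
    "dl_tops": 24,        # deep-learning TOPS for the whole board
    "fp32_tflops": 8,     # FP32 TFLOPS for the whole board
}

# The disputed claim was that 24 DL TOPS "would require two cards".
# A single board already carries all four processors and the full 24 DL TOPS.
print(f"1 board -> {autochauffeur_board['dl_tops']} DL TOPS "
      f"({autochauffeur_board['tegra_socs']} Tegra SoCs + "
      f"{autochauffeur_board['discrete_gpus']} discrete GPUs)")
```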

@MasterT, not sure why you're on a disagree spree, but it's a little difficult to disagree with facts.
 
  • Funny
Reactions: NerdUno
@JeffK such a typical response, "facts", he says :D

I have not disagreed with a single one of your posts containing facts, only posts with your opinion, and only those with clear Tesla-fanboy sentiment.
Actually, I do try to link to my sources, but yes, I'm a Tesla fanboy for sure.

If you follow the links, you'll see the evidence of the factual nature of my statements :) such as when I cite the Tesla owner's manual and even give the page numbers.
 
  • Disagree
Reactions: ABC2D
I am a little late to the party but have enjoyed reading through this long thread. I am awaiting delivery of my X75, and was concerned about the AP2 progress so far. I wanted to link to the Nvidia CES 2016 video which I think talks about the PX2 that is in the Tesla AP2 cars.


Key notes that I took away from this video:
  • @ 24:30 - they show what Nvidia's own car was able to do after starting from scratch. It was able to learn at a high rate over 5 months, starting at 39% object detection and finishing at 88% object detection.
    • In my opinion this shows that the growing pains owners are experiencing should be seen as "normal" for the growth of this hardware set, and hopefully Tesla's software capabilities can shorten this ramp-up (a rough extrapolation of that ramp is sketched after this list). I would even argue that Elon, seeing this information, would feel he could shorten that graph by 50%, which is how he came up with the 2-3 month timeline for AP1 parity. In typical Elon fashion, I don't think he gave wiggle room for things like the Sterling Anderson issue, camera calibration issues, and other issues that may not have been released publicly.
  • @ 59 min - They demo the UI, and it clearly shows the issues owners have reported about the car jumping around in the lane.
    • To me this shows that it is not a bug specific to Tesla, but really something that has just not been given attention to at this point. IMO, I think Tesla may do away with the lane lines as the vehicle gets more FSD features. I ask myself how many drivers are really looking down to see what the AP wants to do when the car makes troubling decisions, or instead just take over out of reaction. It just seems like an unneeded feature.
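
As a back-of-envelope illustration of that ramp (my own extrapolation; the 39%/88%/5-month figures come from the keynote, while the constant-factor error decay is purely an assumption):

```python
import math

# Keynote figures: 39% object detection at the start, 88% after 5 months.
start_acc, end_acc, months = 0.39, 0.88, 5

# Assumption (mine, not NVIDIA's): the error rate shrinks by a constant
# factor each month.
start_err, end_err = 1 - start_acc, 1 - end_acc
monthly_factor = (end_err / start_err) ** (1 / months)
print(f"error shrinks to {monthly_factor:.1%} of itself each month")  # ~72.2%

# Under that assumption, total months from the start to hit 99% detection:
target_err = 0.01
months_needed = math.log(target_err / start_err) / math.log(monthly_factor)
print(f"~{months_needed:.1f} months to reach 99% detection")  # ~12.6 months
```

Halving the remaining portion of that curve, as speculated above about Elon's thinking, is roughly where a 2-3 month figure could come from.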
 
I ask myself how many drivers are really looking down to see what the AP wants to do when the car makes troubling decisions, or instead just take over out of reaction. It just seems like an unneeded feature.
... but what if you didn't have to look down? What if you had an augmented reality HUD on the windshield that could overlay this information onto real objects as you're seeing them?

The other side of the argument (for no HUD) is that if AP 2.0 is going to be good enough for FSD, then there's plenty of time to look at the display. If FSD were perfect it wouldn't be needed at all, but there's going to be a transition phase where humans need to see something like that in order to feel more comfortable with what the autonomous systems are doing/"thinking". Like many of the NVIDIA demos say, it also provides a very clear indicator that you are in autonomous mode.
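
As a toy illustration of what such an overlay involves, here's a minimal pinhole-projection sketch (entirely hypothetical; not based on any actual Tesla or NVIDIA implementation):

```python
# Hypothetical AR-HUD overlay: project a detected object's 3D position
# (meters, in the car's frame: x right, y up, z forward) onto a 2D HUD
# plane with a simple pinhole model. A real windshield HUD would also
# need driver head tracking and glass-curvature correction, ignored here.
def project_to_hud(x, y, z, focal_px=1000.0, cx=960.0, cy=540.0):
    if z <= 0:
        return None  # object is behind the projection plane; nothing to draw
    u = cx + focal_px * x / z  # horizontal pixel position
    v = cy - focal_px * y / z  # vertical pixel position (screen y grows down)
    return (u, v)

# A car detected 40 m ahead and 1.5 m to the right, at eye level:
print(project_to_hud(1.5, 0.0, 40.0))  # -> (997.5, 540.0)
```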
 
  • Like
Reactions: Josh88
I am a little late to the party but have enjoyed reading through this long thread. ... It was able to learn at a high rate over 5 months, starting at 39% object detection and finishing at 88% object detection. ... It just seems like an unneeded feature.

Your keynote video combined with current AP2 user reports rather persuasively makes the case that the Tesla self-driving video of November 18, 2016 could not possibly have been using Nvidia hardware and software as was represented by Tesla. Obviously 88% object detection isn't that great if you happen to encounter a new road hazard, or if you're the unlucky 1-in-10 pedestrian or bicyclist that goes undetected in a Tesla encounter. :eek:
 
  • Love
Reactions: lunitiks
Your keynote video combined with current AP2 user reports rather persuasively makes the case that the Tesla self-driving video of November 18, 2016 could not possibly have been using Nvidia hardware and software as was represented by Tesla.

Hmmm. So Nvidia wastes 11 months of the year and then only innovates the next generation in December. Impressive.


What's not clear from Tesla is how many shadow-mode runs were involved in the build-up to that November AP video. Given the early comments from AP users, I'm not sure about the balance of general fleet learning vs. specific route learning.

Still, it would appear that Truck Lust is starting to mellow into Truck Hugging, which is still uncomfortable of course.
 
Your keynote video combined with current AP2 user reports rather persuasively makes the case that the Tesla self-driving video of November 18, 2016 could not possibly have been using Nvidia hardware and software as was represented by Tesla. Obviously 88% object detection isn't that great if you happen to encounter a new road hazard, or if you're the unlucky 1-in-10 pedestrian or bicyclist that goes undetected in a Tesla encounter. :eek:

I feel the opposite of your opinion. I think going from 39% object detection to 88% in 5 months shows that the system can learn quickly. I agree that 88% detection is nothing to be giving awards for, but the rate of learning is the point. I also think I heard Tesla was working with the Nvidia hardware around the time of this keynote, so presumably the November autonomous video was possible with 11 months of programming.

However, it would make the slow roll out of AP2 a little hard to understand...
 
... but what if you didn't have to look down? What if you had an augmented reality HUD on the windshield that could overlay this information onto real objects as you're seeing them?

The other side of the argument (for no HUD) is that if AP 2.0 is going to be good enough for FSD, then there's plenty of time to look at the display. If FSD were perfect it wouldn't be needed at all, but there's going to be a transition phase where humans need to see something like that in order to feel more comfortable with what the autonomous systems are doing/"thinking". Like many of the NVIDIA demos say, it also provides a very clear indicator that you are in autonomous mode.

I agree with all of this, and would be over the moon to have an augmented reality HUD.

I still feel that I don't need to see exactly where the AP thinks my car is in the lane. I feel a solid blue line on each side of the car accomplishes the clear indicator that AP is enabled. Ultrasonic sensor warnings/arcs provide enough feedback to know what the car is thinking, IMO.
 
@JeffK - it does seem possible, based on Musk's recent comments, that a board swap may be necessary at some point. That would not require major surgery - no rewiring or new sensors. The current brain does 12 TFLOPS, but they have more in the pipeline that do significantly more. This upgrade, if it proves needed, should be fast and simple.

Is this the brain you are referring to?

DRIVE PX 2 FOR AUTOCRUISE
Small form factor DRIVE PX 2 for AutoCruise is designed to handle functions including highway automated driving, as well as HD mapping. This platform will be available in Q4 2016.

[Image: agx1-top.png]


- See more at: Autonomous Car Development Platform from NVIDIA DRIVE PX2
 
Is this the brain you are referring to?

DRIVE PX 2 FOR AUTOCRUISE ... designed to handle functions including highway automated driving, as well as HD mapping. This platform will be available in Q4 2016. ...

No, it's one of the variations NVIDIA has on the roadmap for next year. I think the one posted is less powerful than the one currently in the cars. Not sure - they are making a number of variations of this hardware.
 
This one perhaps?



Introducing Xavier, the NVIDIA AI Supercomputer for the Future of Autonomous Transportation

At the inaugural GPU Technology Conference Europe, NVIDIA CEO Jen-Hsun Huang today unveiled Xavier, our all-new AI supercomputer, designed for use in self-driving cars.

“This is the greatest SoC endeavor I have ever known, and we have been building chips for a very long time,” Huang said to the conference’s 1,600 attendees.

Xavier is a complete system-on-chip (SoC), integrating a new GPU architecture called Volta, a custom 8 core CPU architecture, and a new computer vision accelerator. The processor will deliver 20 TOPS (trillion operations per second) of performance, while consuming only 20 watts of power. As the brain of a self-driving car, Xavier is designed to be compliant with critical automotive standards, such as the ISO 26262 functional safety specification.


Packed with 7 billion transistors, and manufactured using cutting-edge 16nm FinFET process technology, a single Xavier AI processor will be able to replace today’s DRIVE PX 2 configured with dual mobile SoCs and dual discrete GPUs — at a fraction of the power consumption.

Because autonomous driving is an incredibly compute-intense process, the need for an efficient AI processor is paramount. Xavier will bring self-driving car technology to automakers, tier 1 suppliers, startups and R&D organizations that are building autonomous vehicles, whether cars, trucks, shuttles or taxis.

Xavier samples will be available the fourth quarter of 2017 to automakers, tier 1 suppliers, startups and research institutions who are developing self-driving cars.
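
Worth noting the efficiency jump those numbers imply. A quick comparison (Xavier's 20 TOPS / 20 W is from the article above; the ~250 W for a full DRIVE PX 2 AutoChauffeur board is a commonly reported figure, so treat this as an estimate):

```python
# Rough performance-per-watt comparison as (DL TOPS, watts).
# The PX 2 board power (~250 W) is a commonly cited figure, not a spec
# from the article; Xavier's numbers are from the announcement above.
chips = {
    "DRIVE PX 2 (AutoChauffeur)": (24, 250),
    "Xavier": (20, 20),
}
for name, (tops, watts) in chips.items():
    print(f"{name}: {tops / watts:.2f} TOPS/W")
# Xavier lands around 1.0 TOPS/W vs ~0.1 for the full PX 2 board:
# roughly a 10x efficiency gain at similar throughput.
```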
 
This one perhaps?

Introducing Xavier, the NVIDIA AI Supercomputer for the Future of Autonomous Transportation ... Xavier is a complete system-on-chip (SoC), integrating a new GPU architecture called Volta ... The processor will deliver 20 TOPS (trillion operations per second) of performance, while consuming only 20 watts of power. ...
Xavier is based on the Volta architecture; it won't be out until next year, with developer preview models coming out late this year.

The Drive PX2 in Tesla cars, based on the Pascal architecture, is the full board:
[Image: NVIDIA-DRIVE-PX2-self-driving-cars-computer.jpg]
 
  • Informative
Reactions: djdimsum
When are these coming in the Model 3?

Well, the PX2 is all ready to go for the M3 launch.

No idea about Xavier unless there's a clever piece of design which allows Tesla to swap form factors - i.e. build a PX2-sized dummy cradle which holds/connects to Xavier and duplicates the PX2 connectors when the whole thing is slotted into the car.
 
Well, the PX2 is all ready to go for the M3 launch.

No idea about Xavier unless there's a clever piece of design which allows Tesla to swap form factors - i.e. build a PX2-sized dummy cradle which holds/connects to Xavier and duplicates the PX2 connectors when the whole thing is slotted into the car.
Elon has already mentioned that these are designed to be upgradable. Since Xavier is physically smaller, this should be trivial.
 
  • Like
Reactions: malcolm