
HW2.5 capabilities

Give Andrej a couple months and huge amounts of data. He'll lead Tesla's Autopilot team to better results, no doubt.
You might be joking, but it's all kind of a joke now. But if anyone is going to pull a rabbit out of the hat, I do feel like it could be him. They need a miracle, so just doing what everyone else is doing isn't going to work.
 
AP1, aka "Autopilot 1", has a tiny brain and only one eye, like this cat:

View attachment 250510

AP2, aka "Autopilot 2.0", has a big brain and several eyes. Like this crazy robot from the Animatrix:

View attachment 250511

Now appearances can be deceiving, because the crazy robot is actually just an infant. You don't want to implant the one-eyed cat's thoughts into a crazy robot infant, do you? The crazy robot infant will go even more crazy, and the one-eyed cat will lawyer up because you're invading its privacy.

Instead, what you wanna do is make the crazy robot infant try to learn some of the one-eyed cat's moves. You don't want it to copy the cat, but if you give it enough time it might learn a thing or two on its own. (You do have to be very patient, though, because there's bugs everywhere and the owner needs to be super-OCD about getting rid of them.)
Brilliant on so many levels.
 
@verygreen - any estimation as to when it might get activated for real on cars? Presumably the NN for vision changed size finally with this addition?
No idea when. This NN is a separate file.

Doubtful. I don't see the S/X getting rid of the IC and rotating the CID; Elon made too big of a deal of the dual displays at the Model 3 reveal. Now, we may see an ICE CPU upgrade to the Intel chips (that, I believe, is a near certainty); when is the magic question, and my money would be on next year. Personally, if I didn't already own my car, I would wait until Dec 31st and delay delivery for as long as possible into 2018, getting free SC for life and whatever HW goodies are currently baking at Tesla.
While they probably won't get rid of the IC (I consider a HUD to be a kind of IC too), they can drive multiple displays from a single computer, and that's likely what they would do.
As for display orientation, you know that the current CID has the display attached in landscape, so for it to work as-is they need to rotate the picture 90 degrees in software (making things needlessly slower in the process too).
 
  • Informative
Reactions: lunitiks
@verygreen - ok last question, then I'll leave you alone :)

Theoretically, could the DrivePX2 power the IC/CID, graphics-wise? As in, could it act as a GPU for the main display as well?

I'm just trying to imagine a scenario where my 2017 Tesla could manage more than the seemingly 5 fps it gets on the map, the media player, etc.
 

Do you really want to take cycles from the APE to give you a better refresh rate vs. crunching numbers to stay on the road? There's a reason it's completely separate hardware.
 
  • Like
Reactions: zmarty
I'm still a little unclear as to why we can't brainwash the robot for a while into acting like the cycloptic cat. How is it possible for Mobileye to take their tech with them? Another analogy would be helpful :)
 
@verygreen - ok last question, then I'll leave you alone :)

Theoretically, could the DrivePX2 power the IC/CID, graphics-wise? As in, could it act as a GPU for the main display as well?
Yeah, possible, but it's not connected to any displays ATM, so it would need quite a bit of rewiring. And that's totally ignoring the security issue of the lack of separation between Autopilot and the web browser and whatnot.

Some platforms offer separation by way of VMs for less critical parts of the system, but Nvidia is not one of them as far as I am aware.
 
As for display orientation, you know that the current CID has the display attached in landscape, so for it to work as-is they need to rotate the picture 90 degrees in software (making things needlessly slower in the process too).

Do they really rotate and scale in software?? I honestly can't remember the last time I've worked on something doing it that way; these days most SoC display pipelines have some sort of rotation, scaling, and compositing stage built in, which has no overhead beyond the latency of the pipeline.
 
I believe so; when I tried to output my own X window, it came out rotated 90 degrees.
The logs confirm it:

Code:
[     3.893] (II) TEGRA(0): Output LVDS-1 using initial mode 1920x1200
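
To make that overhead concrete, here's a toy sketch in plain Python/NumPy (nothing from the car's actual software stack): the UI is composed in portrait (1200x1920) while the panel scans out in landscape (1920x1200, per the log above), so a software-rotated setup has to shuffle every pixel of every frame before display.

Code:
# Toy illustration of per-frame software rotation; dimensions taken from the
# Xorg log above, everything else is made up for demonstration.
import time
import numpy as np

ui_frame = np.zeros((1920, 1200, 4), dtype=np.uint8)   # portrait RGBA frame

start = time.perf_counter()
for _ in range(60):                                    # one second at 60 fps
    panel_frame = np.rot90(ui_frame).copy()            # force the pixel shuffle
elapsed = time.perf_counter() - start

print(f"panel buffer: {panel_frame.shape[1]}x{panel_frame.shape[0]}")
print(f"~{elapsed / 60 * 1000:.1f} ms of pure rotation work per frame")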
 
  • Informative
Reactions: lunitiks
I'm still a little unclear as to why we can't brainwash the robot for a while into acting like the cycloptic cat. How is it possible for Mobileye to take their tech with them? Another analogy would be helpful :)
Ahhh... Alright then...

The cat and the crazy robot infant are not friends anymore, OK? They split up when the crazy robot infant told the cat that it wanted to eat its brain. Now the cat wouldn't let this happen --- unless the robot paid some crazy, c-e-hraaazy moolah. Unfortunately the robot didn't have the dead presidents to spend on the cat, so wtf. Anyway, they got angry at each other like two children in a schoolyard, and now they're only communicating through legal documents with extremely fine print. Meanwhile the crazy robot infant struggles to learn some moves from the cat but can't touch its brain, since that would be illegal (the robot probably doesn't fully understand how to access it yet either). But times will change. The robot just got a big brother with liquid cooling, a big fat new brain update, plus new wiring. All we need now is for daddy to resolve his OCD and OTA some SW onto their L5 AP ECU ASAP.
 

Dear god you’re good...
 
  • Funny
  • Love
Reactions: BigD0g and lunitiks
I'm still a little unclear as to why we can't brainwash the robot for a while into acting like the cycloptic cat. How is it possible for Mobileye to take their tech with them? Another analogy would be helpful :)

OK, I'll try to give an answer without resorting to unspeakable horrors. Mobileye's magic sauce is embedded in their silicon in the form of a deep-learning network, AKA an artificial neural net. The ME's brain interacts with Tesla's software via a well-defined communication protocol. This protocol basically lets the ME tell the Tesla "I see a car at these coordinates and a pedestrian at these coordinates". Tesla's own software takes it from there and decides what to do with that information.

Tesla has never had direct access to Mobileye's neural net and does not need that access in order to improve their own software that interacts with the ME device. What Tesla cannot do is ever improve the ME performance at recognizing objects. They can only improve their part of the system, which decides what to do based on what the ME sees.

I don't know whether the ME device is OTA upgradable or not, but if it is, the update would undoubtedly take the form of ME handing Tesla a "binary blob" that Tesla can't really do anything with other than upload to the camera, for both legal and technical reasons. The binary blob is not itself the magic sauce. The magic sauce is the data and algorithms that produce that blob, and you can bet ME never gave those to Tesla.

And here I will make a completely non-horrific analogy: The binary blob encoding the neural net is distilled from a vast set of data through proprietary algorithms, like a fine double malt scotch is distilled from grains roasted and fermented according to a secret recipe, and aged in multiple stages according to secret procedures handed down through the generations. If all you have is the bottle of scotch and not the vast acres of farmland to produce the grain and the secret recipes to turn it into liquor, you can't do anything but drink it and be happy.
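
For what it's worth, here's a hypothetical sketch of the division of labor being described; the message fields and thresholds are invented for illustration, since the real Mobileye interface is proprietary and not public. The perception black box only ever hands over detections, and the vehicle-side code decides what to do with them.

Code:
# Hypothetical message format and logic, made up for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    kind: str     # e.g. "car", "pedestrian", "lane_line"
    x_m: float    # longitudinal distance ahead, metres
    y_m: float    # lateral offset from ego centreline, metres

def plan(detections: List[Detection], ego_speed_mps: float) -> str:
    """Vehicle-side logic: decide what to do with what the black box reported."""
    for d in detections:
        if d.kind == "pedestrian" and d.x_m < 20.0 and abs(d.y_m) < 2.0:
            return "brake"
        if d.kind == "car" and d.x_m < 2.0 * ego_speed_mps and abs(d.y_m) < 1.5:
            return "slow_down"
    return "keep_lane"

# The chip only ever hands over detections like these; the network weights
# that produced them never leave the silicon.
print(plan([Detection("car", 25.0, 0.3)], ego_speed_mps=30.0))  # -> slow_down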
 
^ This!
 
  • Like
Reactions: zmarty
The ME advantage of AP1 is pretty clear and simple… They've got a very stable and incredibly capable neural net that acts as a black box for Tesla's software and tells Autopilot exactly where the lane lines are, what cars are in view, even the approximate distance of those cars. And it updates its belief of the lane lines based on what it sees other cars doing.

That makes Tesla's control algorithms a million times easier to code up since the input data isn't noisy/garbage.


If you look at the areas where AP2 is suffering, it's usually because there are dancing lane lines all over the screen, or it's mis-reading random grooves as lane lines, or it's just flat out not detecting certain types of vehicles (e.g. garbage trucks) or showing them in the incorrect lane, which causes inappropriate TACC braking.

But OTOH, AP2's faster hardware and faster vision give it some advantages too, like the ability to deal with cresting a sharp hill, where it has a split second to re-parse the world and determine which way to steer. It also allows AP2 to take sharper turns, since it gets quicker feedback on how its steering adjustments are changing the car's position.


But right now, Tesla Vision being less capable than AP1 is a serious limiting factor. I honestly believe at this point that the AP2 control algorithm is better than the one in AP1, but you know what they say, garbage in, garbage out. I look forward to the future neural net updates.
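
To illustrate the "garbage in, garbage out" point, here's a toy sketch with invented gains and numbers (not Tesla's controller): even a sensible lane-keeping law produces erratic commands once the perceived lane geometry gets noisy.

Code:
# Toy proportional lane-keeping law -- illustration only.
import random

def steer(lateral_offset_m: float, heading_error_rad: float) -> float:
    """Steering command computed from the perceived lane geometry."""
    return -0.4 * lateral_offset_m - 1.2 * heading_error_rad

true_offset, true_heading = 0.10, 0.02   # where the car actually is

clean = steer(true_offset, true_heading)
noisy = steer(true_offset + random.gauss(0, 0.5),    # "dancing" lane lines
              true_heading + random.gauss(0, 0.1))

print(f"command with stable perception: {clean:+.3f}")
print(f"command with noisy perception:  {noisy:+.3f}")  # can swing wildly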
 
  • Love
Reactions: buttershrimp

Tourist watching this thread. Would this then mean that the harder part is the vision, and once vision is fully solved, the control software for FSD would be mostly there? That is to say, if EAP gets working really well (I know, I know), the jump needed for a good Level 3/4 FSD will be minimal.

The pricing of the options sort of indicates this.
 


I dunno, I think there's more complexity behind FSD. If we want to boil it down, I'd say that between EAP and FSD there are a few fundamental things that need to be done:


(1) EAP needs to be much more robust at handling mundane traffic situations and holding its lane compared to right now. We can simply call that "polish" because it's making existing functionality work better.
(2) The car needs to understand traffic intersections. Where's the line to stop? What's the state of the traffic light? Which traffic light corresponds to which lane? Roundabouts, unusual intersections, crossing guards / pedestrian right of way, etc etc etc.
(3) The car needs to be able to autonomously navigate — be in the right lane, make the correct turns, obey right-of-way….
(4) The system arguably needs to be redundant to failures in order to either stop safely or transition control back to the user in a graceful manner (a rough sketch of what that might look like follows below).


IMO, #1 can be distilled to "make Tesla Vision work better". The others are fundamentally new features, and seem pretty challenging to me, as someone who writes a lot of code for a living but has never formally worked on autonomous driving.


But OTOH, IMO if Tesla can finish #1, that arguably is the only thing preventing Tesla's AP2 from matching the L3 or L3-ish systems being advertised (e.g. Audi A8, Cadillac Super Cruise), which in theory either permit or can reasonably sustain extended operation on highways without human intervention.
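
On point (4), here's a rough sketch of the kind of graceful-degradation logic that implies; the states and thresholds are invented purely for illustration, not anything Tesla has described.

Code:
# Invented states and thresholds, purely to illustrate point (4) above.
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()     # system driving normally
    HANDOVER = auto()    # fault detected, asking the driver to take over
    MANUAL = auto()      # driver has taken control
    SAFE_STOP = auto()   # no driver response: bring the car to a safe stop

def next_mode(mode: Mode, sensors_ok: bool, driver_responded: bool,
              seconds_since_warning: float) -> Mode:
    if mode is Mode.NOMINAL and not sensors_ok:
        return Mode.HANDOVER
    if mode is Mode.HANDOVER:
        if driver_responded:
            return Mode.MANUAL
        if seconds_since_warning > 10.0:
            return Mode.SAFE_STOP
    return mode

# A sensor fault pushes the system into the handover state...
print(next_mode(Mode.NOMINAL, sensors_ok=False, driver_responded=False,
                seconds_since_warning=0.0))
# ...and with no driver response after the grace period, it degrades to a safe stop.
print(next_mode(Mode.HANDOVER, sensors_ok=False, driver_responded=False,
                seconds_since_warning=12.0))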
 
  • Informative
Reactions: buttershrimp