
HW3 soon moving from emulation to native mode

Uh, has anybody anywhere with inside knowledge said anything about emulation? That seems highly unlikely to me, given the specialized nature of the neural net accelerator. More likely they used some kind of cross-compiler to recompile their existing models for HW3.

^^ This. Y'all are debating some blogger's guess as if it's actually what's happening. C'mon now.
 
I'm left to wonder if this might be the reason for the weaving between lanes (or ping-ponging) while on Autopilot since my upgrade to HW3. Maybe emulation isn't reacting fast enough to keep the car centered in the lane. Just thinking out loud.

No, because HW2.5 does that, too. Besides being late to respond, it tends to overcorrect, which is likely the main cause of the problem: probably some miscalculation in how far to turn the wheel and/or in the control logic that determines what signals to send to the steering rack to put it in a particular spot.


Uh, has anybody anywhere with inside knowledge said anything about emulation? That seems highly unlikely to me, given the specialized nature of the neural net accelerator. More likely they used some kind of cross-compiler to recompile their existing models for HW3.

No inside knowledge, but I'd imagine they are using a library that emulates the various CUDA APIs (and similar) and converts (cross-compiles, essentially) the various matrix math operations into the nearest equivalent tensor operation. I wouldn't expect the performance hit to be too horrible (no worse than using a middleware layer like OpenGL/OpenCL), but without a native compiler for the tensor processor, your compiled models would be limited to whatever the original NVIDIA hardware supported, which would probably limit them considerably, either in the form of having to compose complex operations in terms of smaller ones or in the form of being unable to handle models beyond a certain size at all.

If so then yeah, moving to a native compiler would be a necessary step towards making things work well.

Or they could just mean that HW3 uses the same model as HW2.5, and hasn't meaningfully diverged yet.
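
To make the cross-compiling idea concrete, here's a toy sketch of what an op-translation layer looks like in principle. Everything in it is made up for illustration (the op names, the supported sets); it has nothing to do with Tesla's or NVIDIA's actual software, it just shows why such a layer limits you to whatever the source hardware's op set can express:

```python
# Hypothetical sketch of an op-translation layer; all names are invented.
SOURCE_OPS = {"matmul", "conv2d", "relu", "add"}      # ops the (made-up) GPU backend emits
TARGET_NATIVE = {"conv2d", "relu"}                    # ops the (made-up) accelerator runs natively

# Unsupported ops have to be composed out of supported ones, at a cost.
DECOMPOSE = {
    "matmul": ["conv2d"],          # e.g. express a matmul as a 1x1 convolution
    "add":    ["conv2d", "relu"],  # contrived composition, purely illustrative
}

def translate(graph):
    """Translate a list of source ops into target ops, or fail."""
    out = []
    for op in graph:
        if op not in SOURCE_OPS:
            raise ValueError(f"unknown source op {op!r}")
        if op in TARGET_NATIVE:
            out.append(op)
        elif op in DECOMPOSE:
            out.extend(DECOMPOSE[op])   # pay the cost: more, smaller ops
        else:
            raise ValueError(f"op {op!r} has no target equivalent")
    return out

print(translate(["conv2d", "matmul", "relu", "add"]))
```

A native compiler, by contrast, could target the accelerator's full op set directly instead of going through the lowest common denominator.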


I doubt that's the problem.

Keeping a feedback loop ('servo') working in physical 'car time' is not too hard. It's milliseconds, not microseconds. Easy peasy.
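
For what it's worth, a toy illustration of the timescale involved. The gains and the one-line "plant" are invented; this is not Tesla's control code, it just shows that a lane-centering loop only needs to close every handful of milliseconds:

```python
# Toy lane-centering PD loop running at 100 Hz (10 ms per tick).
# Gains and the plant model are invented purely for illustration.
DT = 0.01          # 10 ms loop period: "car time", not microseconds
KP, KD = 4.0, 0.5  # made-up proportional/derivative gains

def steering_command(offset_m, prev_offset_m):
    """Return a steering correction from the lateral lane offset (meters)."""
    rate = (offset_m - prev_offset_m) / DT
    return -(KP * offset_m + KD * rate)

# Start 0.5 m off-center and let the loop run for 2 simulated seconds.
offset, prev = 0.5, 0.5
for _ in range(200):
    cmd = steering_command(offset, prev)
    prev, offset = offset, offset + cmd * DT   # crude stand-in for the car's response
print(round(offset, 3))   # offset has decayed to near zero
```

Even a slow software layer has far more than enough time per 10 ms tick for that arithmetic.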

Yeah, I don't think it's that at all. It could, however, be that it is so bogged down that the decision-making logic realizes too late that it needs to do something.


CleanTechnica feels that the rewrite is a good thing but will set FSD back a year:

"My take: Unlike the conclusion drawn in this article, this is a major delay to the Full Self Driving program. This means the demo we saw at Autonomy Day last year was judged to be not good enough to put into production and they had to go back to the drawing board. As we learned from listening to Andrej Karpathy talk about the tradeoffs between combining things and keeping them independent, putting 3 functions together will make if more efficient, but that will have 2 other effects. The interfaces to all the other parts of the system need to at a minimum be retested and maybe redesigned. Also, the ability to just quickly hand code around a small known issue in planning, perception, and image recognition is removed. Instead, you have to train the neural network around each known issue. I think this is a good change, but it probably sets the system back close to a year."
Timestamped Guide To Part 2 Of Elon Musk Interview By Third Row Tesla | CleanTechnica

I think it is more likely that the demo we saw was the current state of the refactoring up to that point, and that they have continued to merge in more functionality, replacing more of the procedural logic with machine-learning models, etc., one piece at a time.

And that's progress, not a setback. I doubt that anybody two years ago thought that the AP code stack as it existed at the time would bear even the slightest resemblance to the final stack, i.e. I'm sure they've known for a long time that this work was coming.
 
When I listened I didn’t hear that they would rewrite the code from emulation. What I heard was that they were moving from labelling images in 2D to labelling videos in 3D, and that they decided to let the neural network do perception and control at the same time instead of doing perception first and control later. Elon also clarified that Dojo was not for inference or data labelling but only for training.

I.e., what we learnt was:
1. Change the data the NN is trained on from 2D frames to 3D video, for 3 orders of magnitude more efficient labelling (rough arithmetic sketch after this list)
2. Do control and perception in the neural network at the same time
3. Use Dojo to train the network, starting maybe end of 2019, definitely 2020.
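
A back-of-envelope sanity check on point 1. All the numbers here are assumptions I'm making up for illustration (camera count, frame rate, clip length), not Tesla's actual figures; the shape of the argument is just that labeling a reconstructed clip once replaces labeling every frame from every camera:

```python
# Invented placeholder numbers; only the ratio matters for the argument.
cameras, fps, clip_seconds = 8, 36, 10

per_frame_labels = cameras * fps * clip_seconds   # label every frame of every camera
per_clip_labels = 1                               # label the reconstructed 3D clip once

print(per_frame_labels, "vs", per_clip_labels)    # 2880 vs 1, roughly 3 orders of magnitude
```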
 
Why would sensor redundancy be required for L3? If the side or back cameras fail just have the driver take over. Heck, even if the front cameras and RADAR fail you could probably safely have the driver take over 99.9% of the time.

There are two main issues with Tesla doing L3 driving.

The first issue is that it has no driver monitoring system. So it simply can't do L3 driving because it's never going to be able to prevent/detect the driver falling asleep. So if a takeover event happens the driver might be totally unprepared to take over.

My concern about the lack of redundancy among the rear/corner sensors isn't that a sensor will totally fail. It's that it could get into a bad state where even the car doesn't know it has failed. Or that a drop of rain (especially on the rear camera) reduces its effectiveness at detection. Or maybe what's next to you isn't something the neural network was trained on, so it doesn't even see it.

Simply put I'm not very comfortable with not having any redundancy for lane change maneuvers.

L3 requires that the car gives the driver a reasonable amount of time to take over. You can't give a reasonable amount of time if the ultrasonics only detect the car you're about to crash into while you're already mid lane change.

Simply put I don't see AP3 pulling off L3 driving with the current HW sensor suite. At least not at freeway speeds. Maybe a traffic assistance only system like the Audi A8 is trying to do in Germany.
 
Doesn't matter until you remove the steering wheel.

Have you read up on the autonomy levels?
Primer on SAE Levels of Autonomy

Basically I feel like the lack of rear/corner redundancy will prevent the car from performing the following function:

"Determines whether there is a DDT performance-relevant system failure of the ADS and, if so, issues a timely request to the human to intervene."

Camera pipelines fail in various ways.

Sometimes an image will get split (people have seen this with the backup camera)
Sometimes an image will freeze
Sometimes the sun angle hits it just right

I really like the rear camera in the Model 3 as it's better than any other car I've owned. But, it's not good enough for L3.
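
To make the self-diagnosis requirement concrete, here's the kind of trivial health check I mean for the frozen-image case. It's a hypothetical sketch, not anything Tesla actually runs, and it only covers one of the failure modes above:

```python
import numpy as np

def feed_looks_frozen(frames, tol=1e-3):
    """Heuristic: flag a camera feed whose recent frames are nearly identical."""
    diffs = [np.mean(np.abs(a.astype(float) - b.astype(float)))
             for a, b in zip(frames, frames[1:])]
    return max(diffs, default=0.0) < tol

# Simulated clips: one where the pipeline froze on a single image, one live feed.
stuck = [np.zeros((4, 4), dtype=np.uint8)] * 8
live = [np.random.randint(0, 255, (4, 4), dtype=np.uint8) for _ in range(8)]
print(feed_looks_frozen(stuck), feed_looks_frozen(live))   # True False
```

Split images and glare are much harder to self-diagnose, which is the part that worries me.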
 
I'm confused by this question.

Didn't we conclude that HW3 can't do L3 due to the lack of an effective driver monitoring system?

No, that was your conclusion. I don't see why the system can't be L3 and provide 5 seconds of notification to facilitate handover. If the driver is asleep, well, that's their problem and not really supported by L3 anyways.
 
End of last year meant nothing.
Now in a few months means nothing.

They're just trying to figure things out as they go and every so often Elon Musk just makes some timeline up.

Ignore the sales bullshit from Elon Musk (and any of the other autonomy companies).

The only thing that matters is what they've actually delivered to owners.
 
I'm confused by this question.

Didn't we conclude that HW3 can't do L3 due to the lack of an effective driver monitoring system?

A driver facing camera is only really needed for hands-free L2+ because you need a hands-free method of making sure the driver is able to supervise. A driver facing camera is the best way to do that. L3 means the car is autonomous in its ODD so it can drive itself without driver supervision within its ODD. So Tesla could do L3 highway by removing the steering wheel nags when the car is on the highway and NOA is on. You would have hands free driving on the highway. But the nags would resume when you are about to leave the highway. And if the driver fails to hold the wheel, the system would either pull over automatically or come to a controlled stop with the hazards on.
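
Roughly the logic I mean, as a toy state machine. The states, timings, and fallback behavior are all hypothetical, not anything Tesla has announced:

```python
# Hypothetical sketch of the handover logic described above; everything is invented.
def l3_highway_step(on_highway, noa_active, hands_on_wheel, seconds_since_nag):
    if on_highway and noa_active:
        return "HANDS_FREE"                    # inside the ODD: no steering wheel nags
    if hands_on_wheel:
        return "SUPERVISED_L2"                 # leaving the ODD: normal nag-based driving
    if seconds_since_nag < 10:
        return "NAG_DRIVER"                    # request the driver's hands back
    return "CONTROLLED_STOP_WITH_HAZARDS"      # driver never responded

print(l3_highway_step(True, True, False, 0))    # HANDS_FREE
print(l3_highway_step(False, True, False, 3))   # NAG_DRIVER
print(l3_highway_step(False, True, False, 15))  # CONTROLLED_STOP_WITH_HAZARDS
```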
 
There are two main issues with Tesla doing L3 driving.

The first issue is that it has no driver monitoring system. So it simply can't do L3 driving because it's never going to be able to prevent/detect the driver falling asleep. So if a takeover event happens the driver might be totally unprepared to take over.

My concern about the lack of redundancy among the rear/corner sensors isn't that a sensor will totally fail. It's that it could get into a bad state where even the car doesn't know it has failed. Or that a drop of rain (especially on the rear camera) reduces its effectiveness at detection. Or maybe what's next to you isn't something the neural network was trained on, so it doesn't even see it.

Simply put I'm not very comfortable with not having any redundancy for lane change maneuvers.

L3 requires that the car gives the driver a reasonable amount of time to take over. You can't give a reasonable amount of time if the ultrasonics only detect the car you're about to crash into while you're already mid lane change.

Simply put I don't see AP3 pulling off L3 driving with the current HW sensor suite. At least not at freeway speeds. Maybe a traffic assistance only system like the Audi A8 is trying to do in Germany.

I don’t think it is a deal breaker to lose a repeater camera. The system does a pretty decent job telling us when it can’t be utilized due to debris or glare already. All the car has to do if the driver is not responding is keep going until it goes past a car and that car shows up in the rear camera. It will also be tracked to a degree with the ultrasonics. It will only lose track of the car for another half a car length at most. Once it’s reasonably certain it has passed the car it can move over and ultimately pull off on the shoulder if need be. It still has ultrasonics after all and AP1 cars can make lane changes with just those all the time. You don’t need the ultrasonics to alert the driver so much as just abort the lane change if it senses something. I dunno, just a thought, but I think the car has the tools necessary to at least safely get itself off the road if one of the side (repeater or B-pillar) cameras fails.
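
Something like this is what I mean by "just abort the lane change": a toy check with made-up thresholds, not anything from Tesla's actual code:

```python
# Toy sketch only; the threshold and sensor model are invented.
def continue_lane_change(side_camera_ok, ultrasonic_range_m, min_gap_m=2.0):
    """Keep going only if the side camera is healthy, or nothing is within the gap."""
    if side_camera_ok:
        return True
    if ultrasonic_range_m is None:          # ultrasonics report nothing nearby
        return True
    return ultrasonic_range_m > min_gap_m   # abort if something is inside the gap

print(continue_lane_change(False, 1.2))    # False: camera degraded and object within 2 m
print(continue_lane_change(False, None))   # True: nothing detected, creep over carefully
```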
 
My concern about the lack of redundancy among the rear/corner sensors isn't that a sensor will totally fail. It's that it could get into a bad state where even the car doesn't know it has failed. Or that a drop of rain (especially on the rear camera) reduces its effectiveness at detection. Or maybe what's next to you isn't something the neural network was trained on, so it doesn't even see it.
I'm not sure that would be a problem. In a sense there is redundancy for the side cameras since they are seeing the same things as the front cameras just from a different angle and at a different time. I bet any system sophisticated enough to drive the car would be able to detect discrepancies.
I feel like the driver monitoring requirements for L3 are much less stringent than L2. A failure of the driver to take over is unlikely to result in an accident and should be very infrequent. How many driving situations are there where a car would be able to handle the situation for 10 seconds after it detects a problem but then fail after that?
You could implement a rudimentary system where you nag every 100 miles and if the driver doesn't respond to the nag within 10 seconds you ban them from using the system for 1000 miles. You could use the driver facing camera and send marginal cases to a human for review. How long do people stay absolutely still if they're not sleeping?
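
Spelled out, the rudimentary scheme would be something like this (the 100/10/1000 numbers are just the ones I threw out above, purely hypothetical):

```python
# Toy implementation of the nag-and-ban scheme sketched above. The intervals and
# penalties are my hypothetical numbers, not any real Tesla policy.
NAG_INTERVAL_MI = 100
RESPONSE_LIMIT_S = 10
BAN_LENGTH_MI = 1000

class AttentionCheck:
    def __init__(self):
        self.miles_since_nag = 0.0
        self.banned_until_mi = 0.0
        self.odometer_mi = 0.0

    def drive(self, miles, response_time_s):
        """Advance the odometer; response_time_s is how fast the driver answered the last nag."""
        self.odometer_mi += miles
        self.miles_since_nag += miles
        if self.odometer_mi < self.banned_until_mi:
            return "SYSTEM_UNAVAILABLE"
        if self.miles_since_nag >= NAG_INTERVAL_MI:
            self.miles_since_nag = 0.0
            if response_time_s > RESPONSE_LIMIT_S:
                self.banned_until_mi = self.odometer_mi + BAN_LENGTH_MI
                return "BANNED_FOR_1000_MILES"
            return "NAG_PASSED"
        return "OK"

ac = AttentionCheck()
print(ac.drive(100, response_time_s=3))    # NAG_PASSED
print(ac.drive(100, response_time_s=30))   # BANNED_FOR_1000_MILES
print(ac.drive(500, response_time_s=3))    # SYSTEM_UNAVAILABLE
```
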
Of course I don't have much confidence that anyone will release a L3 system any time soon. Supposedly Mercedes will be releasing one this year but my guess is it will be vaporware like Audi's system.
 
The suggestion of this article is that the software rewrite mentioned by Elon in yesterday’s Third Row video is a move from emulation to native mode on HW3.

Tesla Autopilot Mystery Solved — HW3 Full Potential Soon To Be Unlocked | CleanTechnica

This rewrite also moves all the cameras from individual analysis of still shots to one real-time 3D video, with an order of magnitude improvement in labeling accuracy.

This looks to be the step that will give us feature-complete FSD by the end of 2020.

Really hope Tesla can find a way to pick up the pace of HW3 upgrades.
Is there any evidence that they were doing emulation on HW3? What I remember is that they were just cross-compiling.

The rest of the article is useless if the author doesn't know this or can't tell the difference. If you look at the author's bio you will notice that he doesn't have a tech background.

I've a different view of the "rewrite" terminology.

Musk could be talking about
- Natively training for HW3 instead of training for HW2.5 and cross-compiling
- Move from procedural code to NN for certain aspects - planning and perception

Now, the above two things could have resulted in a complete rethinking of the tasks needed and the way they are organized, i.e. the whole NN architecture. You could definitely call that a "rewrite".

To me, the more interesting thing is the second part. Moving planning and perception into the NN feels more end-to-end than the image recognition they were mostly doing on HW2.5. But a caution here: Musk seems to have just repeated the question, which asked about "planning". Not sure whether planning has really moved, since he talks about 3D path planning using video as something that would come later.

All in all, a disappointing interview. Fanboyish questions rather than useful, intelligent ones (on the AP/FSD topic).
 
Of course I don't have much confidence that anyone will release a L3 system any time soon. Supposedly Mercedes will be releasing one this year but my guess is it will be vaporware like Audi's system.

Audi elected to only release the system in Germany, but I haven't heard any updates on whether it got regulatory approval there. It's a very watered down system that has a lot of restrictions so it could be that Audi decided not to bother releasing it. Or it could be that the regulatory body is just being extremely stingy. I simply don't have information on it to determine why it hasn't been released.

With Tesla I'm more concerned about the rear camera than anything else due to how vulnerable it is to weather. Being in Seattle we have constant drizzle, and it really messes with the image. There are quite a few times where it's completely blind, and the car doesn't report anything. It's as if the car doesn't even care about it during NoA driving. I see the repeater camera warnings fairly regularly (usually during the first 5 min of a drive, and they go away after a while unless conditions are really bad). The lack of an image from the rear camera means the system has reduced situational awareness. I also have some serious questions about whether the repeater cameras can see far back enough to track a high-speed incoming vehicle. Sadly where I live it's not a rare event at all.

We don't really have to wait till L3 to find out whether I'm wrong or right. The path to L3/L4 has at least 6-12 months of L2 driving where the system under the hood is the L3/L4 system, and all we're doing is validating it.

Currently the rear/side behavior of HW2.5 is so bad that I never trust it. There are numerous reports of it trying to change lanes into a car. But, that will get better over time. So there will be a huge chunk of time where drivers are completely trusting it while not overseeing it like they should be. If the system has any major weakness it will fail.

As to actual L3 I remain convinced that it needs a driver monitoring system, and this will actually be required by regulatory agencies in various countries. Especially in Europe.
 
It still has ultrasonics after all and AP1 cars can make lane changes with just those all the time.

There are lots of people like WK057 who insist that lane changes are better with AP1. But, that's only because the car doesn't have much information on what's next to it. The ultrasonics fail to see trailers that ride above the sensor for example.

In AP1 the lane changes are driver initiated, so to them it works beautifully. It does exactly what they ask it to do without any aborts. With AP2 it tends to abort lane changes. Lots of times I've noticed it aborts them due to ghost cars, where the system detects a car that isn't there.

In my analysis I gave Tesla the benefit of the doubt that they'd solve this issue.

But, it's too early to really say whether they will or not.

The problem with this whole question of camera sensing is that a lot of it is theoretical. Sure, it's theoretically possible to estimate distance from a mono camera (to detect what the ultrasonics fail to see), but is it going to be fast enough and accurate enough to avoid an accident? Who knows. Can it see far enough back, and track speed with enough accuracy, to get good overall situational awareness even with crazy teenage drivers speeding by?
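
For what it's worth, the "theoretically possible" part is just the pinhole model; the catch is how sensitive it is to a few pixels of error at range. The focal length and object size below are made-up example numbers, not Tesla camera specs:

```python
# Back-of-envelope mono-camera distance estimate via the pinhole model.
def mono_distance_m(focal_px, real_height_m, pixel_height):
    return focal_px * real_height_m / pixel_height

# A ~1.5 m tall car spanning 30 px in a camera with a 1000 px focal length:
print(round(mono_distance_m(1000, 1.5, 30), 1))   # 50.0 m
# Three pixels of measurement error swings the estimate by several meters:
print(round(mono_distance_m(1000, 1.5, 27), 1))   # 55.6 m
```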

The theoretical aspect is also why it's taking so long. Where we just wait till that magical day it is good enough.

Without rear/corner radars we're not just missing out on more reliable lane changes, but other things as well. Like we don't have rear cross traffic alert. We also don't have real blind spot monitoring in the mirrors.

We don't have down facing cameras either, and these could be used to provide additional redundancy for lane changes (for last second aborts).
 
Audi elected to only release the system in Germany, but I haven't heard any updates on whether it got regulatory approval there. It's a very watered down system that has a lot of restrictions so it could be that Audi decided not to bother releasing it. Or it could be that the regulatory body is just being extremely stingy. I simply don't have information on it to determine why it hasn't been released.
My suspicion is that it hasn't been released because it doesn't work. Verifying such a system would require millions of miles of real-world testing; I wonder if they did that before they announced it.
It's as if the car doesn't even care about it during NoA driving.
I'm pretty sure it doesn't use the rear camera for NoA. I've got a bike rack on the back of my car and it doesn't care except to warn me that the ultrasonics are blocked.
I also suspect that L3 is not possible with the current sensor suite without some sort of breakthrough in machine learning.
 
A driver facing camera is only really needed for hands-free L2+ because you need a hands-free method of making sure the driver is able to supervise. A driver facing camera is the best way to do that. L3 means the car is autonomous in its ODD so it can drive itself without driver supervision within its ODD. So Tesla could do L3 highway by removing the steering wheel nags when the car is on the highway and NOA is on. You would have hands free driving on the highway. But the nags would resume when you are about to leave the highway. And if the driver fails to hold the wheel, the system would either pull over automatically or come to a controlled stop with the hazards on.

No, you need a driver monitoring camera for L3 unless you want exponentially more deaths. It's even mentioned in the SAE doc.
 
No, you need a driver monitoring camera for L3 unless you want exponentially more deaths. It's even mentioned in the SAE doc.
Practically speaking, it seems like the only time an L3 system would require the user to take over would be when it's leaving its ODD or there's a system failure. For example, a "traffic jam pilot" system could simply slow to a stop if the user failed to take over. Would that really cause exponentially more deaths?
 
Does anybody have any thoughts on how close Tesla might get to say L3 autonomy on the highway with this new AP/FSD rewrite?

I am curious about this rewrite too and how it will increase autopilot performance and how it will lead to integrating more advanced features...

but this doesn't have anything to do with L3... Tesla has never made any public statements about making Autopilot L3 or anything about L3. And right now they are not even working towards that.

Why would sensor redundancy be required for L3? If the side or back cameras fail just have the driver take over. Heck, even if the front cameras and RADAR fail you could probably safely have the driver take over 99.9% of the time.

I think you are not getting how L3 works.... L3 systems must be, and are, designed so that if the driver doesn't take over there won't be an accident (with the exception of the possibility that eventually the vehicle slowly comes to a stop in lane and then gets rear-ended).

L3 needs all of the redundancy that L4 requires.... vehicle actuation, power, sensors, computation, software

I'm confused by this question.

Didn't we conclude that HW3 can't do L3 due to the lack of an effective driver monitoring system?

Lmao... that could be a reason, but far from the only reason

A driver facing camera is only really needed for hands-free L2+ because you need a hands-free method of making sure the driver is able to supervise. A driver facing camera is the best way to do that. L3 means the car is autonomous in its ODD so it can drive itself without driver supervision within its ODD. So Tesla could do L3 highway by removing the steering wheel nags when the car is on the highway and NOA is on. You would have hands free driving on the highway. But the nags would resume when you are about to leave the highway. And if the driver fails to hold the wheel, the system would either pull over automatically or come to a controlled stop with the hazards on.

I actually agree with this. I have more to say about this post that I will post back later.