M3BlueGeorgia
Active Member
Doesn't matter until you remove the steering wheel.

It doesn't change the fact that Tesla has no sensor redundancy for the rear camera or the two rear-facing cameras on the sides.
Uh, has anybody anywhere with inside knowledge said anything about emulation? That seems highly unlikely to me, given the specialized nature of the neural net accelerator. More likely they used some kind of cross-compiler to recompile their existing models for HW3.
It makes me wonder if this might be the reason for the weaving between lanes (ping-ponging) while on Autopilot since my upgrade to HW3: emulation isn't reacting fast enough to keep the car centered in the lane. Just thinking out loud.
I doubt that's the problem.
Keeping a feedback loop ('servo') working in physical 'car time' is not too hard. It's milliseconds, not microseconds. Easy peasy.
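To make the "milliseconds, not microseconds" point concrete, here is a toy sketch (not Tesla's code; the gain and tick rate are made-up illustrative values) of a lane-centering feedback loop running at 50 Hz, i.e. a 20 ms control period:

```python
import time

KP = 0.8            # proportional gain (illustrative value only)
PERIOD_S = 0.020    # 20 ms control period -> 50 Hz

def steering_correction(lateral_offset_m: float) -> float:
    """Proportional steering command from the lane-center offset."""
    return -KP * lateral_offset_m

def run_loop(offsets, period_s=PERIOD_S):
    """Issue one steering correction per control tick."""
    commands = []
    for offset in offsets:
        start = time.perf_counter()
        commands.append(steering_correction(offset))
        # Sleep out the remainder of the tick: even if perception adds
        # a few milliseconds of overhead, there is ample slack in 20 ms.
        elapsed = time.perf_counter() - start
        time.sleep(max(0.0, period_s - elapsed))
    return commands
```

The budget per tick is 20,000 microseconds, so a few milliseconds of extra latency from emulation would still leave the loop comfortably on schedule.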
CleanTechnica feels that the rewrite is a good thing but that it will set FSD back a year:
"My take: Unlike the conclusion drawn in this article, this is a major delay to the Full Self Driving program. This means the demo we saw at Autonomy Day last year was judged to be not good enough to put into production and they had to go back to the drawing board. As we learned from listening to Andrej Karpathy talk about the tradeoffs between combining things and keeping them independent, putting 3 functions together will make if more efficient, but that will have 2 other effects. The interfaces to all the other parts of the system need to at a minimum be retested and maybe redesigned. Also, the ability to just quickly hand code around a small known issue in planning, perception, and image recognition is removed. Instead, you have to train the neural network around each known issue. I think this is a good change, but it probably sets the system back close to a year."
Timestamped Guide To Part 2 Of Elon Musk Interview By Third Row Tesla | CleanTechnica
Why would sensor redundancy be required for L3? If the side or back cameras fail just have the driver take over. Heck, even if the front cameras and RADAR fail you could probably safely have the driver take over 99.9% of the time.
Doesn't matter until you remove the steering wheel.
Does anybody have any thoughts on how close Tesla might get to say L3 autonomy on the highway with this new AP/FSD rewrite?
I'm confused by this question.
Didn't we conclude that HW3 can't do L3 due to the lack of an effective driver monitoring system?
There are two main issues with Tesla doing L3 driving.
The first issue is that it has no driver monitoring system. So it simply can't do L3 driving because it's never going to be able to prevent/detect the driver falling asleep. So if a takeover event happens the driver might be totally unprepared to take over.
My concern about the lack of redundancy among the rear/corner sensors isn't that a sensor will totally fail, but that it could get into a bad state where even the car doesn't know it has failed. Or a drop of rain (especially on the rear camera) reduces its effectiveness at detection. Or maybe whatever is next to you isn't represented in the neural network's training data, so the car doesn't even see it.
Simply put, I'm not very comfortable with not having any redundancy for lane change maneuvers.
L3 requires that the car give the driver a reasonable amount of time to take over. You can't give a reasonable amount of time if the only warning is the ultrasonics detecting a car the vehicle is about to crash into while changing lanes.
Simply put, I don't see AP3 pulling off L3 driving with the current HW sensor suite, at least not at freeway speeds. Maybe a traffic-assistance-only system like the one Audi is trying to do with the A8 in Germany.
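To put "a reasonable amount of time" in concrete terms, here is a back-of-the-envelope calculation (my own illustrative numbers, not from the SAE document) of how far a car travels during a takeover window at highway speed:

```python
# Back-of-the-envelope numbers (illustrative, not from any standard).

MPH_TO_MPS = 0.44704  # exact mph -> m/s conversion factor

def takeover_distance_m(speed_mph: float, window_s: float) -> float:
    """Distance travelled (meters) during a takeover window."""
    return speed_mph * MPH_TO_MPS * window_s

# At 70 mph, a 10 s takeover window covers roughly 313 m -- far beyond
# typical ultrasonic range (a few meters), which is why ultrasonics
# alone can't provide the warning for a highway-speed lane change.
```

Even a much shorter window, say 4 s, still covers well over 100 m at that speed, so the warning has to come from long-range sensors.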
I'm not sure that would be a problem. In a sense there is redundancy for the side cameras, since they see the same things as the front cameras, just from a different angle and at a different time. I bet any system sophisticated enough to drive the car would be able to detect discrepancies.
Is there any evidence that they were doing emulation on HW3? What I remember is that they were just cross-compiling.

The suggestion of this article is that the software rewrite mentioned by Elon in yesterday's Third Row video is a move from emulation to native mode on HW3.
Tesla Autopilot Mystery Solved — HW3 Full Potential Soon To Be Unlocked | CleanTechnica
This rewrite also combines all the cameras, moving from individual analysis of still shots to one real-time 3D video, with an order of magnitude improvement in labeling accuracy.
This looks to be the step that will give us feature-complete FSD by the end of 2020.
Really hope Tesla can find a way to pick up the pace of HW3 upgrades.
Of course I don't have much confidence that anyone will release a L3 system any time soon. Supposedly Mercedes will be releasing one this year but my guess is it will be vaporware like Audi's system.
It still has ultrasonics, after all, and AP1 cars make lane changes with just those all the time.
My suspicion is that it hasn't been released because it doesn't work. Verifying such a system would require millions of miles of real-world testing; I wonder if they did that before they announced it?

Audi elected to only release the system in Germany, but I haven't heard any updates on whether it got regulatory approval there. It's a very watered-down system with a lot of restrictions, so it could be that Audi decided not to bother releasing it. Or it could be that the regulatory body is just being extremely strict. I simply don't have enough information to determine why it hasn't been released.
I'm pretty sure it doesn't use the rear camera for NoA. I've got a bike rack on the back of my car and the car doesn't care, except to warn me that the ultrasonics are blocked. It's as if the car doesn't even consider it during NoA driving.
A driver-facing camera is only really needed for hands-free L2+ because you need a hands-free method of making sure the driver is able to supervise, and a driver-facing camera is the best way to do that. L3 means the car is autonomous within its ODD, so it can drive itself without driver supervision there. So Tesla could do L3 highway driving by removing the steering wheel nags when the car is on the highway and NoA is on. You would have hands-free driving on the highway, but the nags would resume when you are about to leave the highway. And if the driver then failed to hold the wheel, the system would either pull over automatically or come to a controlled stop with the hazards on.
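The handover logic described above can be sketched as a small state machine (my own illustration of the post's proposal, not Tesla's implementation; all names are made up):

```python
from enum import Enum, auto

class Mode(Enum):
    L3_HANDS_FREE = auto()   # on highway, NoA engaged, no nags
    L2_NAGGING = auto()      # approaching the exit, nags resume
    FALLBACK_STOP = auto()   # driver unresponsive: pull over / stop

def next_mode(on_highway: bool, noa_on: bool, leaving_soon: bool,
              driver_holding_wheel: bool) -> Mode:
    """Pick the supervision mode from the current driving situation."""
    if on_highway and noa_on and not leaving_soon:
        return Mode.L3_HANDS_FREE      # hands-free within the ODD
    if driver_holding_wheel:
        return Mode.L2_NAGGING         # driver is back in the loop
    return Mode.FALLBACK_STOP          # controlled stop, hazards on
```

The key design point matching the post: leaving the ODD (`leaving_soon`) forces a transition out of hands-free mode, and the fallback stop is only reached if the driver ignores the resumed nags.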
Practically speaking, it seems like the only time an L3 system would require the user to take over would be when it's leaving its ODD or there's a system failure. For example, a "traffic jam pilot" system could simply slow to a stop if the user failed to take over. Would that really cause exponentially more deaths?

No, you need a driver monitoring camera for L3 unless you want exponentially more deaths. It's even mentioned in the SAE doc.