Nissan launches hands-off ProPilot 2.0 On-Ramp to Off-Ramp (Rivals Navigate on Autopilot)

Humans are able to navigate without an HD map, so theoretically a neural network should be able to do the same from an even better, full 360-degree vantage point.
Humans rely heavily on past experience and/or signs to navigate (such as signs that tell you which lane you should be in to go in a certain direction). It will take some time until autonomous vehicles can read and interpret all signs reliably, which is part of the reason why everyone uses maps today. The other reason is that a map provides another source of redundancy in case the sensor input is ambiguous.
 
  • Helpful
Reactions: croman
And camera monitoring would have prevented the latest death? The guy initiated AP 10 seconds before the accident occurred.
So, presumably, he stopped paying attention at some point within that time frame. How exactly would a camera-based system have acted differently to ensure the driver saw the truck in that amount of time?

The answer, of course, is that it wouldn't.
It might have. Apparently Super Cruise starts beeping after 4 seconds of inattention. My guess is it probably wouldn't have stopped the car in the remaining 6 seconds, though, so probably not. Of course, Super Cruise would never have allowed the system to be engaged there in the first place.
 
Humans rely heavily on past experience and/or signs to navigate (such as signs that tell you which lane you should be in to go in a certain direction). It will take some time until autonomous vehicles can read and interpret all signs reliably, which is part of the reason why everyone uses maps today.
This is a good point. Humans are much more comfortable driving somewhere where they've driven before because they have a "map" of the area in their brains.
 
HD maps don't need to include every single object. That would be foolish, both because of the time and resources required to capture that level of detail and because it would increase the chances of your map being wrong. Rather, maps just need to include the important fixed landmarks such as the road itself, overpasses, signs, etc. You would not include transient objects like construction cones.

HD maps are also not your primary source for driving, for the obvious reason that you don't want to crash the car if the map is wrong. Rather, HD maps serve as a backup system. For example, in poor visibility such as fog, where your cameras will struggle, the self-driving car can fall back on the HD map to confirm where the road is. The primary sources are your sensors: cameras, lidar, radar, etc. They are the ones that actually guide the car.
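To make the fallback concrete, here is a minimal Python sketch of "sensors first, map as confirmation." Every name and threshold is hypothetical, for illustration only, not any vendor's actual stack:

```python
# Sketch: prefer live sensors; fall back to the HD map only to confirm
# where the road is (e.g., in fog). All names/thresholds are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LaneEstimate:
    center_offset_m: float  # lateral offset of the lane center from the car
    confidence: float       # 0..1

CAMERA_FLOOR = 0.6  # assumed confidence below which vision alone isn't trusted

def fuse_lane(camera: Optional[LaneEstimate],
              map_lane: Optional[LaneEstimate]) -> Optional[LaneEstimate]:
    if camera is not None and camera.confidence >= CAMERA_FLOOR:
        return camera    # real-time sensing leads whenever it is trustworthy
    if map_lane is not None:
        return map_lane  # degraded mode: hold lane from the map while slowing down
    return None          # no trustworthy estimate -> driver handoff / safe stop

# Heavy fog drops camera confidence to 0.3, so the map estimate is used:
print(fuse_lane(LaneEstimate(0.10, 0.3), LaneEstimate(0.05, 0.9)))
```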
 
  • Like
Reactions: willow_hiller
HD maps don't need to include every single object. That would be foolish, both because of the time and resources required to capture that level of detail and because it would increase the chances of your map being wrong. Rather, maps just need to include the important fixed landmarks such as the road itself, overpasses, signs, etc. You would not include transient objects like construction cones.

HD maps are also not your primary source for driving, for the obvious reason that you don't want to crash the car if the map is wrong. Rather, HD maps serve as a backup system. For example, in poor visibility such as fog, where your cameras will struggle, the self-driving car can fall back on the HD map to confirm where the road is. The primary sources are your sensors: cameras, lidar, radar, etc. They are the ones that actually guide the car.
The problem with treating the HD map as a redundant backup for vision is that it is not a real-time system (unlike lidar etc., which are real-time). Cars shouldn't drive on HD maps at all; when the primary system fails, the backup HD map can only be used to safely disengage the AV (e.g., pull over to the side of the road and park).

So the question is: when vision is working, how should EyeQ use the HD map, and what should it do when vision and the map differ? These are complicated questions, and depending on how much risk a company is willing to take, different companies can do different things.
 
  • Like
Reactions: willow_hiller
Always good to hear a Tesla critic from the ICE capital. Someday Musk will get a clue and follow Detroit engineers.

I mean, I'd be in pretty poor spirits too if I were a software engineer reporting to Detroit execs who have proven utterly clueless on anything software-related. Have you tried using any traditional manufacturer's media system recently? Most have punted that entire stack to Apple and Google, which IMO is a good thing, as I'm highly skeptical many existing car companies will ever transition to compete with world-class software companies. It's not in their culture, and that change would have to start at the top and would take many years (if ever). But they are definitely very good at other things, like mass production at consistent quality levels and bean counting. Chevy still ties advanced safety features to things like leather seats and engine size; you can't get one without the others. I guess those shoppers who prefer cloth and fuel efficiency deserve less safety? :confused:

And Nissan isn't any better:
The CarWings hack in the Nissan app reportedly required only the vehicle's VIN to authenticate the app, whereas most other apps require at least one other identifier, like an e-mail and a personalized PIN or password.

Can you imagine the breathless worldwide coverage if it were Tesla that just "forgot" to authenticate connections to their cars? I wonder how many affected Nissan owners were ever aware of that issue.

Even my BMW seems to struggle with maintaining a basic connection to the vehicle from their app. I'd estimate the connection fails about 1/3 of the time. And even when it does connect it takes forever. This is a premium brand? o_O
 
The problem with treating the HD map as a redundant backup for vision is that it is not a real-time system (unlike lidar etc., which are real-time). Cars shouldn't drive on HD maps at all; when the primary system fails, the backup HD map can only be used to safely disengage the AV (e.g., pull over to the side of the road and park).

So the question is: when vision is working, how should EyeQ use the HD map, and what should it do when vision and the map differ? These are complicated questions, and depending on how much risk a company is willing to take, different companies can do different things.

Which is why I think you still need an excellent vision NN no matter what.
 
So the question is: when vision is working, how should EyeQ use the HD map, and what should it do when vision and the map differ? These are complicated questions, and depending on how much risk a company is willing to take, different companies can do different things.
It's not an either/or question. The car already uses a combination of multiple inputs to evaluate the situation (sensor fusion of vision, radar, and ultrasonics, and in other cars also lidar). Mapping data is another source it can use to increase confidence in the result.
 
It's not an either/or question. The car already uses a combination of multiple inputs to evaluate the situation (sensor fusion of vision, radar, and ultrasonics, and in other cars also lidar). Mapping data is another source it can use to increase confidence in the result.
Either way, they have to have heuristics to figure out what to do when the sources differ, and if using an NN, train it to do something (e.g., prefer vision) when they differ, right?
 
Either way, they have to have heuristics to figure out what to do when the sources differ, and if using an NN, train it to do something (e.g., prefer vision) when they differ, right?
Yes. There are a number of different methods that can be used for sensor fusion and combining different data sources, e.g. Bayesian models or convolutional neural networks. The goal is for the combined result to have a higher degree of confidence than the individual data sources.
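As a toy illustration of the Bayesian flavor, here is how two independent Gaussian estimates of the same quantity (say, distance to the lane edge, one from vision and one from the map) combine by inverse-variance weighting. Just a sketch of the math, not anyone's production code:

```python
# Toy Bayesian fusion: product of two Gaussian estimates of one quantity.
def fuse(mu1: float, var1: float, mu2: float, var2: float):
    w1, w2 = 1.0 / var1, 1.0 / var2       # inverse-variance weights
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)                 # tighter than either input alone
    return mu, var

# Two agreeing sources yield an estimate between them with less uncertainty:
print(fuse(3.00, 0.04, 3.05, 0.09))      # ~ (3.015, 0.028)
# A residual |mu1 - mu2| that is large relative to the variances signals
# divergence -- exactly where a policy decision has to kick in.
```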
 
So Elon Musk is god? Whatever he says goes? He's the alpha and omega? The final decision maker? The easy conclusion is that their implementation was horrific, and with Elon's track record he told them to scrap it before they had a chance to right the ship. Basically the whole automatic-windshield-wiper scenario all over again.

There is a reason that even today there is still only one company doing crowd-sourced HD mapping in production, and that's because it's a hard problem. It's not easy.

Look at Elon's track record on these types of decisions. He also said a driver-monitoring camera is a mistake, yet three people have died after falling asleep while on AP. There are also maybe hundreds of videos of people sleeping while on AP; in fact, we now see regular police reports of people being arrested for sleeping while on AP.

Not only that, but he refused to even add capacitive touch sensing to the steering wheel and stuck with reading torque input from the driver, which has been poor.

Elon hasn't been making the best decisions when it comes to semi-autonomous / autonomous driving implementation and tech.

Capacitive touch sensing would be very easy to fool. Maybe their implementation of torque sensing could be better, but I think it is the right approach. Eye tracking is another bad idea IMO, certainly for those who wear sunglasses (like me).
 
Just to add one more thing: I am not dismissive of others' tech progress. It's great that Nissan is developing a great driver assist, similar to NOA. I am dismissive of falsely characterizing others' tech progress as way superior to Tesla's when it is not. Big difference!

And how do YOU know whether it's superior or not? Care to know how many times Tesla's AutoPilot almost drove me into a ditch, a barrier, or another vehicle? I don't have enough fingers and toes to do the math, but I have more than I would have had if I'd actually depended upon AutoPilot. My daughter's Nissan Rogue with ProPilot 1.0 has never once darted into another lane of traffic. Wish I could say the same for my (since retired) Tesla AP2 vehicle.
 
  • Informative
Reactions: OPRCE and Inside
And how do YOU know whether it's superior or not? Care to know how many times Tesla's AutoPilot almost drove me into a ditch, a barrier, or another vehicle? I don't have enough fingers and toes to do the math, but I have more than I would have had if I'd actually depended upon AutoPilot. My daughter's Nissan Rogue with ProPilot 1.0 has never once darted into another lane of traffic. Wish I could say the same for my (since retired) Tesla AP2 vehicle.

First of all, I was comparing features. In terms of features, ProPilot 2 is essentially the same as NOA. Obviously, I can't compare performance since I have not tried ProPilot 2. But you have to compare apples to apples. ProPilot 1 is not the same feature set as NOA.

Second, do you know how many times Tesla's EAP almost drove me into a ditch, a barrier or another vehicle? ZERO. And my EAP has never tried to dart me into another lane of traffic either.
 
Yes. There are a number of different methods that can be used for sensor fusion and combining different data sources, e.g. Bayesian models or convolutional neural networks. The goal is for the combined result to have a higher degree of confidence than the individual data sources.
Whatever model you use, the end result should be higher confidence when the signals converge and lower when they diverge. So ultimately a policy decision needs to be made on what to do when the confidence is lower, and if this happens frequently because of things like construction, the HD map would be more of a nuisance than a help. If the training data is such that vision always wins when there is a divergence, the NN will learn to ignore the HD map.
 
Whatever model you use, the end result should be higher confidence when the signals converge and lower when they diverge. So ultimately a policy decision needs to be made on what to do when the confidence is lower, and if this happens frequently because of things like construction, the HD map would be more of a nuisance than a help. If the training data is such that vision always wins when there is a divergence, the NN will learn to ignore the HD map.
Think about the simple example of overpasses. Let's say you've got a radar system that can't tell the difference between a stopped truck and an overpass. Without a map, the car would either slam on the brakes at every overpass (unacceptable!) or run into every stopped truck. With a map, you can brake for stopped trucks unless they're under an overpass. Since overpasses only cover <1% of the road, you've now reduced the number of stopped trucks you hit by >99%. That's a huge improvement in safety!
More data is better.
If you watch Cruise's videos, you can see that they deal with construction all the time, but they still use detailed maps.
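A sketch of that gating logic (the function names and the 30 m tolerance are made up for illustration): the map is consulted only to discount stationary radar returns at known overpass locations:

```python
# Sketch: gate braking on stationary radar returns using mapped overpasses.
OVERPASS_MARGIN_M = 30.0  # assumed tolerance around a mapped overpass

def should_brake(return_pos_m: float, overpass_positions_m: list) -> bool:
    """Brake for a stationary return unless it coincides with a mapped
    overpass, where it is almost certainly the bridge and not a truck."""
    near_overpass = any(abs(return_pos_m - p) < OVERPASS_MARGIN_M
                        for p in overpass_positions_m)
    return not near_overpass

overpasses = [1200.0, 4800.0]             # map data for this road segment
print(should_brake(3000.0, overpasses))   # True: likely a stopped vehicle
print(should_brake(1210.0, overpasses))   # False: mapped overpass, ignore
```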
 
  • Like
Reactions: OPRCE
Whatever model you use, the end result should be higher confidence when the signals converge and lower when they diverge. So ultimately a policy decision needs to be made on what to do when the confidence is lower, and if this happens frequently because of things like construction, the HD map would be more of a nuisance than a help.
You could make the same argument about radar. The fallacy in this argument is obviously that in some situations one source will be better, in others the other. In many cases the combination will provide more certainty.
If the training data is such that vision always wins when there is a divergence, the NN will learn to ignore the HD map.
That will not be the case with well-chosen training data. For example, if the vision result has low confidence (e.g., because lane markings are faded) and the map data high confidence (e.g., the map was updated very recently), the map data will probably be prioritized. But if at the same location the vision system recognizes traffic cones (indicating construction activity), it would obviously make sense to override the map data.
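A sketch of that arbitration (every name and number here is invented for illustration): when vision confidence is low, the fresher map wins, unless vision sees cones, which vetoes the map:

```python
# Sketch of per-frame source arbitration; names/thresholds are invented.
def pick_lane_source(vision_conf: float, map_conf: float,
                     cones_detected: bool) -> str:
    if cones_detected:           # construction: the live scene overrides the map
        return "vision"
    if vision_conf >= map_conf:  # e.g., crisp lane markings
        return "vision"
    return "map"                 # e.g., faded markings, recently updated map

print(pick_lane_source(0.3, 0.9, cones_detected=False))  # 'map'
print(pick_lane_source(0.3, 0.9, cones_detected=True))   # 'vision'
```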
 
Let's say you've got a radar system that can't tell the difference between a stopped truck and an overpass.
Why not just train the NN with ample data about stopped trucks vs. overpasses? Humans never confuse the two.

To me, the HD map looks like a lot of added complexity without much benefit. It is useful as a fail-safe backup system to safely park the car at the side of the road.
 
It also doesn't make much sense in a real-life context. Let's say some new construction cones are placed on the edge of a road. Ten ProPilot 2 vehicles pass by and dodge the construction cones using the neural net instead of the HD map, because the cones are a new obstacle. Now, since ten vehicles have passed, the HD map is updated to include them as an obstacle. Then, in the afternoon, the cones are removed. Do the next ten ProPilot 2 vehicles dodge a random bit of road that once had cones? Or do they rely on their neural net to immediately see there's no obstacle there? At that point, the HD map becomes functionally useless, since all cars are using the neural nets to override erroneous HD map data.

An HD map doesn't include cones. An HD map doesn't include movable obstacles/objects.
 
Capacitive touch sensing would be very easy to fool. Maybe their implementation of torque sensing could be better, but I think it is the right approach. Eye tracking is another bad idea IMO, certainly for those who wear sunglasses (like me).

No, because driver-monitoring systems use infrared, which works through sunglasses and at night, and they also use pose detection to track where you are looking if they can't see your eyes. Tesla's driver monitoring has killed 4 people and led to hundreds of accidents. Point me to the same ratio in other Level 2 systems? Seems like an irrational defense to me.
 
  • Disagree
Reactions: diplomat33
Why not just train the NN with ample data about stopped trucks vs. overpasses? Humans never confuse the two.
Humans have a few billion more neurons and much better training "algorithms".

BTW, how will you know when the training data is "ample"? There are all kinds of interesting computer vision challenges out there. ;)

