Anyways, I could live with a car looking like that if it really could drive my son to karate all by itself!
But I think your son might choose to re-program the drop-off and pick-up points a block away and walk the rest.
Two factors involved: the available data and the use of that data. There are human drivers who are safer than other human drivers with the same data. Remove distractions and excessive speed for conditions and you should be down to a zero "at-fault" accident rate.

Vision only would work if the expectation of safety from an autonomous car weren't orders of magnitude greater than for a human driver.
Cameras can have a greater dynamic range than the eye, along with faster response. In addition, for frontal impacts, there are multiple cameras covering that zone. Lastly, if everyone used the boating safety requirement, "always be able to stop in half your visible distance," then things are slow, but safe.

Hitting a kid because the cameras were blinded by the sun won't be acceptable.
Crashing into cars in the fog won't be acceptable.
Those drivers are not safe (well meaning or not); if you can't see, don't move.

Well-meaning drivers get into accidents every day because they just couldn't see. Why would we give autonomous cars this handicap?
Is this for reversing out of perpendicular parking spaces / dead ends? Extending past the human POV, a 180-degree fisheye rear camera helps a lot, with a wider view than a standard reverse cam (hypothetical: why the Model 3 backup cam looked so bad at first). Side-view rear-mounted cameras would also show oncoming traffic. (Out of scope: but not as much as cross traffic being more aware of encroaching vehicles, or backing into the spot, which isn't always possible.) Other option: safe cars don't (normally) park in high-risk spots.

Even non-autonomous cars need to have better sensing capability to help drivers out. One of these is rear cross-traffic radar, because the backup camera by itself isn't good enough.
Try closing one eye and driving around a car park. Make sure you have insurance.
Pretty much all of the states allow one-eyed drivers, so long as their eye is correctable to close to 20/20 vision.
Plus the cameras can't move, and don't have stereo vision for depth perception.
I have been driving with one eye since last summer (2017), when a deer came through my Model X windshield. I drive regularly now (bought a service loaner in Dec 2017). It was certainly uncomfortable at first, because you are used to having peripheral vision, and your distance judging has to be reprogrammed around different cues. Stereo vision is MOSTLY really useful when things are close ... so less than arm's length. Your eyes are only a few inches apart!!! Big whoop at long distances. Out to 20-25' you have to retrain your brain. Playing racquet and ball sports has been a challenge but my brain is adjusting (pickleball, racquetball, tennis, etc.). I'm finding out MANY people have problems with one eye and still do about everything under the sun. People who have had one-eye vision since birth or a very young age have adapted unbelievably well. I have a friend with one eye since a very young age and he plays hockey and racquetball !!!! -- very well I might add.

Human vision is enough to drive. Tesla has low-quality camera vision. It would fail the sight test in many European countries. Plus the cameras can't move, and don't have stereo vision for depth perception.
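The "big whoop at long distances" point actually checks out numerically. Here's a rough sketch, assuming a ~6.5 cm eye baseline and roughly 20 arcseconds of stereoacuity (both are textbook-ish assumptions, not measurements from anyone in this thread):

```python
# Back-of-the-envelope stereo depth resolution, supporting the point that
# binocular depth perception degrades quickly with distance.
# Assumed numbers: ~6.5 cm eye baseline, ~20 arcsec disparity resolution.
import math

BASELINE_M = 0.065                            # typical human interpupillary distance
DISPARITY_RES_RAD = math.radians(20 / 3600)   # ~20 arcsec stereoacuity

def depth_uncertainty(distance_m: float) -> float:
    """Depth error caused by one resolvable disparity step.

    For small angles, disparity (radians) = baseline / distance, so one
    disparity step corresponds to a depth error of roughly
    distance**2 * step / baseline.
    """
    return distance_m ** 2 * DISPARITY_RES_RAD / BASELINE_M

for d in (0.5, 2, 7, 25):
    print(f"at {d:5.1f} m: depth error ~ {depth_uncertainty(d):6.3f} m")
```

By 25 m out, the depth error from stereo alone approaches a meter, so beyond car-park distances everyone is effectively judging depth monocularly anyway.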
I have a private pilot buddy who is effectively blind in one eye. Has no problems flying. There's also the famous airline captain Carlos Dardano, who only has one eye. He was in command of TACA Flight 110, where he made an emergency landing on a levee after his airliner lost its engines.
It seems possible that a computer could do something similar, perhaps by tracking changes as the vehicle moves. However, I'm not sure if it'd be practical to implement that with HW2 or 2.5. That's the billion-dollar question for a lidar-free system, stereo or not.
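For the "tracking changes as the vehicle moves" idea, the core math is just triangulation with a temporal baseline: two frames from one camera, separated by a known ego-motion, act like a stereo pair. A minimal sketch, assuming a pinhole model and made-up numbers:

```python
# Minimal sketch of depth from motion parallax: two frames from one camera,
# separated by a known ego-motion, act like a stereo pair whose baseline is
# the distance travelled between frames. All numbers are illustrative.
FOCAL_PX = 1000.0        # assumed focal length in pixels

def depth_from_parallax(baseline_m: float, pixel_shift: float) -> float:
    """Triangulated depth for a static point seen in two frames.

    baseline_m:  camera translation between frames (e.g. from wheel
                 odometry), measured perpendicular to the viewing ray.
    pixel_shift: how far the feature moved in the image between frames.
    """
    if pixel_shift <= 0:
        raise ValueError("no parallax: point too far away or moving with us")
    return FOCAL_PX * baseline_m / pixel_shift

# A feature that slides 25 px while the camera translates 0.5 m:
print(depth_from_parallax(0.5, 25.0))   # -> 20.0 m
```

The catch, and part of why it's the billion-dollar question, is that this only holds for static scene points; anything that moves between the two frames breaks the triangulation.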
I think this example is relevant as I believe they are using a single camera lens. comma.ai referenced these guys and showed their usage in some of their testing. They are using open-source software called ORB-SLAM2. I would guess this would be more for HD mapping, so you would know all the permanent reference data points (signs, trees, fire hydrants, buildings, guard rails, etc.) for localization.

Due to my interest in autonomous driving, and my poor stereo vision, I've been contemplating such things on my commute. It seems fairly simple, if you can differentiate road from objects: perspective makes things that are further away sit closer to the center of vision, and increases the amount of road between the bottom of the scene and the object.
Using the same technique that 360-degree overhead camera systems use, mapping four fisheye cameras onto a virtual ground plane, you can derive an approximate location for things with one camera.
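A minimal sketch of that ground-plane trick for a single forward camera: assume a flat road, know the camera's height and pitch, and intersect each pixel's viewing ray with the ground. All intrinsics and mounting numbers below are illustrative assumptions:

```python
# Sketch of the ground-plane trick: with a flat-road assumption and known
# camera height and pitch, each image row maps to a distance along the road.
import math

FOCAL_PX = 1000.0                   # assumed focal length in pixels
CY = 480.0                          # principal point row (960-row image)
CAM_HEIGHT_M = 1.4                  # assumed camera height above the road
CAM_PITCH_RAD = math.radians(2.0)   # assumed slight downward pitch

def ground_distance(pixel_row: float) -> float:
    """Distance to the ground point seen at a given image row.

    Rows below the principal point look downward; the ray's angle below
    horizontal is the pitch plus atan((row - cy) / f). Intersecting that
    ray with a flat ground plane gives distance = height / tan(angle).
    """
    angle_below_horizon = CAM_PITCH_RAD + math.atan((pixel_row - CY) / FOCAL_PX)
    if angle_below_horizon <= 0:
        raise ValueError("ray never hits the ground (at or above the horizon)")
    return CAM_HEIGHT_M / math.tan(angle_below_horizon)

for row in (520, 600, 760):
    print(f"row {row}: ~{ground_distance(row):5.1f} m ahead")
```

This is also why the road-between-you-and-the-object cue from the previous post works; it degrades on hills, or when the object's contact point with the road is occluded.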
From a normal single-frame image, one can pick out where things are in relation to each other. By comparing time-separated frames, velocity can be deduced, and three frames give you acceleration. Demo software on Nvidia GPUs does full-screen motion vectors faster than real time (the same type of thing used in video compression).
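A tiny sketch of that frame-differencing idea, with made-up positions for a tracked object (one per frame):

```python
# With per-frame positions for a tracked object, a first finite difference
# gives velocity and a second difference gives acceleration.
def derivatives(positions_m: list[float], dt_s: float):
    """First and second finite differences of a 1-D position track."""
    velocities = [(b - a) / dt_s for a, b in zip(positions_m, positions_m[1:])]
    accels = [(b - a) / dt_s for a, b in zip(velocities, velocities[1:])]
    return velocities, accels

# Object range from three frames taken 0.1 s apart (e.g. 10 fps tracking):
v, a = derivatives([30.0, 28.5, 26.8], dt_s=0.1)
print(v)   # [-15.0, -17.0] m/s  (closing on us)
print(a)   # [-20.0] m/s^2      (and closing faster)
```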
Lidar gives a mapping of the current locations of the first surface it hits, but does not provide correlation to specific objects. Is that just a car, or a car with a person/child next to it?
I think a hard case to solve is neighborhood driving, with a child running in from the side and then becoming occluded by a car before popping out into the road. Another one is detecting a driver (inside or outside) of a parked car who may open the door into traffic.
Crosswalks will be tougher for lidar, I think. If you have to yield to a pedestrian, you need to determine which objects on the side of the road are pedestrians, shrubbery, or loiterers. This is easier if you can tell which way they are facing.
Hi, using your post as a springboard. You had replied to a "vision only" post, so much of this is outside the scope of your response because of the other sensors on a Tesla; where that's the case I call out AP sensors that are not cameras.
Two factors involved: the available data and the use of that data. There are human drivers who are safer than other human drivers with the same data. Remove distractions and excessive speed for conditions and you should be down to a zero "at-fault" accident rate.
Cameras can have a greater dynamic range than the eye, along with faster response. In addition, for frontal impacts, there are multiple cameras covering that zone. Lastly, if everyone used the boating safety requirement, "always be able to stop in half your visible distance," then things are slow, but safe.
(Off scope: Tesla also has radar)
If cars do not overdrive their vision, fog and whiteouts are not an issue. (Off scope: ultrasound and radar)
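For what it's worth, the half-your-visible-distance rule converts directly into a speed limit for a given visibility. A rough sketch, assuming a 0.5 s reaction time and 6 m/s² of braking (both assumed numbers):

```python
# Sketch of the boating rule quoted above ("always be able to stop in half
# your visible distance"): solve stop_distance(v) = visibility / 2 for v.
import math

REACTION_S = 0.5    # assumed system reaction time
DECEL_MS2 = 6.0     # assumed hard braking on dry pavement

def max_safe_speed(visibility_m: float) -> float:
    """Largest speed v with v*t_react + v^2/(2a) <= visibility/2.

    Setting stopping distance equal to half the visibility gives the
    quadratic v^2 + 2*a*t*v - a*visibility = 0; take the positive root.
    """
    a, t = DECEL_MS2, REACTION_S
    return -a * t + math.sqrt((a * t) ** 2 + a * visibility_m)

for fog in (200, 50, 20):   # visibility in metres
    print(f"{fog:3d} m visibility -> {max_safe_speed(fog) * 3.6:5.1f} km/h")
```

At 20 m of visibility that works out to about 30 km/h, which is the "slow, but safe" trade-off in numbers.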
Those drivers are not safe (well meaning or not); if you can't see, don't move.
(Off scope: radar, or at really low speed ultrasound)
Is this for reversing out of perpendicular parking spaces / dead ends? Extending past the human POV, a 180-degree fisheye rear camera helps a lot, with a wider view than a standard reverse cam (hypothetical: why the Model 3 backup cam looked so bad at first). Side-view rear-mounted cameras would also show oncoming traffic. (Out of scope: but not as much as cross traffic being more aware of encroaching vehicles, or backing into the spot, which isn't always possible.) Other option: safe cars don't (normally) park in high-risk spots.
I've been thinking about this as I drive. I come to a stop sign on a road that tees into a main road, and I have to check traffic both ways before turning left. I spot a moving hole approaching a few hundred yards away in traffic to my right, and then look for a matching hole in traffic approaching from my left. When the holes look like they will arrive at my location at about the same time, I have about a half second to jump in and get up to speed. I can't see how any self-driving car could do this.
I don't see why an autonomous car with just cameras couldn't do this. The car could easily watch traffic in both directions (at the same time, even) and make these simple calculations.
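Roughly, the "simple calculations" could look like this: turn each lane's traffic into clear-time windows at the junction, then look for an overlap long enough to pull out in. All speeds, distances, and thresholds below are made-up illustrations:

```python
# Sketch of the left-turn "matching holes" judgement from the quoted post.
def gap_windows(cars, horizon_s=30.0, cross_s=1.0):
    """Time windows during which one lane is clear at the junction.

    cars: (distance_m, speed_ms) tuples for approaching vehicles,
          sorted nearest first. cross_s is how long each car blocks
          the junction as it passes.
    """
    windows, t_clear = [], 0.0
    for dist, speed in cars:
        t_arrive = dist / speed
        if t_arrive > t_clear:
            windows.append((t_clear, t_arrive))
        t_clear = max(t_clear, t_arrive + cross_s)
    windows.append((t_clear, horizon_s))  # clear out to the sensing horizon
    return windows

def first_joint_gap(left, right, need_s=4.0):
    """Earliest overlap of a left-lane and right-lane window >= need_s."""
    for l0, l1 in left:
        for r0, r1 in right:
            start, end = max(l0, r0), min(l1, r1)
            if end - start >= need_s:
                return start
    return None

left = gap_windows([(60, 15), (90, 15), (250, 15)])
right = gap_windows([(40, 14), (220, 14)])
print(first_joint_gap(left, right))   # -> 7.0 (seconds from now)
```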
That's kind of a funny post, because you keep mentioning other sensors as off-scope.
So it's like you're trying to defend a vision-only system, but then you're kind of suggesting that, yeah, maybe having some other sensors would be a good thing. So feel free to mention other sensors, but please avoid mentioning ultrasonics. That just makes everyone cringe, because we're mostly Tesla owners. As Tesla owners we know firsthand that ultrasonics suck, and that's why our blind-spot monitoring system doesn't work some/most of the time. That's something AP2 owners are looking forward to vision being added to solve.
My point was that vision only wasn't enough.
There is an inescapable fact that the safety requirement for autonomous driving will be magnitudes of degrees greater than for a human driver, and that you have to have redundancy not just in the number of sensors, but in the type of sensor. It's not just eliminating at-fault accidents; it's also avoiding no-fault accidents. Even with lots of sensors, the state of the art in autonomous driving seems to be causing accidents because it's overly cautious. It's going to be made massively worse if the autonomous car is given less information and then forced to slow down excessively in adverse conditions.
You can't escape the irrefutable fact that some sensing systems are better than others for specific situations and weather conditions.
It's also really tough to achieve vision that exceeds the human vision system if you include every part of the human vision system: not just the dynamic range, but the ability to process the image. So we use other sensing systems to give the computer data to validate data from the vision system, or the other way around. If you read up on it you'll see articles talking about the need for bike-to-car communication, because vision-only systems seem to have issues detecting cyclists. Bike-to-car communication is a way to solve that problem for now, until the computer vision system is at a point where it solves it on its own. Keep in mind we're only a couple of years from when Subaru (a vision-only system) was slowing down for shadows on the road. We Tesla owners are lucky in that we have radar.
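That cross-validation idea can be sketched very simply: accept a vision detection as confirmed when another sensor's return lands inside a gate around it. A toy version (the gate sizes and the range/bearing representation are assumptions, not anyone's actual fusion stack):

```python
# Toy sensor cross-validation: a vision detection is "confirmed" when a
# radar return falls inside a simple range/bearing gate around it.
def confirmed(vision_dets, radar_returns,
              range_gate_m=3.0, bearing_gate_rad=0.05):
    """Return the vision detections that at least one radar return supports."""
    out = []
    for v_range, v_bearing in vision_dets:
        for r_range, r_bearing in radar_returns:
            if (abs(v_range - r_range) <= range_gate_m
                    and abs(v_bearing - r_bearing) <= bearing_gate_rad):
                out.append((v_range, v_bearing))
                break
    return out

vision = [(42.0, 0.01), (18.0, -0.20)]   # (metres, radians), e.g. a cyclist
radar = [(41.2, 0.012)]                  # one return near the first detection
print(confirmed(vision, radar))          # -> [(42.0, 0.01)]
```

A real system would weight unconfirmed detections rather than drop them, but the gating idea is the same.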
There is also the need for autonomous driving to improve the efficiency of our roads. Now, you argued that the proper way to deal with every situation was to simply slow down or stop altogether. That's fine for a human driver who is limited to vision only, but it seems like a massive artificial limit for autonomous cars when one of their primary benefits over a human is the use of additional sensors.
As Tesla owners we know firsthand that ultrasonics suck, and that's why our blind-spot monitoring system doesn't work some/most of the time.
There is an inescapable fact that the safety requirement for autonomous driving will be magnitudes of degrees greater than for a human driver, and that you have to have redundancy not just in the number of sensors, but in the type of sensor.
It's also really tough to achieve vision that exceeds the human vision system if you include every part of the human vision system: not just the dynamic range, but the ability to process the image.
My blindspot monitoring system works all of the time when parking and when changing lanes.
I don't see why autonomous driving has to be "magnitudes of degrees" (whatever that is) greater than a human driver.
It just has to be as good or better.
Cars have been navigated by the human vision system forever. Human vision can be thought of as a camera connected to a very sophisticated visual processing system. It's only a matter of time until artificial visual processing systems surpass the brain.
A lot of talk about redundancy. People highly overestimate the need, because they assume that it can't ever fail or the car will go berserk and start mowing people down. If for some reason a camera fails, the car just stops and calls for help, just like you would if your engine seized.
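The fail-safe behavior described here is basically a watchdog plus a minimal-risk stop. A toy sketch (the timeout value and the states are made-up assumptions):

```python
# Toy fail-safe supervisor: if a camera feed stops arriving, don't "go
# berserk" -- degrade to a minimal-risk stop and call for help.
import time

HEARTBEAT_TIMEOUT_S = 0.2   # a camera counts as failed after 200 ms of silence

class CameraWatchdog:
    def __init__(self, camera_ids):
        now = time.monotonic()
        self.last_seen = {cam: now for cam in camera_ids}

    def frame_received(self, cam):
        """Call this each time a frame arrives from a camera."""
        self.last_seen[cam] = time.monotonic()

    def failed_cameras(self):
        """Cameras whose feed has gone silent for too long."""
        now = time.monotonic()
        return [cam for cam, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT_S]

def control_step(watchdog):
    """One supervisor tick: drive normally, or pull over and phone home."""
    failed = watchdog.failed_cameras()
    if failed:
        return f"minimal-risk stop; request assistance (lost: {failed})"
    return "drive"
```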