Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

General Discussion: 2018 Investor Roundtable

Status
Not open for further replies.
Article on Model 3 by a professional driver posted on a BMW forum - via reddit:

The steering, brakes, and balance were all on par with my expectations of a sport sedan—think E46 M3

This car is a game-changer; it will be relatively attainable compared to its predecessors, and it was even able to satisfy the driving bias of an old-school BMW-lover like me. I really didn’t want to like it, but I found little to complain about.

:D
 
I hope we will actually do this. And that @myusername will admit he was wrong if Tesla hits 4000/week and not hide behind the fact it’s not quite 5000. But also that all the bulls will admit they were wrong if it does turn out to be 2000/week and not hide behind whatever wishful thinking is then du jour.
I will happily admit that I am wrong if Tesla is producing M3s at 4000/wk at the end of Q1... of course I would... If Tesla had produced 50k M3s for 2017... I'd have happily admitted that I was wrong also. My stated expectations for 2017 were less than 20k. I've been saying this for a year... completely against the grain... called out for FUD... and they aren't even coming close to my expectations!
 
So, I only threw a glance at that discussion as it pertains to LIDAR. And I spent today on the road, the last hour or so through thick banks of fog in moose country. Anyone trusting their lives and those of their loved ones to light wavelength tech alone is reckless and suicidal and should be confined. Radar also will not dependably detect animals. One always has to adapt to current conditions. I wish there was a law mandating reflectors on moose (let's start with humans) :rolleyes:
What do you do today? You slow down. Full autonomy wasn’t promised at normal daylight speeds in all conditions. The cars will have to drive slower. When forward facing LIDAR is $100 a sensor, then they can add that as a new sensor input, but until then the cars can simply go slower.
 
What do you do today? You slow down. Full autonomy wasn’t promised at normal daylight speeds in all conditions. The cars will have to drive slower. When forward facing LIDAR is $100 a sensor, then they can add that as a new sensor input, but until then the cars can simply go slower.

Guys.. Vision is light-based just like LIDAR. You don't need to know what a moose is to know you don't want to hit the unidentifiable object in the middle of the road. Using binocular cameras, the car can see the object exists and has physical dimensions. Radar can confirm that something is physically there even if the signature is not detailed. I think Elon once said that if a UFO landed on the road, the car would stop for it, though I could be mistaken on the exact quote. I think this is kinda what he meant. You don't have to know what every object is and how it acts, you only need to know that the object exists and is something you don't want to hit. Shadows, for example, could look like an object in the road, but the vision system and radar determine that they have no depth and reflect no radar, so they're ignored.

Edit: Just another quick note. More sensors is not necessarily better. Fewer sensors that are better is much more desirable. If you have more sensors, which one is right when there is a conflict? Vision + radar can be a reasonable substitute for LIDAR and can allow for enough accuracy and detail. The advantages are many, including cost and the fact that LIDAR will bounce off anything, including fog. There are LIDARs and solutions that work in fog, but they are more expensive than a simpler system.
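The "you don't need to know what it is" logic above can be sketched in a few lines: treat anything with physical extent (per stereo vision) or a radar echo as an obstacle, and ignore flat patches like shadows that have neither. This is just an illustrative toy, assuming made-up function names and thresholds — not anything from Tesla's actual stack.

```python
# Hypothetical sketch of vision + radar fusion for "is this a solid obstacle?"
# The function name and the 5 cm threshold are assumptions for illustration.

def should_brake(stereo_depth_m, radar_return):
    """Return True if the detection looks like a solid obstacle.

    stereo_depth_m: physical depth/extent estimated from binocular vision
                    (0.0 for a flat patch such as a shadow).
    radar_return:   True if radar reports an echo at that location.
    """
    has_physical_extent = stereo_depth_m > 0.05  # arbitrary 5 cm threshold
    return has_physical_extent or radar_return

# A shadow: no stereo depth, no radar echo -> ignored.
print(should_brake(0.0, False))  # False
# An unidentified solid object (moose, UFO, debris) -> brake.
print(should_brake(1.2, False))  # True
# Radar-only confirmation (e.g. poor visibility) -> brake.
print(should_brake(0.0, True))   # True
```

The point of the OR is exactly the one made above: the system never needs to classify the moose, only to agree that something with depth or a radar signature is in the way.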
 
Guys.. Vision is light-based just like LIDAR. You don't need to know what a moose is to know you don't want to hit the unidentifiable object in the middle of the road. Using binocular cameras, the car can see the object exists and has physical dimensions.

It's not that simple. LIDAR is active, cameras are passive. With binocular vision you have to do photogrammetry - matching up objects in one scene with those in another. It's an inherently error-prone process, as anyone who's ever done, say, panoramic photo stitching can attest, because some points in pairs of images will look similar, even though they're not the same object. Our minds don't screw it up because they can reason out what's the most logical explanation. But determining what makes sense or not is an AI-hard problem.
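To see why photogrammetry errors matter, consider the standard stereo depth relation Z = f·B/d (focal length times baseline over disparity): a mismatch of even one pixel in the correspondence step shifts the recovered depth by meters at range. The numbers below are made up for illustration, not from any real camera.

```python
# Toy illustration of stereo depth from disparity, Z = f * B / d, and how
# sensitive it is to matching errors. Focal length and baseline are assumed.

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth in meters from stereo disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

f, B = 1000.0, 0.3  # 1000 px focal length, 30 cm camera baseline (assumed)
print(stereo_depth(f, B, 10.0))  # 30.0 m at a 10 px disparity
print(stereo_depth(f, B, 9.0))   # ~33.3 m: a single-pixel matching error moves depth by >3 m
```

That sensitivity is the quantitative face of the matching problem described above: get the correspondence slightly wrong and the depth estimate is off substantially, which an active sensor like LIDAR sidesteps by measuring range directly.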

Radar is a nice piece of the puzzle, but radars see the world strangely. A moose is like a translucent ghost to a radar. I'm not even sure thin things like its legs would show up at all. A piece of plywood or fiberglass wouldn't. Yet a scrap of aluminum foil is like a glaring beacon to radar. Radar also has lots of reflection-noise problems, particularly in cities. The way LIDAR sees the world is more similar to how we see the world, and as a general rule, if it's solid and a possible threat to your car, LIDAR will see it, while otherwise, it won't. Not that LIDAR is without its problems; for example, an exception to the above is precipitation, which they have to try to filter out of the datastream algorithmically. LIDAR can also suffer reflection problems, but not as badly as radar. But in general, LIDAR is a superb-quality datastream. This is offset by its awkward form, high price, and the fact that you still need a vision processing system regardless; the LIDAR just helps you figure out what is where in a much better manner than photogrammetry does.

IMHO, there's a lot of potential in time-of-flight LIDAR. It's basically just very time-sensitive cameras, and should be very cheap in mass production - and they double as the cameras for your vision system. Broad (not narrow-beam) light pulses are sent out and the cameras determine when they're detected at each point on the sensor. Tesla should be able to readily incorporate it into future vehicles should they decide to, just replacing their existing cameras. By replacing photogrammetry with a true depth-measuring system, their existing vision system should return much more reliable results.
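The arithmetic behind time-of-flight ranging is just d = c·t/2 (the pulse travels out and back, so you halve the round trip). A quick sketch of that calculation, with an assumed example timing, no vendor's API involved:

```python
# Time-of-flight ranging: distance is d = c * t / 2, since the light pulse
# makes a round trip. The 200 ns example timing is an assumption.

C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """Distance to target in meters from the round-trip pulse time."""
    return C * round_trip_s / 2.0

# A pulse returning after ~200 ns corresponds to roughly 30 m:
print(round(tof_distance_m(200e-9), 2))  # 29.98
```

The timing precision this demands is why the sensor is described above as "very time-sensitive cameras": resolving centimeters requires resolving tens of picoseconds per pixel.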
 