
Beyond LIDAR and standard cameras: Time-Of-Flight cameras


KarenRei

Jul 18, 2017
There was an interesting article posted on Seeking Alpha today arguing that cameras will win the "LIDAR vs cameras" debate for autonomous vehicles - but not in the way they're currently used:

Tesla: Cameras Might Win The Autonomous Race After All - Tesla Motors (NASDAQ:TSLA) | Seeking Alpha

In the autonomous space today, you have two very different philosophies - those based primarily around LIDAR (such as Google/Waymo) and those based around cameras (like Tesla). Versus cameras, LIDAR:

* Is much more expensive (formerly ~$75k per unit, now ~$7.5k, as per Google), versus low double-digit dollars per camera.
* Requires a bulky, awkward spinning rig mounted to the top of the vehicle
* Produces a very detailed, low-error model of the world around it (cameras are prone to misstitching problems)
* May still require cameras for precise identification of what it detects

LIDAR, however, appears to be evolving into a camera-based technology: Time-Of-Flight. In this technology, light is emitted in bright pulses across a broad area, and cameras record not (just) how much light they receive, but more specifically, when they receive it, with sub-nanosecond precision. There's no spinning rig, greater vertical resolution, and most importantly, the hardware is producible with the same sort of semiconductor manufacturing technology that makes cameras so cheap. The same cameras can also double as colour-imaging cameras for identification.
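To make the principle concrete, here's a minimal sketch (my own illustration, not from the article, with a made-up sensor size) of how a direct ToF camera turns per-pixel pulse arrival times into a depth map:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_map(round_trip_times_s: np.ndarray) -> np.ndarray:
    """Per-pixel depth from round-trip pulse delay: the pulse travels out and
    back, so the one-way distance is c * t / 2."""
    return C * round_trip_times_s / 2.0

# Hypothetical 4x4 sensor where every pixel sees a ~16.7 ns round trip,
# i.e. everything in view is about 2.5 m away.
delays = np.full((4, 4), 16.7e-9)
print(depth_map(delays))  # ~2.5 m at every pixel
```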

In short, the article argues that cameras will win the day, but not the type Tesla is using, leaving it with a large liability for underdelivering on driving capabilities vs. competitors that spring up with ToF camera-based systems.

It's an interesting argument, although I'm not entirely convinced, for a number of reasons.

* What you really want is a fusion of 3d geometry and identification of what you're seeing. A line on a road, text on a sign, a lit brake light, and so forth have no detectable 3d geometry. Is that thing sticking out ahead some leaves on a tree or a metal beam? Is that a paper bag on the road or a rock? Etc. If in the future Tesla has to switch to ToF cameras for building 3d models of the world around them, they don't lose any of the progress that they've made based on a system where their 3d models are built with photogrammetry; they just map their imagery to better models.

* Tesla's liability isn't so much for the cost of FSD as it is for the cost of hardware retrofits if the current hardware proves inadequate. Swapping out cameras is almost certainly much cheaper than the cost of refunding thousands of dollars. Any vehicles not swapped out still continue to work, just with the (potentially) poorer photogrammetry-based 3d modelling.

* Unlike its competitors, Tesla's early start gives it reams of data collected from its vehicles' sensors in real-world environments - radar, ultrasound, and imagery. Even if competitors happen to choose a better 3d-mapping technology, they remain well behind in the size of their datasets. And data is critical; if you want to test a new version of your software, you can validate it against every drive in your dataset to ensure that it performs as intended.

* We actually don't know what sorts of cameras Tesla plans to incorporate, or into which vehicles. Liabilities for upgrading existing MSs and MXs would be vastly lower than for, say, a couple million M3s.

Tesla took a big hit with the Mobileye divorce, and is still playing catchup with AP2. And their insistence on using technology that was affordable to put on all vehicles without serious design compromises left them with no choice but cameras; traditional LIDAR has just been too expensive and awkward. But now that a potentially useful "upgrade" may be coming into play, will Tesla switch gears?
 
I generally don't like to read Seeking Alpha because it's like a short-TSLA cult that keeps losing money, and the more money it loses, the stronger its belief that now is the best time to short!

I have no idea about Time-Of-Flight, but of course there are many promising technologies out there that look good in theory; the question is how to make them work for the public.
 
Tesla's early start gives it reams of data collected from its vehicles' sensors in real-world environments - radar, ultrasound, and imagery.
You need to switch this to future tense, and also consider that a bunch of unclassified data is useless: you need lots of manpower to classify it (or lots of manpower to write a perfect AI to classify it for you, but then you won't need the data anymore).
 
Some copy-pasted info for those too lazy to google:
Time-of-flight camera
A time-of-flight camera (ToF camera) is a range imaging camera system that resolves distance based on the known speed of light, measuring the time-of-flight of a light signal between the camera and the subject for each point of the image. The time-of-flight camera is a class of scannerless LIDAR, in which the entire scene is captured with each laser or light pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems.

Principle
The simplest version of a time-of-flight camera uses light pulses or a single light pulse. The illumination is switched on for a very short time, the resulting light pulse illuminates the scene and is reflected by the objects in the field of view. The camera lens gathers the reflected light and images it onto the sensor or focal plane array. Depending upon the distance, the incoming light experiences a delay. As light has a speed of approximately c = 300,000,000 meters per second, this delay is very short: an object 2.5 m away will delay the light by (…) 16.66 ns.
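As a quick sanity check on that number (just re-deriving the figure above, which rounds c to 3×10^8 m/s):

```python
C = 3e8                    # m/s, the rounded value used above
distance_m = 2.5           # distance to the object
round_trip_s = 2 * distance_m / C
print(round_trip_s * 1e9)  # ~16.67 ns, matching the ~16.66 ns quoted above
```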

Components
A time-of-flight camera consists of the following components:
  • Illumination unit: It illuminates the scene. For RF-modulated light sources with phase detector imagers, the light has to be modulated at high speeds (up to 100 MHz), so only LEDs or laser diodes are feasible. For Direct TOF imagers, a single pulse per frame (e.g. 30 Hz) is used. The illumination normally uses infrared light to keep it unobtrusive.
  • Optics: A lens gathers the reflected light and images the environment onto the image sensor (focal plane array). An optical band-pass filter only passes the light with the same wavelength as the illumination unit. This helps suppress non-pertinent light and reduce noise.
  • Image sensor: This is the heart of the TOF camera. Each pixel measures the time the light has taken to travel from the illumination unit (laser or LED) to the object and back to the focal plane array. Several different approaches are used for timing; see Types of devices above.
  • Driver electronics: Both the illumination unit and the image sensor have to be controlled by high-speed signals and synchronized. These signals have to be very accurate to obtain a high resolution. For example, if the signals between the illumination unit and the sensor shift by only 10 picoseconds, the distance changes by 1.5 mm. For comparison: current CPUs reach frequencies of up to 3 GHz, corresponding to clock cycles of about 300 ps - the corresponding 'resolution' is only 45 mm. (A numeric sketch of these figures follows this list.)
  • Computation/Interface: The distance is calculated directly in the camera. To obtain good performance, some calibration data is also used. The camera then provides a distance image over some interface, for example USB or Ethernet.
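To put the timing figures from the list above into numbers, here's a small sketch (my own illustration, not from the Wikipedia article) of how timing error and modulation frequency translate into range error and unambiguous range:

```python
C = 299_792_458.0  # speed of light, m/s

def range_error(timing_error_s: float) -> float:
    """Distance error from a timing offset between illumination and sensor.
    The light travels out and back, so the error is c * dt / 2."""
    return C * timing_error_s / 2.0

def unambiguous_range(mod_freq_hz: float) -> float:
    """For a phase-measuring (RF-modulated) ToF camera, the phase wraps every
    2*pi, so range is only unambiguous up to c / (2 * f_mod)."""
    return C / (2.0 * mod_freq_hz)

print(range_error(10e-12))       # ~0.0015 m: the 1.5 mm for a 10 ps shift
print(range_error(300e-12))      # ~0.045 m: the 45 mm for a ~300 ps clock cycle
print(unambiguous_range(100e6))  # ~1.5 m unambiguous range at 100 MHz modulation
```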

Advantages
In contrast to stereo vision or triangulation systems, the whole system is very compact: the illumination is placed just next to the lens, whereas the other systems need a certain minimum base line. In contrast to laser scanning systems, no mechanical moving parts are needed.

Time-of-flight cameras are able to measure the distances within a complete scene with a single shot. As the cameras reach up to 160 frames per second, they are ideally suited to real-time applications.

Disadvantages
(snip)
In contrast to laser scanning systems where a single point is illuminated, the time-of-flight cameras illuminate a whole scene. For a phase difference device (amplitude modulated array), due to multiple reflections, the light may reach the objects along several paths. Therefore, the measured distance may be greater than the true distance. Direct TOF imagers are vulnerable if the light is reflecting from a specular surface. There are published papers available that outline the strengths and weaknesses of the various TOF devices and approaches.
Time-of-flight camera - Wikipedia
 
Using Time of Flight Imaging (TOFi) for applications like hand gesture sensing and room mapping is fine. However, if you take a deeper look into the physics of light and time, you'll quickly realize that this technology is pretty useless for automotive vision.

All papers, articles and youtube videos on this subject plainly state that ToFI measures distances (depth) by using c - the Speed Of Light. Nothing, and I mean nothing, travels faster than c. As shown in this diagram:

[Image: Konstanz_der_Lichtgeschwindigkeit.jpg]


Now, in nineteen bows and arrows, a white bearded German realized that light travels at a finite speed. C is actually constant, no matter how fast the light source is going. This means that time slows down, as opposed to speed up.

[Image: 2-specialrelat.jpg]


A particular consequence of this is that all our astronauts traveling at near light speed above the world actually age slower than people down on Earth.

When TOFi'ing your fingers using a steadycam, the light source is not moving relative to anything else and thus causes no problems. But the Planck Second you move your light-emitting camera from - or towards - your subject, a time dilation and redshift error will immediately and completely change the ADAS computer's perception of reality. Not only do the deep network's neurons fire at a slower rate: your car will in fact get heavier.

[Image: trains.gif]


This is due to a principle called General Relativity (which btw is far too complex for anyone to really understand).

The effects described above could in theory be demonstrated by aiming a telescope at a star or a black hole, whereby light deflected from stars behind it will seem distorted or gravitationally lensed ("parallax").

So in conclusion, tofi looks good on the surface but will IMO not stand the test of time.

There, my instant gratification monkey just stole 30 seconds of your time and 5 minutes of mine.
 

While I really like your post, the only equation you posted basically disproves your whole point. Just try plugging in real speeds: even at 200 mph the error due to time dilation is almost nonexistent, 0.000015%. At 80 mph it's just 0.000006%. That kind of error is totally irrelevant for cars - or anywhere else, really.
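For scale, here's a quick back-of-the-envelope sketch (my own numbers, using the small-velocity approximation of the Lorentz factor; the exact percentages depend on which error term you compute - the figures above look more like a first-order v/c effect, and the pure time-dilation term is smaller still):

```python
# Rough order-of-magnitude check, not anyone's actual ADAS code.
C = 299_792_458.0   # speed of light, m/s
MPH_TO_MS = 0.44704

def time_dilation_fraction(speed_mph: float) -> float:
    """Fractional clock error gamma - 1, via gamma - 1 ~= (v/c)^2 / 2 for v << c
    (the exact formula would lose precision to floating-point cancellation here)."""
    v = speed_mph * MPH_TO_MS
    return 0.5 * (v / C) ** 2

for mph in (80, 200):
    print(f"{mph} mph: gamma - 1 = {time_dilation_fraction(mph):.1e}")
# ~7e-15 at 80 mph and ~4e-14 at 200 mph - vanishingly small either way.
```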

And if your assumption were true, we couldn't use radar either, since it works on exactly the same principle. And don't get me started on sonar - we can't even say exactly how fast it travels.
 

Whoosh....