AP1 is an effort of Mobileye... you are contradicting yourself.

Mobileye is one piece of AP1. Tesla layered in a whole bunch of their own hardware and software to make it work on the roads, in the hands of consumers.

And now Tesla is doing that again in AP2 without Mobileye, and so far on a far faster timetable than it took AP1 to become functional after its hardware was released.

And so, meanwhile, what has Mobileye produced?

Anything?
 
Does Subaru use Mobileye?

EyeSight
No. EyeSight is supplied as a turnkey system by Hitachi Automotive Systems.
Subaru improves safety system
Personal experience with the 2017 version of EyeSight does not inspire much confidence, although the TACC works OK in clear weather.
The system was designed to get as much benefit as possible while being very, very cheap. It is built around two cameras - that's it.
I think they've done an outstanding job for what they set out to do, but it is very, very basic.
 
Browsed through some of Mobileye's patents (which are excruciatingly long) and got a few déjà vu moments:

These MBLY patent drawings illustrate “an example of a camera mount that is configured to be positioned behind a rearview mirror and against a vehicle windshield consistent with the disclosed embodiments.”:

MBLY patent - Example of a camera mount (1).jpg


MBLY patent - Example of a camera mount (2).jpg


Well, check out this old Tesla “mule” photo, published on Electrek on May 3rd, 2015:

MS test mule.jpg


This MBLY patent drawing illustrates yet another example of a camera mount:

MBLY patent - Example of a camera mount (3).jpg


Looks familiar, no? (PS: MBLY hasn’t patented these specific camera mounts, they’re just exemplary drawings.)

These MBLY patent drawings illustrate a camera calibration method using “single pole calibration”.

MBLY patent - calibration pole (1).jpg


MBLY patent - calibration pole (2).jpg


That is exactly the same target calibration method Tesla describes in the Service Manual for AP1 cars.
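
Out of curiosity, here is what that kind of single-target calibration boils down to geometrically. This is only a simplified, static pinhole-model sketch with made-up numbers (focal length, principal point, pole height/distance and the observed pixel rows are all assumed), not the actual Tesla or Mobileye procedure:

```python
# Pinhole-geometry sketch of a single-target ("pole") calibration, with
# assumed values throughout -- focal length, principal point, pole height,
# pole distance and the observed pixel rows are all made up for illustration.
# This is not the actual Tesla/Mobileye procedure, just the basic idea.

import numpy as np
from scipy.optimize import fsolve

F_PX = 1200.0       # focal length in pixels (assumed intrinsic)
CY = 512.0          # principal-point row (assumed intrinsic)
POLE_HEIGHT = 2.0   # known target height in metres (assumed)
POLE_DIST = 5.0     # known horizontal distance to the pole in metres (assumed)

# Observed image rows of the pole's base and top (hypothetical measurements).
V_BASE, V_TOP = 780.0, 300.0

def predicted_row(point_height, cam_height, cam_pitch):
    """Image row where a point on the pole projects, for a camera mounted at
    cam_height with its optical axis pitched up by cam_pitch radians."""
    elevation = np.arctan2(point_height - cam_height, POLE_DIST)
    return CY - F_PX * np.tan(elevation - cam_pitch)

def residuals(params):
    cam_height, cam_pitch = params
    return [predicted_row(0.0, cam_height, cam_pitch) - V_BASE,
            predicted_row(POLE_HEIGHT, cam_height, cam_pitch) - V_TOP]

cam_height, cam_pitch = fsolve(residuals, x0=[1.4, 0.0])
print(f"estimated mounting height: {cam_height:.2f} m, "
      f"pitch: {np.degrees(cam_pitch):.2f} deg")
```

The point is simply that two image observations of a target of known height at a known distance are enough to pin down the camera's mounting height and pitch.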

BTW, I highly recommend reading some of Mobileye's patents; they're pretty interesting. For instance, 8 years ago they patented their method of headlight detection (auto dimming), which is still missing in AP2 cars.

US Patent Full-Text Database Boolean Search
(Search for “Mobileye” or “Mobileye Technologies” in Assignee Name)
 
Sure, Mobileye have been on this for a long time (longer than Tesla). There are only a few ways to do these things well, and I wonder if part of the reason we haven't seen auto-dimming headlights is that they're trying to do it without treading on MBLY's IP. It'd be a kick in the teeth for Tesla if they were to part ways, only to have to pay MORE in royalties or lawsuit fees because they infringed patents later down the line.

Meanwhile, I think Mobileye realised their 10+ year lead in computer vision (which they are still world experts in) would plateau due to new techniques, a shift to autonomous driving rather than ADAS, plus a huge influx of research and cash in the area from the Valley (check out Facebook's amazing work on Mask R-CNN - https://arxiv.org/pdf/1703.06870.pdf).

As Jen-Hsun Huang (Nvidia CEO) explained earlier this year, autonomous driving is no longer a vision/detection issue... everyone has access to extremely accurate object detection these days. I'm a bit surprised they use such terrible cameras though - since they're so cheap, I'd have thought that at least using wideband HDR cameras for dealing with tricky light scenarios (like leaving a tunnel) would give some future proofing. Anyway, we've gone way beyond the bounding-box recognition of "this is a car", "this is a sign"... Mobileye are (as of today) still a vision/detection company to the public. Behind the scenes, they've been working on being much more than that, but it's relatively new territory for everyone.

Presumably Jim Keller is busy designing a chip that's effectively N times more powerful than EyeQ4. I doubt Elon wants to be reliant on NVIDIA in the car itself, so I would guess they'll spin their own board with a custom Tesla Vision chip for the Model 3, or maybe the 2018 Model S, or the Model Y, or the semi... basically, Tesla will want their own chips just like Apple did/do.

Mobileye + Intel is a really formidable combination, though. Intel also bought a big stake in HERE maps (and, as we know, maps give foresight and redundancy to the camera - indispensable for safe autonomous driving). Mobileye now have access to almost unlimited compute power, cutting-edge AI researchers, chip designers and fabrication (EyeQ5 will presumably be built on Intel's processes), HERE high-definition maps, basically unlimited development funds and the industrial clout to call the shots and get any meeting they want. MBLY will very likely be a $1B+ p/a revenue generator by the end of 2018.

Intel vs Nvidia for autonomous cars. Who'd have thought that'd be the case back in 2005?!
 
I'm a bit surprised they use such terrible cameras though - since they're so cheap, I'd have thought that at least using wideband HDR cameras for dealing with tricky light scenarios (like leaving a tunnel) would give some future proofing.
Source? I mean, sure the cameras are probably not very expensive, but what do you mean by cheap? Relatively speaking. What are you comparing them to?

BTW, when you say "cameras", what are you referring to - the optics, the sensor chips, or both?
 
Sure, Mobileye have been on this for a long time (longer than Tesla). There are only a few ways to do these things well, and I wonder if part of the reason we haven't seen auto-dimming headlights is that they're trying to do it without treading on MBLY's IP. It'd be a kick in the teeth for Tesla if they were to part ways, only to have to pay MORE in royalties or lawsuit fees because they infringed patents later down the line.
Good analysis. This part sounds plausible as a reason for not stepping on patents. But then it may end up being easier to just go with a separate sensor if this becomes a long-term issue.

Intel vs Nvidia for autonomous cars. Who'd have thought that'd be the case back in 2005?!
I did touch on this a bit too in a post somewhere, but I'm not convinced this is better for the industry than Mobileye vs Intel vs Nvidia (plus perhaps AMD). There's one less competitor now and perhaps one less possible approach.
 
Source? I mean, sure the cameras are probably not very expensive, but what do you mean by cheap? Relatively speaking. What are you comparing them to?

Today, cameras are perhaps the lowest-cost, highest-bandwidth sensors available in the world. They're a few dollars. There was a talk from Amnon Shashua (Mobileye CTO) who mentioned that the cameras they used in their systems (i.e. AP1 cameras) were only around 5-6 dollars each, and that talk was a couple of years old (so presumably they're even cheaper now). The cameras they use are monochrome, low-contrast and relatively low resolution compared to the cameras in, say, your phone. Relatively low-res and monochrome is actually useful, because it's more efficient to process and classify per frame than hi-res/colour. However, it's important to note that successful computer vision doesn't actually need colour, or benefit particularly from higher resolutions.

Tesla's HW2 cameras are mostly monochrome (you can see this from the Tesla Vision promo video), presumably with the exception of the rear camera (because mine's in colour!)

Anyway, compare this to other sensors commonly used in self-driving tech: the high-end Velodyne LIDAR is still very expensive - $8k for the cheapest puck, but more like $30k for a unit that would be even somewhat useful in a few scenarios. The Bosch radar that Teslas use is cheaper, but still reportedly a couple of thousand dollars (Bosch Mid Range Radar (MRR) Sensor - System Plus Consulting). I'm sure these figures are lower in reality, due to volume discounts and corporate deals etc., but still - comparatively, the cameras are extremely cheap.

Of course, doing per-frame detection on 8 camera feeds simultaneously is a strenuous computational task. This is why it makes sense to discard colour and keep the cameras relatively low-res... otherwise the throughput requirements go through the roof. However, I'd have thought low-light cameras and HDR cameras would be useful in a lot of situations, but it'd probably require separate optics, which would get more expensive (both financially and computationally) very quickly.
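
For a rough sense of scale, here's a back-of-the-envelope throughput comparison for 8 simultaneous feeds, monochrome/modest resolution versus colour 4K. All the resolutions and frame rates below are assumed for illustration, not actual Tesla or Mobileye specs:

```python
# Back-of-the-envelope data-rate comparison for 8 simultaneous camera feeds.
# All resolutions and frame rates are assumed for illustration only.

def raw_throughput_mb_s(width, height, fps, channels, cameras=8, bytes_per_sample=1):
    """Uncompressed pixel throughput, in MB/s, summed over all cameras."""
    return width * height * fps * channels * cameras * bytes_per_sample / 1e6

mono_lowres = raw_throughput_mb_s(1280, 960, 30, channels=1)    # ~295 MB/s
colour_4k   = raw_throughput_mb_s(3840, 2160, 30, channels=3)   # ~5,972 MB/s

print(f"8 x mono 1.2 MP @ 30 fps : {mono_lowres:,.0f} MB/s")
print(f"8 x RGB 4K @ 30 fps      : {colour_4k:,.0f} MB/s "
      f"(~{colour_4k / mono_lowres:.0f}x more data)")
```

Uncompressed, the colour/high-res option is roughly 20x more data to move and classify every second.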
 
I'm guessing his point is: how do you know the cameras in use are not low-light cameras?
 
Actually Mobileye uses a low-light automotive HDR camera. I'm pretty sure he has mentioned it many times in his talks.

Secondly, Velodyne just announced their new “Velarray” LiDAR sensor, which will be demoed in the summer, with engineering samples in late 2017, mass production in Q1 2018, and a cost of "a few hundred dollars".

Thirdly, the radar that Tesla uses actually costs sub-$100. This is why many production cars from Audi and Mercedes have 6-10 radars. Radar is a generation-old tech and is cheap.

Lastly, Mobileye moved past being just a perception company a while ago. They have deep neural networks for every aspect of driving. They also have the mapping system and driving policy done.

They are so far ahead it's not even funny.
 
However, I'd have thought low-light cameras and HDR cameras would be useful in a lot of situations, but it'd probably require separate optics, which would get more expensive (both financially and computationally) very quickly.
Still, I'm interested in what you mean by "camera". Apple, for example, has been using the "dirt cheap" Largan lenses for quite a while. Why shouldn't Tesla be using the same quality of components? The question is what sensor they are using - Aptina, OmniVision, Samsung? - and what those combinations provide in terms of low-light quality.
 
Actually Mobileye uses a low-light automotive HDR camera. I'm pretty sure he has mentioned it many times in his talks.

Mobileye traditionally used a Micron MT9V022 (1/3”) monochrome Wide VGA camera sensor. This has automatic gain control so that it can adjust its gain for low light or bright light, but this is not the same as an HDR sensor. EyeQ3 improved this with a 1280x1024 (1.3MP) camera, 50-degree FOV... but still monochrome, and still WDR - not HDR.

The new Sony IMX390 would be an interesting sensor to use: Sony Commercializes the Industry's First High-Sensitivity CMOS Image Sensor for Automotive Cameras, Delivering Simultaneous LED Flicker Mitigation and High-Quality HDR Shooting

HDR would help with twilight; one of the reasons it's so hard is finding the right gain control for the camera in existing systems. There are other situations it'd help with - tricky light scenarios: reading unlit signs whilst facing bright headlights, or exiting/entering a tunnel without over-exposure. It may also help in bright sunlight.
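
For what it's worth, one common way to approximate HDR behaviour from ordinary frames is exposure fusion over a short bracketed burst. Below is a minimal sketch using OpenCV's Mertens fusion; the file names are hypothetical, and this is not a claim about how Tesla or Mobileye actually handle tunnel exits:

```python
# Exposure-fusion sketch: merge a short bracketed burst (e.g. at a tunnel exit)
# so neither the dark interior nor the bright exit is blown out.
# File names are hypothetical; this is just an OpenCV illustration.

import cv2

# Three frames of the same scene captured at different exposures.
frames = [cv2.imread(p) for p in ("tunnel_short.png", "tunnel_mid.png", "tunnel_long.png")]

# Mertens fusion needs no exposure times or camera response curve.
fused = cv2.createMergeMertens().process(frames)        # float32, roughly in [0, 1]
cv2.imwrite("tunnel_fused.png", (fused * 255).clip(0, 255).astype("uint8"))
```

In a car you'd obviously do this in hardware on the sensor/ISP rather than in post, but the principle is the same: combine multiple effective exposures into one usable frame.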

Thirdly, the radar that Tesla uses actually costs sub-$100. This is why many production cars from Audi and Mercedes have 6-10 radars. Radar is a generation-old tech and is cheap.

Short-range radars like the multi-arrays used in Mercedes for parking etc. are cheap, at around 20 dollars a pop. Long-range radars (150+ feet) are more expensive. Presumably the LRR4 in new Teslas is somewhere between a few hundred and a thousand dollars - I wouldn't know without asking directly. Either way, still more expensive than a camera.

LIDAR will come down in price. But the original question was not about tomorrow's tech; it was about today's.
 
Mobileye traditionally used a Micron MT9V022 (1/3”) monochrome Wide VGA camera sensor. This has automatic gain control so that it can adjust its gain for low light or bright light, but this is not the same as an HDR sensor. EyeQ3 improved this with a 1280x1024 (1.3MP) camera, 50-degree FOV... but still monochrome, and still WDR - not HDR.

Both WDR and HDR achieve a similar result, and can be considered the same from an end-user standpoint: both use the sensor's allowable dynamic range to expose dark areas/objects and bright areas/objects differently in order to reveal more discernible detail in each. From my photography hobby I've learned that HDR is a post-processing technique which uses software to reconstruct the final image based on several user settings. WDR is a more recent development, aimed more at video applications like surveillance and dash cams, and the WDR processing is mostly done in hardware. I believe Mobileye builds the feature into their chips so moving-object detection can be more efficient.

Sony's new sensor widens the dynamic range even further. Cellphone camera sensors nowadays do around 10 EV stops. The Sony A7S II can cover 12 stops. But this new IMX390CQV sensor achieves a whopping ~20 stops (120 dB)! It can definitely see in the dark with only starlight. Mitigating flicker is also a nice feature on the street, as car camera systems start to use higher frame rates to capture fast-moving objects and some LED light sources flicker.
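
The stop/dB conversion is easy to sanity-check; image-sensor dynamic range specs use the 20·log10 amplitude convention, so one EV stop is about 6.02 dB:

```python
# Sensor dynamic range: dB vs EV stops. Image-sensor specs use the 20*log10
# amplitude convention, so one stop (a 2x ratio) is 20*log10(2) ~= 6.02 dB.
import math

def db_to_stops(db: float) -> float:
    return db / (20 * math.log10(2))

# ~60 dB is assumed here only because it corresponds to the ~10 stops
# mentioned above for phone sensors; 120 dB is the IMX390CQV's quoted figure.
for name, db in [("typical phone sensor (~60 dB)", 60),
                 ("Sony IMX390CQV (120 dB)", 120)]:
    print(f"{name}: ~{db_to_stops(db):.0f} stops")
```

That is also why ~60 dB lines up with the roughly 10 stops quoted for phone sensors above.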
 