I always wonder how you can speak in absolute terms and what sources give you that authority (insider knowledge of Tesla's systems?), when it seems you are pulling facts out of thin air.

@wk057 has pulled raw camera feed from the AP1 camera. This was saved as part of the crash data. So obviously Tesla has a way to pull the raw camera feed.
Jason Hughes on Twitter

@wk057 has been doing this for multiple cars. The last few frames before a crash are stored in the car's black box (EDR).
Tesla Autopilot camera stores footage after a crash like a dashcam – here’s an example
Several Tesla owners claim Model X accelerated/crashed on its own but everything points to user error

Jason Hughes‏ @wk057 13 Sep 2016
@letsgoskatepool Not quite. No way to get any real stream of data from the camera to the MCU, which is why this is only a few event frames.

Mobileye doesn't allow raw camera data.
Second, there is no other chip to actually process the camera's visuals.
There is no secondary chip to run any additional machine-learning processing.
All Tesla has access to is the EyeQ3's outputs, which it converts into actuator commands.

That's it. They don't have access to the chip to do whatever they want.
This is why their miles data consists of literally nothing but GPS location.

"Our chip receives the video feed from a camera and processes this video to find vehicles, to find pedestrians, to find traffic signs and speed limit signs, to find traffic lines and also to support automated driving," Shashua says.

It's Mobileye's chip, running their software. Period.
There are no secondary Tesla chips. How is it that you don't understand?
 
Except Tesla is very likely not using conventional GPS alone, but is integrating odometry into its systems, or at least using a higher-accuracy system like WAAS. People who have pulled data from Visible Tesla are able to tell which stall their car is parked in and how far forward the car is pulled into the stall. Basically, accuracy is within 2-3 feet, whichever way Tesla is doing it.
How accurate is the GPS? • r/teslamotors

The odometry approach can lead to accuracy within an inch, but Tesla probably doesn't need accuracy that tight just to stay in a lane.
http://gizmodo.com/a-new-technique-makes-gps-accurate-to-an-inch-1758457807

The best GPS will get you an accuracy of 3-10 meters, not 2-3 ft. That's a FACT.
The GPS Tesla uses is the same one every other automaker uses. These are industry standard.
The Reddit post you linked is pure nonsense. I can't believe you even posted it.
 
The best GPS will get you an accuracy of 3-10 meters, not 2-3 ft. That's a FACT.
The GPS Tesla uses is the same one every other automaker uses. These are industry standard.
The Reddit post you linked is pure nonsense. I can't believe you even posted it.
WAAS can provide accuracy better than 1 meter (~3 ft) in the continental USA.
Wide Area Augmentation System - Wikipedia

If you augment that with some inertial sensors, it's possible to get even better accuracy.
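To make the GPS-plus-odometry idea concrete, here's a toy sketch of a 1-D complementary filter that blends a WAAS-grade GPS fix (roughly 1 m noise) with dead-reckoned wheel odometry. This is purely my own illustration of the principle, not Tesla's implementation, and the weights are arbitrary:

```python
import random

def fuse_position(gps_pos, odo_pos, gps_weight=0.02):
    """Complementary filter: trust odometry short-term, GPS long-term."""
    return gps_weight * gps_pos + (1.0 - gps_weight) * odo_pos

# Toy 1-D simulation: car drives at a constant 10 m/s.
true_pos = 0.0
est_pos = 0.0
dt = 0.1        # 10 Hz updates
speed = 10.0    # m/s from the wheel speed sensors

for _ in range(300):  # 30 seconds
    true_pos += speed * dt
    odo_pos = est_pos + speed * dt * 1.002        # odometry with a 0.2% scale error
    gps_pos = true_pos + random.gauss(0.0, 1.0)   # WAAS-grade fix, ~1 m sigma
    est_pos = fuse_position(gps_pos, odo_pos)

print(f"final position error: {abs(est_pos - true_pos):.2f} m")
```

The point is just that odometry smooths out the meter-level GPS noise between fixes, while the GPS fixes keep the odometry from drifting, which is how sub-meter (parking-stall level) repeatability is plausible without survey-grade GPS.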
 
Jason Hughes‏ @wk057 13 Sep 2016
@letsgoskatepool Not quite. No way to get any real stream of data from the camera to the MCU, which is why this is only a few event frames.

Mobileye doesn't allow raw camera data.
Second, there is no other chip to actually process the camera's visuals.
There is no secondary chip to run any additional machine-learning processing.
All Tesla has access to is the EyeQ3's outputs, which it converts into actuator commands.
He's talking about the MCU, not the vehicle controllers Tesla is using (which would have access to that video feed). The MCU receives data from Tesla's vehicle controllers through a gateway on the CAN bus. He noted there is not enough bandwidth to send the full video:
“It has to send these messages over the CAN bus very quickly to save them from the camera to the MCU,” Hughes explains, “so they have to be dumbed-down resolution so that they can actually make it to the MCU before anything bad happens to it in a crash.”
Tesla Autopilot Automatically Stores Crash-Cam Footage After a Collision | Inverse

The MCU referenced is the CID in this diagram. The EyeQ3, Tesla's chip, and the camera are under the vehicle controllers part of this diagram.
[Image: TH2.png]

Hacking a Tesla Model S: What we found and what we learned | Lookout Blog

But obviously whatever chip Tesla is using that sent those frames over the CAN bus has access to the video, not simply to some outputs for actuators, as you keep characterizing it.
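Just to put rough numbers on the bandwidth point, here's a quick back-of-the-envelope calculation. The camera resolution and frame rate are my own assumptions for illustration, not confirmed AP1 specs:

```python
# Classic CAN tops out at 1 Mbit/s; a typical vehicle bus runs at 500 kbit/s.
can_bus_bps = 500_000

# Assumed camera stream: 1280x960 grayscale, 8 bits/pixel, 36 fps
# (illustrative numbers only, not confirmed AP1 camera specs).
width, height, bits_per_pixel, fps = 1280, 960, 8, 36
raw_video_bps = width * height * bits_per_pixel * fps

print(f"raw video: {raw_video_bps / 1e6:.0f} Mbit/s")    # ~354 Mbit/s
print(f"CAN bus:   {can_bus_bps / 1e6:.1f} Mbit/s")      # 0.5 Mbit/s
print(f"shortfall: {raw_video_bps / can_bus_bps:.0f}x")  # ~700x
```

Whatever the real camera specs are, the gap is two to three orders of magnitude, which is why only a handful of heavily downscaled event frames ever make it across the gateway to the MCU.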

That's it. They don't have access to the chip to do whatever they want.
This is why their miles data consists of literally nothing but GPS location.

"Our chip receives the video feed from a camera and processes this video to find vehicles, to find pedestrians, to find traffic signs and speed limit signs, to find traffic lines and also to support automated driving," Shashua says.

It's Mobileye's chip, running their software. Period.
There are no secondary Tesla chips. How is it that you don't understand?
So you are basically saying there are no chips between the EyeQ3 and the vehicle interfaces? I'm pretty sure whatever logic Tesla is using for GPS-based lane keeping (which is used when no lane lines are visible), and for geocoded radar whitelisting, requires another chip to handle it. Also, the Autopilot visualizations drawn on the IC (instrument cluster) require some custom Tesla software.
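For what it's worth, "geocoded radar whitelisting" doesn't need exotic hardware, but it does need somewhere to run a lookup. Here's a hypothetical sketch of the idea; the names, coordinates, and data structure are mine for illustration, not Tesla's actual format:

```python
# Hypothetical fleet-built whitelist of stationary radar returns (overhead
# signs, bridges, etc.) keyed by a quantized GPS position and a radar signature.
whitelist = {
    ("37.4924,-121.9445", "overhead_bridge_A"),   # illustrative entries only
    ("37.4931,-121.9440", "sign_gantry_B"),
}

def location_key(lat, lon, precision=4):
    """Quantize coordinates to roughly 10 m cells (4 decimal places)."""
    return f"{lat:.{precision}f},{lon:.{precision}f}"

def suppress_braking(lat, lon, radar_signature):
    """Ignore a stationary radar return the fleet has already whitelisted here."""
    return (location_key(lat, lon), radar_signature) in whitelist

print(suppress_braking(37.49241, -121.94452, "overhead_bridge_A"))  # True
print(suppress_braking(37.50000, -121.95000, "overhead_bridge_A"))  # False
```

However the real system stores it, the lookup has the same shape: current position plus radar signature in, brake-or-ignore decision out, and that logic has to run on some processor on the Tesla side of the system.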

Just looking at the Audi zFAS board, which also uses the EyeQ3, there are plenty of other chips that can do processing (the two huge Tegra X1 chips being the most obvious).
[Image: EyeQ3-on-Audi-zFAS-board.png]


Also, the EyeQ3 block diagram has a video out (lower right corner). What is the point of the video out if automakers are never allowed to use it?
[Image: EyeQ3.png]

Exclusive: The Tesla AutoPilot - An In-Depth Look At The Technology Behind the Engineering Marvel - Page 5 of 6
 
FSD needs three types of data.

1) HD data: what Mobileye is doing with REM. A map with exact lanes, lane markings, intersections, traffic lights, road signs, light poles, road barriers/edges, and landmarks.

That type of data requires raw pixel data, which wasn't available to Tesla in AP1.

To play devil's advocate: Why is having HD map data an absolute requirement for full self-driving? The obvious counterpoint is that humans are able to drive without them.

I could argue that any *critical dependency* on HD map data for a self-driving system is a good indication that it will never work that well for all the corner cases. Lane markings change all the time on a road (e.g., road construction). Signs come and go. Barriers come and go. I'd hate to rely on those from a database.

Humans are able to drive with just their eyes and a brain containing "low res" map data (i.e., rough knowledge of where roads and places are).
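To be concrete about what "HD map data" means in this debate, a lane-level record might look something like the sketch below. The structure and values are purely illustrative, not Mobileye's actual REM format:

```python
# Hypothetical lane-level map record, just to make "HD map data" concrete.
# Field names and values are illustrative, not any vendor's real schema.
hd_map_tile = {
    "tile_id": "road_segment_0421",
    "lanes": [
        {
            "lane_id": 0,
            "centerline": [(37.4920, -121.9450), (37.4925, -121.9448)],  # (lat, lon)
            "width_m": 3.6,
            "left_marking": "dashed_white",
            "right_marking": "solid_white",
        },
    ],
    "traffic_lights": [{"position": (37.4926, -121.9447), "controls_lane": 0}],
    "speed_limit_kph": 105,
}

# The maintenance problem in a nutshell: every construction zone or repaint
# invalidates records like this until the segment is re-surveyed.
print(len(hd_map_tile["lanes"]), "lane(s) in tile", hd_map_tile["tile_id"])
```

That last comment is exactly the corner-case worry: the richer the map, the more there is to go stale.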
 
...Mobileye doesn't allow raw camera data...
They don't have access to the chip to do whatever they want...

According to Tesla's statement on the divorce:

"MobilEye had knowledge of and collaboration with Tesla on Autopilot functionality for the past 3 years.

Tesla has been developing its own vision capability in-house for some time with the goal of accelerating performance improvements. After learning that Tesla would be deploying this product, MobilEye attempted to force Tesla to discontinue this development, pay them more, and use their products in future hardware.

In late July when it became apparent to MobilEye that Tesla planned to use its own vision software in future Autopilot platforms, MobilEye made several demands of Tesla in exchange for continuing supply of first generation hardware, including:
  • Raising the price of their product retroactively
  • Demanding an agreement to extremely unfavorable terms of sale and
  • Demanding that Tesla not use data that was collected by its vehicles’ cameras for any purpose other than helping MobilEye develop its products
  • Requiring that Tesla collaborate on Tesla Vision and source future vision processing from them until at least level 4"

It sounds like MobilEye knew that Tesla could do its own "vision software" and could collect data from MobilEye's camera, so it laid out terms and conditions for Tesla to agree to: restricting that data collection and sharing Tesla Vision with MobilEye.

It sounds like the quote above does not confirm that Tesla is so technologically challenged that it is unable to obtain raw data from MobilEye's camera; rather, MobilEye legally wants to impose terms and conditions which Tesla does not accept.

Again, it's the legality, not the capability.
 
He's talking about the MCU, not the vehicle controllers Tesla is using (which would have access to that video feed). The MCU receives data from Tesla's vehicle controllers through a gateway on the CAN bus. He noted there is not enough bandwidth to send the full video:
“It has to send these messages over the CAN bus very quickly to save them from the camera to the MCU,” Hughes explains, “so they have to be dumbed-down resolution so that they can actually make it to the MCU before anything bad happens to it in a crash.”
Tesla Autopilot Automatically Stores Crash-Cam Footage After a Collision | Inverse

The MCU referenced is the CID in this diagram. The EyeQ3, Tesla's chip, and the camera are under the vehicle controllers part of this diagram.
[Image: TH2.png]

Hacking a Tesla Model S: What we found and what we learned | Lookout Blog

But obviously whatever chip Tesla is using that sent those frames over the CAN bus has access to the video, not simply to some outputs for actuators, as you keep characterizing it.


So you are basically saying there are no chips between the EyeQ3 and the vehicle interfaces? I'm pretty sure whatever logic Tesla is using for GPS-based lane keeping (which is used when no lane lines are visible), and for geocoded radar whitelisting, requires another chip to handle it. Also, the Autopilot visualizations drawn on the IC (instrument cluster) require some custom Tesla software.

Just looking at the Audi zFAS board, which also uses the EyeQ3, there are plenty of other chips that can do processing (the two huge Tegra X1 chips being the most obvious).
[Image: EyeQ3-on-Audi-zFAS-board.png]


Also, the EyeQ3 block diagram has a video out (lower right corner). What is the point of the video out if automakers are never allowed to use it?
[Image: EyeQ3.png]

Exclusive: The Tesla AutoPilot - An In-Depth Look At The Technology Behind the Engineering Marvel - Page 5 of 6

You can't run a safety-critical system on an off-the-shelf GPU. They have to be specially engineered with safety and fault tolerance in mind. Also, you can't run a safety-critical application alongside other applications. You can't use the IC GPU or the 17-inch display GPU, because when those GPUs crash because you went to the wrong website in the browser, guess what happens to your car? It crashes.

All of the car's actuator algorithms are done on the EyeQ3 SoC using Mobileye's SDK.
The GPS logging and radar-signature blocklist program also run on Mobileye's SoC.
It's a complete system.

http://www.movon.co.kr/download/board.asp?board=blog&uid=822

Lastly, Mobileye's demand that Tesla not use their camera data has nothing to do with terms, but rather with IP.
It also has nothing to do with the raw camera feed, but with the data processed by the deep-learning algorithms on the SoC.

The fact is that Tesla's miles data consists only of GPS logging and the radar/GPS blocklist.
It's not under debate. It's fact. It's settled.
I don't discuss speculation, I discuss facts.
 
You can't run a safety-critical system on an off-the-shelf GPU. They have to be specially engineered with safety and fault tolerance in mind. Also, you can't run a safety-critical application alongside other applications. You can't use the IC GPU or the 17-inch display GPU, because when those GPUs crash because you went to the wrong website in the browser, guess what happens to your car? It crashes.

All of the car's actuator algorithms are done on the EyeQ3 SoC using Mobileye's SDK.
The GPS logging and radar-signature blocklist program also run on Mobileye's SoC.
It's a complete system.

http://www.movon.co.kr/download/board.asp?board=blog&uid=822

Lastly, Mobileye's demand that Tesla not use their camera data has nothing to do with terms, but rather with IP.
It also has nothing to do with the raw camera feed, but with the data processed by the deep-learning algorithms on the SoC.

The fact is that Tesla's miles data consists only of GPS logging and the radar/GPS blocklist.
It's not under debate. It's fact. It's settled.
I don't discuss speculation, I discuss facts.

In all those sentences you still didn't cite a source for your claim that Tesla was not collecting and using any data from the cameras.

In fact, stopcrazy cited info suggesting that Tesla was collecting data, and that one of the dealbreaker points of the negotiations was that Mobileye wanted to limit that collection when it came time to extend/renegotiate their arrangement. Try to limit your factual claims to statements for which you have some factual basis. That is not one of them.
 
You can't run a safety-critical system on an off-the-shelf GPU. They have to be specially engineered with safety and fault tolerance in mind. Also, you can't run a safety-critical application alongside other applications. You can't use the IC GPU or the 17-inch display GPU, because when those GPUs crash because you went to the wrong website in the browser, guess what happens to your car? It crashes.

All of the car's actuator algorithms are done on the EyeQ3 SoC using Mobileye's SDK.
The GPS logging and radar-signature blocklist program also run on Mobileye's SoC.
It's a complete system.
What "GPU" are you referring to? Note I'm not referring to the Tegra 3/4s (which are not GPUs either) handling the CID and the IC when I say there is a Tesla chip between the EyeQ3 and actuators, but a chip that falls under the "vehicle controllers" part the diagram. The CAN bus does not have enough bandwidth to handle a full video stream, so any video processing has to happen before the CAN bus (the rear view camera on the other hand may be connected directly to the CID).

I don't have a picture of whatever board Tesla is using, but judging from the Audi board, even something like the X1s are not just "GPUs". They incorporate CPU cores just like the EyeQ3 chip: 4 ARM Cortex-A57 CPU cores + 4 Cortex-A53 CPU cores, plus 256 Maxwell GPU cores.
Tegra - Wikipedia

The Parker SoC in the PX2 that Tesla is using right now for its Tesla Vision AI uses a similar architecture:
4 ARM Cortex-A57 CPU cores + 2 Denver 2 CPU cores + 256 CUDA GPU cores.
Nvidia reveals new details on its Drive PX2 platform, Parker SoC - ExtremeTech
So obviously this is reliable enough to do the processing for semi-autonomous driving, since Tesla is using these cores to do exactly that for AP2!

Let's compare to Mobileye's chip which has 4 MIPS cores and 4 VMP cores:
Exclusive: The Tesla AutoPilot - An In-Depth Look At The Technology Behind the Engineering Marvel - Page 5 of 6

The EyeQ3 uses the MIPS 1004K (per the block diagram), which is based on the 34K previously used in the EyeQ2. The MIPS 1004K and 34K are not some kind of special high-fault-tolerance architecture; they're actually used in a lot of set-top boxes, for example in this one:
http://www.edn.com/Home/PrintView?contentItemId=4442600
Here's the applications listed from the datasheet for the MIPS 1004K architecture:
Key Applications
Digital Home:
• Enhanced set-top boxes (STBs)
• HD digital consumer multimedia
• Residential gateways (RGWs)
Enterprise Communications Infrastructure
Network Attached Storage (NAS)
Office Automation/Multi-Function Products (MFPs)
• Medium/large office print/fax/scan
https://imagination-technologies-cl...onaws.com/documentation/MIPS32_1004K_1211.pdf

http://www.movon.co.kr/download/board.asp?board=blog&uid=822

Lastly Mobileye demand that tesla not use their camera data has nothing to do with terms but rather IP.
also has nothing to do with raw camera feed but the data processed by the deep learning algorithms on the SOC.

The fact is that tesla miles data consists only of gps logging and radar/gps blocklist.
Its not under debate. It fact. its settled.
I don't discuss speculation, i discuss facts.
Seems like you are changing the subject. My original point was only about the raw camera feed and whether Tesla has access to it. I showed the Mobileye chip has a video out. Your link is to the older EyeQ2, but even that chip has a video out, look at page 3.

And once again you resort to the usual tactic of stating what you are saying is "fact" when you have absolutely no evidence.
 
...............

Seems like you are changing the subject. My original point was only about the raw camera feed and whether Tesla has access to it. I showed the Mobileye chip has a video out. Your link is to the older EyeQ2, but even that chip has a video out, look at page 3.

And once again you resort to the usual tactic of stating what you are saying is "fact" when you have absolutely no evidence.

What are you talking about?

I have always maintained that the only facts we have are these:

Tesla fleet learning consists only of GPS logging and radar/GPS logging.

GPS-based maps are not suitable for anything above Level 2 autonomy.

The above are facts. Frankly, there is no point in further discussion because the rest is irrelevant.

This holds regardless of whether Tesla has access to the raw camera feed or not.
However, they do not, based on the statement from Tesla themselves.
They referred to the abstracted data from the cameras, not the raw camera feed.
Lastly, they don't even have a processor to process that info.

The difference between Tesla and other carmakers is that the others load their cars with excess sensors and processing units.
That is something Elon would never do, in order to save cost.

In fact, the 2015 Mercedes-Benz has more sensors than AP2, for example.
The same can be said about Audi cars.

Hence you can't look at their board and say, "hey, they have this, so Tesla must also."
That's a very naive way to come to a conclusion.

And how did AP2 get into this discussion...
 
In all those sentences you still didn't cite a source for your claim that Tesla was not collecting and using any data from the cameras.

In fact, stopcrazy cited info suggesting that Tesla was collecting data, and that one of the dealbreaker points of the negotiations was that Mobileye wanted to limit that collection when it came time to extend/renegotiate their arrangement. Try to limit your factual claims to statements for which you have some factual basis. That is not one of them.

Tesla has no GPU to process the data.
Regardless, we know they are not processing any further data, because they have bragged about everything they have done.
We know they collect GPS logs and radar logs.
Whether it's fleet learning or shadow mode, we know.

When they start collecting data from cameras and mapping out every lane, traffic light, road sign, road marking, intersection, etc. in the world,
we will know about it, because Elon won't hesitate to brag about it.
 
Tesla has no GPU to process the data.
Regardless, we know they are not processing any further data, because they have bragged about everything they have done.
We know they collect GPS logs and radar logs.
Whether it's fleet learning or shadow mode, we know.

When they start collecting data from cameras and mapping out every lane, traffic light, road sign, road marking, intersection, etc. in the world,
we will know about it, because Elon won't hesitate to brag about it.
So in other words you are just speculating. Okay got it.
 
Tesla and Mobileye worked together, in unclear proportions, to develop and roll out AP1. Since the split, Tesla, working on its own, has come up with AP2 and the functionality in the most recent update.

What's the best that Mobileye has done since then without Tesla, that is on the streets and can be purchased now?
 
What's the best that Mobileye has done since then without Tesla, that is on the streets and can be purchased now?

I'm not sure Mobileye's success can be measured by what's on the streets right now. Neither Tesla nor Mobileye have a self-driving car on the road as of today.

However, the split was really over the choice of technology and partners for a full self-driving system: Tesla wanted to use Tesla Vision on GPU, whereas Mobileye wanted them to wait and use EyeQ4. Everything else was just a catalyst. Although, I have to say, I understand where Mobileye were coming from when they expressed concern over Tesla's "gung-ho" attitude to safety. I mean, think about it for a second: Tesla sell premium cars with a feature (labelled as beta) that could easily kill you and others around you. My AP2 car has definitely attempted it. It's not for the faint of heart!

One thing is for certain... Prof. Shashua is a brilliant mind - perhaps the best when it comes to this area. His involvement in academia no doubt keeps him sharp and up-to-date with the latest breakthroughs... but more importantly, his communication via numerous technical talks, interviews and press conferences is extremely reassuring. I might not know the maths behind the Mobileye system, but I feel like I understand the general problems, the overall building blocks of their solutions, and where they're at with it right now. They have 1000+ people working on it, with a few industry luminaries leading them on a very focused mission. Tesla have the guy that made Swift leading some random PhD graduates.

It's annoyingly human of me, but after listening to hours and hours of Prof Shashua's talks, I actually have trust in Mobileye, because it's clear they know what they're doing. Add to this my own experience of the general "feeling" of driving AP1 (safe, reliable, steady) vs AP2 (nerve-wracking, volatile) and, well, I'd probably buy Mobileye if I could.

I have no idea what the Tesla approach is... and I do wish they'd tell us. It'd make me more confident in them - and the system as a whole - if I knew what was going on over there in Fremont, or at least what the general plan is (is there one?!!)
 
I'm not sure Mobileye's success can be measured by what's on the streets right now. Neither Tesla nor Mobileye have a self-driving car on the road as of today.

However, the split was really over the choice of technology and partners for a full self-driving system: Tesla wanted to use Tesla Vision on GPU, whereas Mobileye wanted them to wait and use EyeQ4. Everything else was just a catalyst. Although, I have to say, I understand where Mobileye were coming from when they expressed concern over Tesla's "gung-ho" attitude to safety. I mean, think about it for a second: Tesla sell premium cars with a feature (labelled as beta) that could easily kill you and others around you. My AP2 car has definitely attempted it. It's not for the faint of heart!

One thing is for certain... Prof. Shashua is a brilliant mind - perhaps the best when it comes to this area. His involvement in academia no doubt keeps him sharp and up-to-date with the latest breakthroughs... but more importantly, his communication via numerous technical talks, interviews and press conferences is extremely reassuring. I might not know the maths behind the Mobileye system, but I feel like I understand the general problems, the overall building blocks of their solutions, and where they're at with it right now. They have 1000+ people working on it, with a few industry luminaries leading them on a very focused mission. Tesla have the guy that made Swift leading some random PhD graduates.

It's annoyingly human of me, but after listening to hours and hours of Prof Shashua's talks, I actually have trust in Mobileye, because it's clear they know what they're doing. Add to this my own experience of the general "feeling" of driving AP1 (safe, reliable, steady) vs AP2 (nerve-wracking, volatile) and, well, I'd probably buy Mobileye if I could.

I have no idea what the Tesla approach is... and I do wish they'd tell us. It'd make me more confident in them - and the system as a whole - if I knew what was going on over there in Fremont, or at least what the general plan is (is there one?!!)
I believe their efforts can certainly be measured by what is on the roads today: non-Tesla Level 2 systems using Mobileye.

As for Tesla's approach they talked about it in this presentation last year:
Tesla reveals new details of its Autopilot program: 780M miles of data, 100M miles driven and more
MIT Technology Review Events Videos - Delivering on the Promise of Autonomous Vehicles
 