
Mobileye will launch a self-driving taxi service in Israel in 2019

Even in the Mobileye presentation the lane lines are not very realistic (beyond being perspective-corrected and blended).

Also, I do not see why texture or scenery would or should matter to an autonomous system, so the road, with all its markings, could be fully computer generated.


I think the lines were supposed to be faded to replicate Japan's deceleration lanes? Not sure. But the environment does matter, because you can overfit or underfit your model to scenes from the sim, creating behaviors you don't want when you get into the real world.
 
The only thing Mobileye was ever good at is talk. Only two years ago it was still employing hundreds of people to manually annotate images for its vision system, lol, and was dead set against the neural net machine learning approach employed by Waymo and Tesla. The reason is that it did not have the technology and did not know how to build it (Tesla wouldn't share it with them). Intel was so far behind Nvidia in AI, and so clueless, that it paid a stupidly high price to buy the company. They will get absolutely nowhere.
Mobileye Bullish on Full Automation, but Pooh-Poohs Deep-Learning AI for Robocars

Also keep in mind that a demo is easy to do. Anyone could put out a demo car to run a specific route for a few minutes now. Waymo had a demo of a self-driving car without a steering wheel many years ago. What is tough is putting real cars in customers' hands for them to drive on the road every day. That announcement is nothing but more good PR work.
 
Last edited:

Huh?

Mobileye uses deep NNs for its vision system. That article takes what Amnon said out of context; he was talking about end-to-end NNs vs. modular NNs. Secondly, training NNs for perception (a method called supervised learning) involves annotating images; everyone does it.

The reason is that it did not have the technology and did not know how to build it (Tesla wouldn't share it with them). Intel was so far behind Nvidia in AI, and so clueless, that it paid a stupidly high price to buy the company. They will get absolutely nowhere.

lol, I can't decide if I should take this post seriously or not.
Mobileye is the only company with a Level 3-5 capable vision system and HD maps in production today. In my Trent voice: "no competitor has this!"
 
Don't be fooled by its propaganda. PowerPoint slides and "demos" mean nothing; anyone can do those. Mobileye was nothing but a company providing vision systems to automakers for implementing driver-assist features. Its effort had actually been to slow down autonomous driving implementation, since it had the driver-assist market cornered but did not have anything for autonomous driving. It was even lobbying government agencies to put more restrictions on autonomous cars. The main reason it had a falling out with Tesla is that Tesla was pushing hard to develop its own neural net and did not want to share the technology. It changed its tune only recently, when it saw the writing on the wall. Show me one article from before 2016, either internal or from outside sources, that says anything about Mobileye developing neural nets, and I will keep my mouth shut on this thread. Intel was far behind in AI hardware and software too; that it paid such a price for Mobileye only showed its desperation. Now it's pretty much the blind leading the blind, and I can't see it going anywhere (except producing those PowerPoints and demo cars).





 
Last edited:
 

Tesla's Autopilot System - MobilEye

Mobileye is literally the only company that doesn't hype things and only talks about what is IN production. They are literally the anti-hype company, the anti-demo company.

There will be millions of Level 3 and Level 4 consumer cars in 2019-2021, and most of them will use Mobileye tech. They have HD maps in production right now, automatically generated and updated. How are they slowing down autonomous implementation? I don't understand your argument. What restrictions have they lobbied for? Those are some huge accusations. And no, the falling out was not because Tesla wouldn't share its NN. You also realize that all Nvidia has are powerful GPUs and a demo API called DriveWorks; no car slated for release runs Nvidia software.

Mobileye literally has no competition in this space. As you yourself said, they have the driver-assist market cornered, and they plan on running the table on the L3/L4 highway market. They are going for all the infinity stones.

Jan 2015 Video

 
Last edited:
The article clearly said, "The system Tesla EVs use to make the autopilot 'learn' over time is an implementation of their own design and not related to Mobileye." When Tesla was developing its NN machine learning algorithm, it wanted access to the cameras' raw data, but Mobileye refused unless Tesla shared the development with it. Tesla had to do it the hard way until AP2 arrived with its own camera system. As I said before, Mobileye was scared that its driver-assist market would be disrupted by the new autonomous driving technology. The seeds were sown long before the actual separation and the accusations from both sides. In 2015 George Hotz revealed in an interview that Elon was recruiting him and had promised millions of dollars in bonuses as soon as Tesla got rid of Mobileye. People in the industry knew what was going on, although a lot of people who got their info from the media believed Mobileye's version that it was the one that wanted the separation. Like "you didn't fire me, I quit first." ;) A smart move, though, as it saved their stock price.

As for your video, it's nothing but spin that what they had was merely a different approach from Google's deep learning approach (or Tesla's, or apparently Nvidia's at the time too, judging from the comments under the video). It's exactly what was said in the 2016 article I linked: Mobileye did NOT have DL and did NOT think (at least publicly) that it was needed. I'm not sure whether it has changed direction in the last couple of years, but there is no way it could catch up in a few short years. Google already had tens of millions of machine-learning miles and Tesla more than a billion. Mobileye is going to put a few hundred cars on the road in 2019? Ha!
 
Last edited:
You said Mobileye didn't use neural networks and that I should find a video of them talking about it before 2016. I showed you a Jan 2015 video of them listing some of the deep neural networks in their EyeQ3 chip, which take 5% of the chip. It's basically case closed.

And about the 2016 article you linked: it's based on this video, in which Amnon is taken out of context.

Start at the 4:00 mark.
 
You need to watch the Jan 2015 video more carefully. He admitted that what they were doing was the opposite of what Google did. Google was doing machine learning and recorded everything for cars to follow in the future; Mobileye did not record anything, only using annotated images for the camera to match against in real time. That's just old-fashioned image recognition, not deep learning. In the 2016 video he only "talked" about the "new technology" of deep learning, a technology that Google, Tesla, and Nvidia already had at the time but they did not. As for Intel, it is now behind AMD and Nvidia in CPU/GPU performance and has little AI processor experience. I cannot see how two companies that have both been behind can suddenly become a leader when combined. Really can't.
 
Last edited:
  • Like
Reactions: J1mbo
@CarlK Here is a question, though, setting aside the debate over techniques: if Mobileye has solved perception to the level of car-responsible driving with EyeQ4, what does it even matter how they did it? Even if others catch up with whatever techniques, they would still be playing catch-up, except for Waymo, which may be ahead or at a similar level.

If EyeQ3 and EyeQ4 are soon really powering car-responsible driving (as opposed to driver-responsible) in production products, that will be no mean feat. They seem to be doing production REM mapping already too.

That is why I respect Waymo too: they have released a pilot product that takes on actual customers, not employees. How they all did it becomes a bit irrelevant once they have done it.
 
Last edited:

It only says it has solved it, but everyone says the same thing, and lots of companies have had demo cars out there already too. It's a huge stretch to think the announcement in the OP means Mobileye is a leader, or even close to being a leader, in this area.

Some seem to be using EyeQ3/EyeQ4 to define autonomous driving capability, but those are just iterations of Mobileye's vision system. Intel still needs to add processors, and Intel or its end users need to add the NN deep-learning algorithms. We already know Intel and its customers like VW and BMW are way behind Google and Tesla in AI technology. Both Intel and Mobileye are rather like ICE companies switching to EVs: old leaders forced into a new technology late in the game. You can't predict the future, of course, but the future does not look that promising for the Intel/Mobileye combo, imo.
 
Last edited:
@CarlK Certainly Mobileye is foremost about solving the vision part of the equation. Audi uses sensor fusion and other chips in its Level 3 system, for example, but vision, and with EyeQ4 cross-traffic from all directions and HD mapping, seems to be solved, unless one believes Mobileye is selling faulty chips for production cars.

I agree the game is still on for car-responsible driving in various conditions, i.e. driving policy and such things.
 
...only using annotated images for the camera to match against in real time. That's just old-fashioned image recognition, not deep learning. In the 2016 video he only "talked" about the "new technology" of deep learning.

Huh, that is exactly what a neural network is. And deep learning is simply a neural network with more than one hidden layer; don't let the term "deep learning" confuse you.

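Since the distinction is purely about depth, here is a minimal PyTorch sketch; the layer sizes are arbitrary illustrations:

```python
import torch.nn as nn

# A "shallow" network: a single hidden layer.
shallow = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),  # hidden layer
    nn.Linear(128, 10),              # output layer
)

# A "deep" network: the same idea with more than one hidden layer.
# That extra depth is all the word "deep" adds.
deep = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # hidden layer 1
    nn.Linear(256, 128), nn.ReLU(),  # hidden layer 2
    nn.Linear(128, 10),              # output layer
)
```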


There are two categories: supervised learning and unsupervised learning. Everyone uses supervised deep learning for perception. Everyone. They either collect their own data by deploying fleets around the world or use resources provided by third parties. The same goes for labeling the data: they either hire their own labelers or outsource it. Tesla outsources its image tagging.

Example: say you want to build a network that can tell the difference between a cat and a dog.

You set up three folders:
  • Training set (1 million pictures)
  • Validation set (5,000 pictures)
  • Test set (100,000 pictures)
The training set contains pictures of dogs and cats with the correct labels; you use it to train the network.

The test set contains pictures of dogs and cats that the network has never seen before; you use it to evaluate the network's accuracy and performance.

The validation set is held out from the training set (5,000 pictures, about 5% of the test set's size). You use it for quick evaluations of the model's performance so you can adjust things like its hyperparameters.


So you feed the network millions of pictures of cats and dogs while telling it which picture is a cat and which is a dog. The network then learns, via a cost function, weights, and biases (with SGD as the learning algorithm), what a cat and a dog look like. The output will then be, for example, 1 for cats and 0 for dogs.

Then you run it on your validation set and then your test set.

That's what deep learning is. In self-driving cars, companies collect pictures of driving scenes, label them, and feed them to a CNN.
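Here is a minimal sketch of that whole workflow in PyTorch. The folder layout (data/train, data/val, data/test), the tiny CNN, and the hyperparameters are illustrative assumptions, not any company's actual pipeline:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

# ImageFolder infers labels from subfolder names, e.g. data/train/cat, data/train/dog.
train_set = datasets.ImageFolder("data/train", transform=transform)
val_set = datasets.ImageFolder("data/val", transform=transform)
test_set = datasets.ImageFolder("data/test", transform=transform)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64)
test_loader = DataLoader(test_set, batch_size=64)

# A tiny CNN: two conv blocks, then a linear classifier with 2 outputs (cat, dog).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),
)

loss_fn = nn.CrossEntropyLoss()                           # the "cost function"
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # SGD as the learning algo

def accuracy(loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

for epoch in range(5):
    model.train()
    for images, labels in train_loader:   # labeled examples drive the weight updates
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    # Validation accuracy guides hyperparameter tweaks (learning rate, depth, etc.).
    print(f"epoch {epoch}: val acc = {accuracy(val_loader):.3f}")

# The test set, never seen during training, gives the final performance estimate.
print(f"test acc = {accuracy(test_loader):.3f}")
```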
 
Last edited:
  • Informative
Reactions: electracity
What you described is image recognition, pretty much what Mobileye had been doing for ages, not NN deep learning. Deep learning needs the ability to do trial and error over many runs from many different vehicles to tweak the neural net (like our brains do). How could Mobileye have been using deep learning when it says it records nothing (in the 2015 video)? It did not.

BTW, if you begin every reply with "huh" or "lol", don't expect me to ever respond again.
 
Last edited:
More details on next year's deployment and on the EyeQ5, which begins sampling next month.


https://www.eetimes.com/document.asp?doc_id=1333990

This, said Shashua, is linked to “our recently announced deal with Volkswagen” to roll out Israel’s first ride-hailing service next year using self-driving cars.

Volkswagen and Intel/Mobileye are forming a joint venture with Israeli car importer Champion Motors. Volkswagen will provide electric vehicles and Mobileye its autonomous driving technology. Champion Motors, meanwhile, will be responsible for fleet logistics and infrastructure of the robo-taxi.

Shashua painted the big picture of Intel/Mobileye’s autonomous vehicle roadmap, showing it not only offering perception chips, but a complete subsystem, full hardware, the company’s home-grown radars and lidars, and software technologies required for the “moving people business” in the Volkswagen deal.


It also looks like Mobileye is creating its own in-house radar and lidar.

Meanwhile, their complete subsystem and complete hardware system will be ready by mid-2019 to be offered to partners.

The complete subsystem is closed hardware that handles all perception inputs and outputs an environmental model.

The complete hardware system includes the complete subsystem but additionally performs autonomous driving, with its output in control of the vehicle. "This would be not only the perception," said Shashua. "This will be fusion with other sensors; this will be driving policy, and all the decision making around merging into traffic; and this will be mapping — all the building, using and localizing into the map; and also functional safety and fail operational."

The entire AV Kit, which includes multiple PCBs and multiple closed and open EyeQ5 chips, will be ready by early 2020.
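To make the layering concrete, here is a rough sketch of the split being described; the class and field names are my own illustrative assumptions, not Mobileye's API:

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentalModel:
    """What the closed perception subsystem outputs."""
    objects: list = field(default_factory=list)         # tracked road users
    lanes: list = field(default_factory=list)           # lane geometry
    traffic_lights: list = field(default_factory=list)  # light states

class PerceptionSubsystem:
    """Closed box: all perception inputs in, environmental model out."""
    def process(self, camera_frames) -> EnvironmentalModel:
        ...

class FullDrivingSystem:
    """Wraps the subsystem and adds fusion, mapping, and driving policy."""
    def __init__(self, perception: PerceptionSubsystem):
        self.perception = perception

    def step(self, camera_frames, radar, lidar):
        env = self.perception.process(camera_frames)
        # Fuse radar/lidar into env, localize into the map,
        # then run driving policy and output vehicle controls.
        ...
```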


It looks like Mobileye is making a play for all the marbles.

[Attached image: AV Kit architecture diagram]
 
Last edited:
  • Informative
Reactions: lunitiks
I am surprised; it seems like they are walking back their amazing driving policy system. The article talks a lot about how the automaker could implement that part themselves.

Is it because it is not ready today, or is it so bad that automakers are begging for access to run their own driving policy inside the chip?
I mean, if their driving policy system requires radio transmitters to know the state of traffic lights, I would not trust its quality either.

So they are sampling the bare chip next month and are supposed to be ready by mid next year for partners to start integrating.
Meanwhile, Tesla in Q1/Q2 is already delivering its "full AV solution" (or what it believes is needed), with everything integrated, to customers.
 
I am surprised; it seems like they are walking back their amazing driving policy system. The article talks a lot about how the automaker could implement that part themselves.

How exactly are they walking anything back? The chips come with the policy; it's up to you whether you use it. They also offer a completely open chip, like Xavier.

Is it because it is not ready today, or is it so bad that automakers are begging for access to run their own driving policy inside the chip?
I mean, if their driving policy system requires radio transmitters to know the state of traffic lights, I would not trust its quality either.

Mobileye has the best driving policy, although it is not as well validated as Waymo's today.

Also notice how stable the EyeQ4 output view is while they are driving.

An open EyeQ5 has been the plan since 2016. And no, driving policy is planning and control; perception is different and deals with sensing. The car makes use of V2X. In fact, Tesla has internal V2X development today. The incident happened because the TV crew's cameras mounted on top were broadcasting wirelessly and interfered with the V2X antenna. The EyeQ4 chip correctly saw the light as red, but the system is currently told to use just the V2X. The incident happened right outside their garage, by the way, which makes sense.
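For what it's worth, here is a toy sketch of a stack that cross-checks camera perception against a V2X-style traffic light message instead of trusting V2X alone; the message fields and the arbitration rule are assumptions for illustration, not Mobileye's actual design:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class LightState(Enum):
    # Ordered from most to least conservative.
    RED = 0
    YELLOW = 1
    UNKNOWN = 2
    GREEN = 3

@dataclass
class SpatMessage:
    """Hypothetical V2X Signal Phase and Timing report for one signal."""
    state: LightState
    signal_quality: float  # 0..1; degraded by RF interference

def resolve_light(camera: LightState, v2x: Optional[SpatMessage]) -> LightState:
    """Prefer agreement; on conflict or a bad link, take the cautious reading."""
    if v2x is None or v2x.signal_quality < 0.5:
        return camera                      # V2X absent or jammed: trust vision
    if v2x.state == camera:
        return camera                      # both sources agree
    # Disagreement: pick the more conservative state (red beats green).
    return min(camera, v2x.state, key=lambda s: s.value)

# In the demo incident described above, vision saw RED while the interfered
# V2X channel reported otherwise; a cross-check like this would stop the car.
assert resolve_light(LightState.RED, SpatMessage(LightState.GREEN, 0.9)) == LightState.RED
```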

So they are sampling the bare chip next month and are supposed to be ready by mid next year for partners to start integrating.
Meanwhile, Tesla in Q1/Q2 is already delivering its "full AV solution" (or what it believes is needed), with everything integrated, to customers.

The difference is that Mobileye already has mature software, which is where Tesla and a lot of the other chip competitors (Nvidia, etc.) are lacking.
 
The car makes use of V2X. In fact, Tesla has internal V2X development today. The incident happened because the TV crew's cameras mounted on top were broadcasting wirelessly and interfered with the V2X antenna. The EyeQ4 chip correctly saw the light as red, but the system is currently told to use just the V2X.

What is this V2X you are referencing? Especially the part in bold.
I have seen the boards and chips in the Autopilot computer and center console and have never seen anything that would allude to V2X being in the car.
There is also no FCC listing for any such antennas.
 

Many companies talk about V2X, which is another way of saying their system will never be available; there have been discussions on this in other threads too. Even if every car made starting today were equipped with V2X, not even 50% of the cars on the road would have the capability a decade from now. How could your driverless system work if it relies on that, when most cars you see on the road cannot communicate with you? The same goes for infrastructure: you can't have your system rely on it if even 1% of the places you go do not have it. "Yes, I'm fine with the 99 equipped intersections; I'll just run the light at the last one." That's why you never hear Tesla, which is serious about near-term FSD implementation, talk about it.
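A back-of-envelope sketch of that fleet-turnover argument; the figures are rough, illustrative global numbers, not data:

```python
# Even if 100% of new cars shipped with V2X starting today, a decade of
# sales covers only part of the fleet. All numbers are rough assumptions.
global_fleet = 1.4e9    # vehicles on the road today, assume none have V2X
annual_sales = 80e6     # new vehicles per year, assume all get V2X from day one
years = 10

equipped = annual_sales * years                     # ignores scrappage
total = global_fleet + annual_sales * years * 0.25  # assume modest net fleet growth
print(f"optimistic V2X share after {years} years: {equipped / total:.0%}")
# -> about 50%, i.e. roughly half the traffic around you still cannot talk to your car
```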
 
  • Like
Reactions: J1mbo