Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Mobileye will launch a self-driving taxi service in Israel in 2019

Many companies talk about V2X, which is another way of saying their system will never be available. There were discussions on this in other threads too. Even if every car made starting today were equipped with V2X, fewer than 50% of the cars on the road would have the capability within the next decade. How could your driverless system work if it relies on that, when most cars you see on the road cannot communicate with you? The same goes for the infrastructure. You can't have your system rely on it if even 1% of the places you go lack it. You can't say "I'm fine with 99 intersections, I'll just run the last light." That's why you never hear Tesla, which is serious about FSD implementation in the near future, talk about it.

I think V2X infrastructure can roll out faster than V2X proliferates in cars, but I agree it is overall further in the future, and any early autonomy must work without it if it is to see wider use.

That said, MobilEye already sells a functional visual traffic-light detection system and is publicly prototyping a full self-driving system to boot. Fixating on a test of a V2X feature as a sign of their state of generalization compared to Tesla seems iffy to me, since we know even less of whatever full self-driving state Tesla might have. We haven't even seen Tesla traffic-light detection. MobilEye sells chips with one.

This is similar to @strangecosmos talking of Waymo cars hesitating in some left turns. Maybe so, but they are still showing results far ahead of what others are showing overall, as these cars are in pilot production use with actual customers. It seems iffy to focus on some small detail and extrapolate too many concerns from it when the overall picture is so far ahead of what competitors like Tesla have shown so far.

Is there a chance Tesla is secretly much further along than they show, or that they could catch up? Sure, definitely. But both Waymo and MobilEye are already showing autonomous driving well beyond anything Tesla has ever shown. I would say they are serious about full self-driving in a nearer future than Tesla at this time.
 
That said, MobilEye already sells a functional visual traffic-light detection system and is publicly prototyping a full self-driving system to boot. Fixating on a test of a V2X feature as a sign of their state of generalization compared to Tesla seems iffy to me, since we know even less of whatever full self-driving state Tesla might have.

So the incident where they ran a traffic light was due to the V2X feature overriding their vision system.
And this V2X feature was just being tested?

I had never heard them talk about their V2X capabilities before, so it strikes me as odd that they said it was the cause of the failure and that it's a "beta".

Wouldn't you want the press/media to test your most mature and best-working system? Especially Mobileye, which has said in the past that they don't tolerate Tesla's beta-testing modus operandi?
 
I think V2X infrastructure can roll out faster than V2X proliferates in cars, but I agree it is overall further in the future, and any early autonomy must work without it if it is to see wider use.

That said, MobilEye already sells a functional visual traffic-light detection system and is publicly prototyping a full self-driving system to boot. Fixating on a test of a V2X feature as a sign of their state of generalization compared to Tesla seems iffy to me, since we know even less of whatever full self-driving state Tesla might have. We haven't even seen Tesla traffic-light detection. MobilEye sells chips with one.

This is similar to @strangecosmos talking of Waymo cars hesitating in some left turns. Maybe so, but they are still showing results far ahead of what others are showing overall, as these cars are in pilot production use with actual customers. It seems iffy to focus on some small detail and extrapolate too many concerns from it when the overall picture is so far ahead of what competitors like Tesla have shown so far.

Is there a chance Tesla is secretly much further along than they show, or that they could catch up? Sure, definitely. But both Waymo and MobilEye are already showing autonomous driving well beyond anything Tesla has ever shown. I would say they are serious about full self-driving in a nearer future than Tesla at this time.
None of these companies has demonstrated that its system can drive on its own. Lidar is a crutch that makes systems appear ahead, but Tesla is well positioned to take the lead in this race, as it has a system and a plan that fit its strengths: cameras only, and a large and diverse fleet.
 
So the incident where they ran a traffic light was due to the V2X feature overriding their vision system.

That is what they say.

None of these companies has demonstrated that its system can drive on its own. Lidar is a crutch that makes systems appear ahead, but Tesla is well positioned to take the lead in this race, as it has a system and a plan that fit its strengths: cameras only, and a large and diverse fleet.

MobilEye does not use lidar in the Israeli car discussed in this thread. It is vision-only, so it seems much further along than anything we have seen from Tesla.

Tesla surely has its own advantages that may help in the future, but we have not seen them yet.
 
I think V2X infrastructure can roll out faster than V2X proliferates in cars, but I agree it is overall further in the future, and any early autonomy must work without it if it is to see wider use.

That said, MobilEye already sells a functional visual traffic-light detection system and is publicly prototyping a full self-driving system to boot. Fixating on a test of a V2X feature as a sign of their state of generalization compared to Tesla seems iffy to me, since we know even less of whatever full self-driving state Tesla might have. We haven't even seen Tesla traffic-light detection. MobilEye sells chips with one.

This is similar to @strangecosmos talking of Waymo cars hesitating in some left turns. Maybe so, but they are still showing results far ahead of what others are showing overall, as these cars are in pilot production use with actual customers. It seems iffy to focus on some small detail and extrapolate too many concerns from it when the overall picture is so far ahead of what competitors like Tesla have shown so far.

Is there a chance Tesla is secretly much further along than they show, or that they could catch up? Sure, definitely. But both Waymo and MobilEye are already showing autonomous driving well beyond anything Tesla has ever shown. I would say they are serious about full self-driving in a nearer future than Tesla at this time.

Please enlighten me. I have no idea what Mobileye has that is so advanced, other than presentations and demos. Those are very easy to do. When Elon said Tesla was not yet ready for the LA-to-NY FSD run, he said it could easily be demoed but he did not want to game the system. Nowadays you could assemble a dozen or so engineers and let them build an autonomous car with off-the-shelf parts to do a demo on a short stretch of mapped road. Many have actually done just that. To put a car in customers' hands and let them do whatever they want is an entirely different story.

Mobileye is basically only attacking the image recognition front, which is what it has always been doing. Image recognition alone will never get you self-driving capability. You would never hand the car keys to a teenager who has only ever sat in the passenger seat, even if he can observe everything better than you can. I'm not saying Intel and others could not add NNs to Mobileye's image chip, but either way they are very late to the game.

Tesla, and Waymo too, on the other hand, have been building their AI machines for years to acquire "instincts" in the neural net from miles and miles of driving practice, just like how we humans learn to do things. An example: a fastball travels through the air for only about half a second before it reaches the plate. No batter could consciously look at it and decide where the ball will go in that short a time. All one can do is swing at the right place and time from milliseconds of information, following the "instinct" learned from hundreds or thousands of trials and errors in practice. The more practice you have had, the better instinct you will acquire and the better player you will be (in addition to good eyes and a good brain).

Using this simple analogy, Waymo has been practicing 10 minutes every day for a decade. Tesla started practicing only a few years ago, but it is practicing 10 hours every day and keeps practicing more. Mobileye is like a player who was just taught how to look at the ball and has only started to practice. There is no way it can be "advanced" in comparison, not to mention there is serious doubt it will even get there.

I can understand if this is too abstract for you. However, I need to see real evidence, not demos or presentations, to be convinced that Mobileye/Intel is really there. I don't think there is any. Let me know if you see some.
 
@CarlK

I do understand the theory you support, one where NN training and fleet volume are the key to solving autonomous driving. It is one theory, and I am not about to argue with it.

I do feel MobilEye’s vision advantage is not necessarily their only advantage, though. They seem to be far ahead of Tesla not only in already having solved vision in the production chips EyeQ3 and EyeQ4, but also in driving-policy features (shipping in the latter) and in the Israeli taxi-fleet product, a vision-based self-driving car about to take on actual customers next year.
 
I do feel MobilEye’s vision advantage is not necessarily their only advantage, though. They seem to be far ahead of Tesla not only in already having solved vision in the production chips EyeQ3 and EyeQ4, but also in driving-policy features (shipping in the latter) and in the Israeli taxi-fleet product, a vision-based self-driving car about to take on actual customers next year.

This taxi fleet you keep referring to does not exist yet.

AP will be a completely different system when MobilEye take their first paying customers.

And are you really trying to say that Tesla have NOT solved vision in their production system? :confused:
 
Could it be the case that Tesla’s AP2+ has pretty decent driving-logic (decision-making) software but less-than-stellar detection software? (As of right now.) While MobilEye has the opposite, i.e. awesome detection SW but less-than-awesome (or actually non-existent) driving logic to offer? Could that make sense? I'm no SW guy, so I'm just tossing out a thought here.

I mean, we need both to get a proper self-driving system to work. We can't settle for great camera NNs alone. "That's a car, that's a ped, that's a cat looking like it's going to attack that bird because it's posing like a cat ready to attack that bird" isn't enough, right? Gotta have some clever code to decide what to actually do with all this information, right?

Joshua Brown. Was it crappy detection SW, crappy driving logic or an unholy combination of both? I don't know, does anyone know?

I think @verygreen and @DamianXVI have proven AP2+ detection isn’t exactly stellar. It's OK, I guess, but it looks a bit glitchy in the videos I've seen. At the same time, I think the Tesla handles pretty well most of the time while on Autopilot. Provided you know its limitations, it can be pretty relaxing. So I guess the driving logic is pretty good. Does this make any sense? Hit me with those Disagrees if not!
 
I think @verygreen and @DamianXVI have proven AP2+ detection isn’t exactly stellar. It's OK, I guess, but it looks a bit glitchy in the videos I've seen. At the same time, I think the Tesla handles pretty well most of the time while on Autopilot. Provided you know its limitations, it can be pretty relaxing. So I guess the driving logic is pretty good. Does this make any sense? Hit me with those Disagrees if not!

The detection seems to be solved, but you are right, the human-friendly output we have seen isn't stellar and needs refinement - more classes of object (signs, traffic lights), more "experience" (red jackets on pedestrians) and better object positioning are probably the biggies.

As for driving policy, I'm not convinced Tesla have this as a "thing". They have a handful of driving scenarios: lane-keeping, lane change (two scenarios), overtaking, slowing down for sharp bends, driver not responding. But I would *guess* these are implemented as discrete chunks of logic rather than within a policy framework.
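To make the guess concrete, "discrete chunks of logic" might look like a hand-written scenario dispatcher checked in priority order, as opposed to a learned policy. This is purely illustrative: every state field, threshold, and action name below is invented, not anything known about Tesla's actual code.

```python
# Hypothetical sketch of "discrete chunks of logic": fixed, hand-written
# scenarios checked in priority order. All names are invented for illustration.
def choose_action(state):
    if not state["driver_responding"]:
        return "slow_to_stop"       # driver-not-responding scenario
    if state["sharp_bend_ahead"] and state["speed"] > state["bend_safe_speed"]:
        return "slow_down"          # sharp-bend scenario
    if state["lead_car_slow"] and state["adjacent_lane_clear"]:
        return "change_lane"        # overtake scenario
    return "keep_lane"              # default: lane-keeping

example = {
    "driver_responding": True,
    "sharp_bend_ahead": False,
    "bend_safe_speed": 60,
    "speed": 55,
    "lead_car_slow": True,
    "adjacent_lane_clear": False,
}
print(choose_action(example))
```

The contrast with a policy framework would be that the latter scores candidate actions with one unified (possibly learned) model instead of ordered if-statements, which is why adding scenarios to the above gets brittle fast.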
 
Joshua Brown. Was it crappy detection SW, crappy driving logic or an unholy combination of both? I don't know, does anyone know?

It was non-existent detection on vision and unreliable detection on non-vision sensors. It was a case of cross traffic appearing on the road, which is not even a scenario where Tesla's manual recommends any version of Autopilot, as only divided roads are recommended.

Joshua Brown's car was Autopilot 1, and EyeQ3 (2014) does not have a cross-traffic detection feature, so car makers are left to implement cross-traffic detection by other means. In theory they could use SFS, but Tesla did not. In the Tesla, such collision detection was implemented with a single radar and ultrasonics, which were insufficient to detect it reliably. The radar probably did detect it, but Tesla's software ignored it.

Audi uses lidar and triple radar in the Traffic Jam Pilot with EyeQ3 for this kind of scenario, possibly SFS too, but even they limit the use of that feature to divided highways with no cross traffic.

EyeQ4 does support cross traffic so that is a different discussion.
 
This taxi fleet you keep referring to does not exist yet.

AP will be a completely different system when MobilEye take their first paying customers.

Sure. The EyeQ4 it is based on is a shipping product, though.

And are you really trying to say that Tesla have NOT solved vision in their production system? :confused:

To the extent required for autonomous driving where the car is responsible, of course they have not. What is cooking in their labs we have seen less of than in many competitors' cases, so it remains to be seen how close or far they are.
 
Could it be the case that Tesla’s AP2+ has pretty decent driving-logic (decision-making) software but less-than-stellar detection software? (As of right now.) While MobilEye has the opposite, i.e. awesome detection SW but less-than-awesome (or actually non-existent) driving logic to offer? Could that make sense? I'm no SW guy, so I'm just tossing out a thought here.

This is where neural-net machine learning will shine. You don't need to start with perfect image recognition capability. It will automatically correct the recognition system and make your database more accurate as it goes. Whereas with the traditional Mobileye approach, you will always have only what you started with and whatever was already programmed in. That's why Mobileye keeps talking about its next-generation vision chips, while Tesla only talks about getting more machine-learning miles and better processors so it can learn and do things better and faster.

Mobileye seems to have started to change its opinion and recognized (sorry for the pun) the importance of machine learning only very recently, although there is no indication that it has done anything or is getting anywhere in that regard. I haven't heard of a single car of theirs doing machine learning out there, while Tesla already has hundreds of thousands of cars and the number is growing fast. Tesla had a clear vision (sorry again for the unavoidable pun) years ago of how to get there, and it has a tremendous lead in setting up a crowd-sourced machine learning system, even over Waymo, which also relies on machine learning but with a much smaller fleet.

Here is some good info in this short article, which describes how neural-net machine learning provides the ICR (Intelligent Character Recognition) solution; it could help you understand this. It would be a daunting, actually impossible, task to solve with traditional programming, and self-driving-car image recognition is a much more complex task than that. No matter what you think of how far along Tesla is at this moment, I can't see any approach other than its working. Hate to say it, but you can pretty much write off Mobileye and a lot of others that don't even have the right tools at this time in this competitive landscape.

 
The detection seems to be solved, but you are right, the human-friendly output we have seen isn't stellar and needs refinement - more classes of object (signs, traffic lights), more "experience" (red jackets on pedestrians) and better object positioning are probably the biggies.

Well, you said it yourself: if it's missing so many class detections, then it can't be solved. Also, if it's heavily inaccurate, then it can't be solved.

I mean, you can't call this solved...


This taxi fleet you keep referring to does not exist yet.

AP will be a completely different system when MobilEye take their first paying customers.

Mobileye will most likely start taking payments in early 2019 when they launch.
"The whole AP will get better": isn't that what's been said for the past 2+ years?
 
@CarlK I find your take on deep learning optimistic where Tesla is concerned and pessimistic where MobilEye is concerned, both as to MobilEye's capability in general and as to their use of machine and deep learning.

That, I guess, is a big part of why we disagree in our outlooks. I'm not sure it will work out that well for Tesla, and I am even more sure the outlook is not nearly as dire for MobilEye.

Fair disagreement.
 
@CarlK I find your take on deep learning optimistic where Tesla is concerned and pessimistic where MobilEye is concerned, both as to MobilEye's capability in general and as to their use of machine and deep learning.

That, I guess, is a big part of why we disagree in our outlooks. I'm not sure it will work out that well for Tesla, and I am even more sure the outlook is not nearly as dire for MobilEye.

Fair disagreement.

That is fine. You might be able to change your opinion if you could spend a little time learning how this machine learning thing works. Fine if you don't want to. We'll just have to wait for the outcome for me to say I told you so. You can have your chance to say the same too, but mark my words: that will never happen. :D
 
That is fine. You might be able to change your opinion if you could spend a little time learning how this machine learning thing works. Fine if you don't want to. We'll just have to wait for the outcome for me to say I told you so. You can have your chance to say the same too, but mark my words: that will never happen. :D

I actually consider myself fairly well versed in that, which means that if I am wrong, it will not be because of ignorance but because I was simply wrong in my assessment of where Tesla and MobilEye are in that and the rest. On the upside, I will be sitting in my autonomous Tesla. :)
 
Another proof that you're totally clueless about how machine learning works. Please read my post above and the linked article to learn a little about what neural-net machine learning is and how it works.
I stopped replying to you concerning NN because you simply lacked the understanding of what they are.

There is NO such thing as "neural net machine learning". Artificial neural networks are simply one technique under the umbrella of machine learning.

Intelligent character recognition is a subset of OCR which uses neural networks rather than human-programmed features. But it's still referred to as OCR, only with a modern method.

It's literally a trained network using labeled data: the more labeled data you put in its dataset and train the model with, the more accurate it gets. Versus the old OCR method, which uses human-written feature extractors/detectors.

Today it is still called OCR; it just uses NNs instead of the old methods.
Why you find that hard to understand, beats me.

Creating a Modern OCR Pipeline Using Computer Vision and Deep Learning
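As a toy illustration of the "trained network using labeled data" point above, here is a minimal single-neuron (perceptron) classifier in plain Python. It is not OCR and bears no relation to Mobileye's or Tesla's systems; it only shows the mechanism being argued about: accuracy comes from labeled examples rather than hand-written rules, and tends to improve with more of them.

```python
import random

def make_data(n, seed):
    """Labeled 2-D points; the ground-truth rule to learn is x + y > 0."""
    rng = random.Random(seed)
    return [((x, y), 1 if x + y > 0 else 0)
            for x, y in ((rng.uniform(-1, 1), rng.uniform(-1, 1))
                         for _ in range(n))]

def train(data, epochs=20, lr=0.1):
    """Classic perceptron update: nudge the weights only on mistakes."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x, y), label in data:
            pred = 1 if w0 * x + w1 * y + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w0 += lr * err * x
            w1 += lr * err * y
            b += lr * err
    return w0, w1, b

def accuracy(model, data):
    w0, w1, b = model
    return sum((1 if w0 * x + w1 * y + b > 0 else 0) == label
               for (x, y), label in data) / len(data)

test_set = make_data(1000, seed=99)
small = train(make_data(5, seed=1))     # few labeled examples
large = train(make_data(500, seed=1))   # many labeled examples
print(accuracy(small, test_set), accuracy(large, test_set))
```

Nothing in the loop encodes the rule x + y > 0; the model recovers it purely from labeled examples, which is the contrast with a hand-written feature detector.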
 
Both articles are saying the same thing, even though your interpretation is kind of confusing. OCR, and the even more challenging image recognition in an autonomous driving system, need neural-net machine learning to work. That is exactly what Mobileye did not, or does not, have.
 
Both articles are saying the same thing, even though your interpretation is kind of confusing. OCR, and similarly image recognition in an autonomous driving system, needs neural-net machine learning to work, which Mobileye doesn't or didn't do.

No, you are the one confused. Mobileye does use NNs; this has been said a thousand times.
You previously called an explanation of NNs "image recognition" and said it's not "NN deep learning". You clearly have no clue what you are talking about.