How will Tesla demo FSD?

Are you sure they didn't use neural nets? The former head of the Google self-driving car project used neural nets in the vehicle that won the 2007 DARPA Urban Challenge (Meet ALVINN, the self-driving car from 1989).

That's why I said Tesla's car would be better looking! My point is that the demo will be the same, not that the technology used to do it will be exactly the same. Watching the Google demos in 2010 you might have thought that self-driving cars were right around the corner :(
The video is also interesting because it shows that Google's initial plan was to develop a Level 3 system where the driver would be responsible for taking over very quickly if the car encountered a problem. They later dropped that plan after drivers were found to be sleeping behind the wheel during their tests.

Not sure if they used or didn't use NNs. But it should be noted that NNs only became popular after AlexNet (2012); before that it was linear regression, SVMs, etc. It's very easy to forget how recent the explosion of machine learning was.
 
Clearer every day? There is zero evidence that Elon is right. Elon says that Teslas will be capable of "hands free" driving in six months, so I guess we'll know soon how right he is about that...

Depending on where you put the safety goal (anywhere from as safe as a human driver to ten times safer), it looks like freeway on-ramp to off-ramp hands-off self-driving is right around the corner. From what we've heard recently, Tesla can now recognize traffic lights and traffic signs. Surface-road capability will be added gradually; it's just a matter of solving all the edge cases. Of course you're still not there even if only 0.1% of the edge cases remain unsolved, but that's pretty much the magnitude of the challenge. There are no fundamental issues other than the machines needing to learn the last, and hardest, bit. We'll see how it goes on Monday, but I'm pretty comfortable with my assessment.

More importantly, I can't see another company that will be able to sell you an autonomous car you can drive everywhere in the foreseeable future. It's pretty clear Tesla's approach will turn out to be the right one.
 
@tomc603 @Daniel in SD That is exactly how I have felt for years about Mobileye, and the reason for numerous debates with Mobileye fans/trolls on this forum. Something is very wrong with a company when all you have to show are PowerPoints and videos. And then you have to keep inventing new stories when things don't pan out.

Not too long ago Mobileye pooh-poohed the idea of the NN machine learning Waymo was doing. As a matter of fact, that was what Tesla was doing independently of Mobileye too, and a source of conflict between the two companies. All of a sudden, Amnon and Mobileye have become experts on ML (which they certainly are not) now that people have started to believe it's the right approach.
Mobileye Bullish on Full Automation, but Pooh-Poohs Deep-Learning AI for Robocars

That is simply not true. Mobileye is one of the pioneers of machine learning, and indeed of deep learning; repeating the opposite does not make it true. EyeQ4 and onwards in particular make extensive use of it.

MobilEye's EyeQ3 also powers Autopilot 1. We all know how long that remained superior to AP2, even with only the single narrow camera Tesla installed (EyeQ3 supports more, and wider, cameras).

The simple fact is: MobilEye ships the most capable automotive vision system available in consumer cars today (the EyeQ4). The range of things it reliably recognizes is vastly superior to what Tesla ships.

Another simple fact is: MobilEye's approach is hampered by OEMs being slow to make use of the features they ship in their chips. The sad reality is that, at the moment, much of this prowess is going to waste in numerous vehicles. The power is there, the vision system is there in the concrete form of a chip, but it is not being put to great use.

Tesla does have an iteration and deployment advantage with their AP2+, and that is why I and people like @S4WRXTTCS drive AP2+ Teslas even though we both think highly of MobilEye too. For a consumer, an AP2-3 Tesla is currently the fastest way to enjoy advanced ADAS, because everyone else is so very slow at implementing such daring features.

If it were just down to a great vision system, cars with EyeQ4 would be superior. But the carmaker would have to actually implement great self-driving features to make that visible to the user...

Indeed, what is happening here is that Tesla probably has a worse-than-average computer vision system but a vastly higher-than-average desire to build advanced driver aids, and the OTA means to iterate on them. That is a genuine benefit for Tesla and its customers.
 
Not sure if they used or didn't use NNs. But it should be noted that NNs only became popular after AlexNet (2012); before that it was linear regression, SVMs, etc. It's very easy to forget how recent the explosion of machine learning was.

NNs are old, and of course they used them for machine vision. What you are talking about are deep NNs, and indeed those only became popular earlier this decade. MobilEye makes use of deep learning in EyeQ4 (2018), and there was already some use in EyeQ3 (2014). It is just a myth, based on ignorance, that they somehow ignored this trend just because they also employ other techniques, which is what allows them to ship extremely power-efficient EyeQ chips.
 
Maybe they have solved the commute for all of their FSD employees and will show a video of hundreds of different fully autonomous drives from their houses to work. Each employee has been a beta tester, and every intervention has been followed up and fixed through Software 2.0, until finally all of them have had at least one commute to work without an intervention.

Here is my prediction for what they will show:

First, they will show the video of the red Model 3 driving from SF to Palo Alto. Second, a compilation of hundreds of drives to the Mountain View HQ/Fremont factory. Finally, they will let investors choose a nearby destination such as Starbucks; the car will drive them there, park, and drive back, with one EAP employee and three investors in each vehicle.
 
That is simply not true. Mobileye is one of the pioneers of machine learning, and indeed of deep learning; repeating the opposite does not make it true. EyeQ4 and onwards in particular make extensive use of it.

MobilEye's EyeQ3 also powers Autopilot 1. We all know how long that remained superior to AP2, even with only the single narrow camera Tesla installed (EyeQ3 supports more, and wider, cameras).

The simple fact is: MobilEye ships the most capable automotive vision system available in consumer cars today (the EyeQ4). The range of things it reliably recognizes is vastly superior to what Tesla ships.

Another simple fact is: MobilEye's approach is hampered by OEMs being slow to make use of the features they ship in their chips. The sad reality is that, at the moment, much of this prowess is going to waste in numerous vehicles. The power is there, the vision system is there in the concrete form of a chip, but it is not being put to great use.

Tesla does have an iteration and deployment advantage with their AP2+, and that is why I and people like @S4WRXTTCS drive AP2+ Teslas even though we both think highly of MobilEye too. For a consumer, an AP2-3 Tesla is currently the fastest way to enjoy advanced ADAS, because everyone else is so very slow at implementing such daring features.

If it were just down to a great vision system, cars with EyeQ4 would be superior. But the carmaker would have to actually implement great self-driving features to make that visible to the user...

Indeed, what is happening here is that Tesla probably has a worse-than-average computer vision system but a vastly higher-than-average desire to build advanced driver aids, and the OTA means to iterate on them. That is a genuine benefit for Tesla and its customers.

Why is it that you don't believe what came straight from the horse's mouth? Mobileye in 2016 still did not have, and did not want to do, ML. EyeQs are just vision chips. You can tie one to an additional processor and software to do ADAS or autonomous driving, but that's it. Tesla's AP1 used the EyeQ3 vision output, but the rest was all Tesla's. Tesla did ask Mobileye for raw camera data; Mobileye refused because it suspected, correctly, that Tesla was working on its own system, and it did not want to participate in that. I don't need to go into the details of what happened later, but it has become clear that Mobileye lied about their relationship and the reason for the separation.

I still can't figure out why Intel would pay that much money for the company. Mobileye probably had a very good marketing group (meaning one able to tell the best lies), and Intel was desperate, especially given the pressure coming from Nvidia.
 
The purpose of autonomous vehicle technology is to make all vehicles autonomous. I'm pretty sure everyone has known that for decades. They were initially working on a driver-assist system but decided that it was unsafe and moved on to developing Level 4-5 autonomy. The obsession in this forum with the use of Lidar is very confusing to me. The most difficult parts of FSD are not solved by Lidar. If they were, then we would already have autonomous vehicles, and companies would be trying to make them cheaper by using a camera-only approach. Is there some reason why the distance information generated by Lidar couldn't be replaced by a camera-based or high-resolution radar system at a later date?

Clearer every day? There is zero evidence that Elon is right. Elon says that Teslas will be capable of "hands free" driving in six months, so I guess we'll know soon how right he is about that...

The dozens of failed Elon prophecies are obviously signs that he's right.
 
Why is it that you don't believe what's from the horse's mouth? Mobileye in 2016 still did not have, and did not want to do, ML. EyeQs are just vision chips. You can tie one to an additional processor and software to do ADAS or autonomous driving, but that's it. I still can't figure out why Intel would pay that much money for the company. Mobileye probably had a very good marketing group (meaning one able to tell the best lies), and Intel was desperate, especially given the pressure from Nvidia.

When will you stop spreading blatant lies? The EyeQ4 is an NN processor.
 
Why is it that you don't believe what's from the horse's mouth? Mobileye in 2016 still did not have, and did not want to do, ML. EyeQs are just vision chips. You can tie one to an additional processor and software to do ADAS or autonomous driving, but that's it. I still can't figure out why Intel would pay that much money for the company. Mobileye probably had a very good marketing group (meaning one able to tell the best lies), and Intel was desperate, especially given the pressure from Nvidia.

Your definition of ML would then be consumer-end ML? Nobody does that. MobilEye, just like Tesla, trains their (deep) neural networks internally, not in the consumer fleet. You can't do deep learning, or indeed any neural networks, without machine learning anyway, but it is not the consumer cars that train these networks. EyeQ4 also already ships with a finalized vision system; it does not need to learn machine vision from consumer cars anyway.

Driving policy is an area where a feedback loop from the car certainly can be helpful. Both Tesla and MobilEye collect mapping data from their fleets (EyeQ4 in the case of MobilEye). Indeed, MobilEye makes use of that data for their driving policy, so that is a domain where feedback from the fleet will matter down the road, and MobilEye is doing this today in the BMWs etc. that have EyeQ4. So, again, nothing special from Tesla compared to MobilEye.

What IS special about Tesla is a large deployment fleet for OTA iteration and validation of software. That is a benefit for the consumer because it allows us to experience quicker progress. MobilEye is so far hampered by their slow-as-molasses OEMs. It took someone like Tesla to show what EyeQ3 was really capable of, for example with Autopilot 1.
 
@tomc603 @Daniel in SD That is exactly how I have felt for years about Mobileye, and the reason for numerous debates with Mobileye fans/trolls on this forum. Something is very wrong with a company when all you have to show are PowerPoints and videos. And then you have to keep inventing new stories when things don't pan out.

Not too long ago Mobileye pooh-poohed the idea of the NN machine learning Waymo was doing. As a matter of fact, that was what Tesla was doing independently of Mobileye too, and a source of conflict between the two companies. All of a sudden, Amnon and Mobileye have become experts on ML (which they certainly are not) now that people have started to believe it's the right approach.
Mobileye Bullish on Full Automation, but Pooh-Poohs Deep-Learning AI for Robocars

Seriously, do you have a reading comprehension problem with that specific article? Have you even read it? Can you even quote me what it says in that article that supports what you are trying to say?
 
Your definition of ML would then be consumer-end ML? Nobody does that. MobilEye, just like Tesla, trains their (deep) neural networks internally, not in the consumer fleet. You can't do deep learning, or indeed any neural networks, without machine learning anyway, but it is not the consumer cars that train these networks. EyeQ4 also already ships with a finalized vision system; it does not need to learn machine vision from consumer cars anyway.

He has no clue; even @verygreen gave up trying to explain it to him.
 
Seriously, do you have a reading comprehension problem? Have you even read the article? Can you actually quote me what it says that is specific to what you are trying to say?

Happy to. Always good to help out someone with a reading comprehension problem.

"But Shashua has poured a little cold water on the idea of cars being self-taught. Deep learning does well on games and other well-defined tasks, like recognizing images in a database, or translating from one language to another—two of Google’s other specialties. But driving, taken as a whole, says Shashua, is not so well defined."
 
Happy to. Always good to help out someone with a reading comprehension problem.

"But Shashua has poured a little cold water on the idea of cars being self-taught. Deep learning does well on games and other well-defined tasks, like recognizing images in a database, or translating from one language to another—two of Google’s other specialties. But driving, taken as a whole, says Shashua, is not so well defined."

That's the author's conclusion/interpretation. So let me ask you: what comment preceded this that led the author to conclude that? And what information/video is this article based on?
 
That's the author's conclusion/interpretation. So let me ask you: what comment preceded this that led the author to conclude that? And what information/video is this article based on?

Talk about a reading comprehension problem.
From the horse's mouth: "Speaking Friday at a conference organized by Goldman Sachs"

IEEE (Institute of Electrical and Electronics Engineers) is the top engineering professional organization in the world. That "fake news" tactic will not work here.
 
Happy to. Always good to help out someone with a reading comprehension problem.

"But Shashua has poured a little cold water on the idea of cars being self-taught. Deep learning does well on games and other well-defined tasks, like recognizing images in a database, or translating from one language to another—two of Google’s other specialties. But driving, taken as a whole, says Shashua, is not so well defined."

Since you are not responding, let me help you out for the 20th time.

Here is what the article is saying.

The article's premise is:

Company abandons incrementalism, but will keep on annotating road scenes by hand instead of by machine

The author, Philip, has no idea what deep learning or machine learning is, so he thinks that annotating images, which is supervised learning, ISN'T deep learning. This is the whole premise of the article. It's built on this wrong notion, and from there the entire article falls apart. First of all, supervised learning on annotated data IS how deep learning is done. This is what everyone does, including Nvidia, Waymo, and Tesla: you record millions of pictures of something, then you annotate them and train a convolutional neural network on them. Mobileye hired 1,000 people to do their annotation; Tesla, on the other hand, outsourced their annotation to a third party.
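(To make that concrete, here is a minimal, generic sketch of the annotate-then-train workflow, not any company's actual pipeline. It assumes PyTorch/torchvision, and the folder name and label classes are made up; the human annotations are simply the labels the network is trained against.)

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Hypothetical folder of hand-annotated images, one sub-folder per label, e.g.
#   annotated_scenes/pedestrian/..., annotated_scenes/traffic_light/...
dataset = datasets.ImageFolder(
    "annotated_scenes",
    transform=transforms.Compose([transforms.Resize((224, 224)),
                                  transforms.ToTensor()]))
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A standard convolutional network, with the output layer sized to the labels.
model = models.resnet18(num_classes=len(dataset.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# Supervised training loop: the human annotations are the targets.
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```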

Going into more detail:

"MobileEye plans to continue using human experts to break self-driving down into parts that it can automate—an expert-system approach. “We have 600 people annotating images at MobileEye; at the end of this year, it will be 1000,” Shashua said."​

Again, they don't realize that annotating images IS supervised learning, which is how deep learning is done.

Andrej explains it starting at 7 mins.

They also incorrectly state that Google was using an AlphaGo-style approach for self-driving. But Google/Waymo has actually said multiple times that their planning algorithm is based on classical control and machine learning, not RL. Worse, based on the quote below, the author equates AlphaGo with deep learning and doesn't understand that there are different subsets: there is reinforcement learning, there is supervised learning, and there is unsupervised learning. The author believed that only a technique like AlphaGo's constitutes deep learning. The author doesn't even understand what reinforcement learning is. I call this ignorance.

"Now Google appears to be moving ever more in the direction of the "deep learning" approach to teaching cars to drive themselves. This approach, in which deep neural networks train themselves into expertise with little or no human intervention, is what powered Google's AlphaGo program to its recent victory over a leading master of the game of Go.

AlphaGo learned to imitate the play of human masters through trial and error. At no point did a human being step in and tell the machine to pay attention to, say, points near the edge of the board. The same researchers earlier trained machines to play Atari games—again, without giving them any hints. The program had to learn the rules as it went along."​

What they actually described is deep reinforcement learning, which, interestingly enough, Mobileye has always said they would use. But again, the author is completely clueless and doesn't understand what deep learning is. Now, this entire article is based on a video which talks about how one big end-to-end deep neural network from camera input to action (steering and control) won't work (because of the DAgger problem), but multiple deep end-to-end networks used as modules will. The author, who I have shown has no clue, misinterprets that to mean Mobileye is AGAINST deep learning, because he thinks only AlphaGo = deep learning.
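(For anyone who hasn't run into "DAgger": plain imitation learning drifts off the training distribution once the network's own small errors take it into states the human demonstrations never covered. DAgger, i.e. Dataset Aggregation, addresses this by letting the current policy drive, having an expert relabel the states it actually visits, and retraining on the aggregated data. A rough illustrative sketch follows; the helper functions are placeholders, not anyone's real code.)

```python
import random

def expert_action(state):
    """Placeholder for the expert label, e.g. the steering a human would apply."""
    return 0.0

def rollout_policy(policy, num_steps):
    """Placeholder: run the current policy and return the states it visited."""
    return [random.random() for _ in range(num_steps)]

def train_policy(dataset):
    """Placeholder: fit a supervised model on (state, expert_action) pairs."""
    return lambda state: 0.0

dataset = []                      # aggregated (state, expert_action) pairs
policy = lambda state: 0.0        # in practice, start from behavior cloning

for iteration in range(5):
    # 1. Let the CURRENT policy drive, so training covers the states it actually reaches.
    states = rollout_policy(policy, num_steps=100)
    # 2. Ask the expert what it would have done in each of those states.
    dataset += [(s, expert_action(s)) for s in states]
    # 3. Retrain the policy with ordinary supervised learning on the aggregated data.
    policy = train_policy(dataset)
```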


You have posted this article about 100 times, which proves you either did not read it or you, too, do not understand what machine learning is.
 