
Experts ONLY: comment on whether vision+forward radar is sufficient for Level 5?

The premise may not even be correct: cameras may not necessarily need to match human eyes to be better for driving.

I was thinking of this as well. People point out how much higher the resolution of the eye is, but that doesn't really matter for driving a car. Our eyes have that incredible resolution so that we can, for example, pick out a lion's fur in the grass far off in the distance. That kind of fidelity isn't necessary for seeing a car approaching too quickly for us to make that turn, or a pedestrian about to cross the road 50 feet ahead. Nothing of importance is so far away that a reasonably good camera can't make it out.
 
What is much more of a challenge are the situational-awareness and response-selection pieces. A human subconsciously applies the following abilities in a fraction of a second (a toy software sketch follows the list):

Threat determination - will I hit them?
Intuition/experience - they will not see/hear me until it is too late for them to jump out of the way
Path prediction/intent of others - is the oncoming car on the other side of the road going to stop / is the car behind too close to stop / where will the dog be if it doesn't change speed or direction before I arrive at that part of the road
Determination of vehicle capability - can the car even stop in time or should I try to avoid
Self-preservation & social prejudice - hit the lamp post, the oncoming car, or the dog
Response - brake hard, sound horn
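
As a toy illustration only (not Tesla's architecture; every name and threshold below is invented for the example), a response-selection stage built around these abilities might look roughly like this:

```python
# Toy sketch of a response-selection step: perception output in, a chosen
# response out. All fields and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float        # distance along our intended path
    closing_speed_ms: float  # how fast the gap is shrinking
    kind: str                # "pedestrian", "car", "dog", ...

def time_to_collision(o: Obstacle) -> float:
    # Threat determination: will I hit them, and how soon?
    return float("inf") if o.closing_speed_ms <= 0 else o.distance_m / o.closing_speed_ms

def stopping_time(speed_ms: float, max_brake_ms2: float = 8.0) -> float:
    # Determination of vehicle capability: can the car even stop in time?
    return speed_ms / max_brake_ms2

def select_response(obstacles: list[Obstacle], ego_speed_ms: float) -> str:
    threats = [o for o in obstacles if time_to_collision(o) < 5.0]
    if not threats:
        return "continue"
    worst = min(threats, key=time_to_collision)
    if time_to_collision(worst) > stopping_time(ego_speed_ms) + 0.5:
        return "brake"                       # enough margin to stop smoothly
    if worst.kind == "pedestrian":
        return "brake_hard_and_sound_horn"   # self-preservation & social priority
    return "swerve_if_clear_else_brake_hard"

print(select_response([Obstacle(20.0, 10.0, "dog")], ego_speed_ms=15.0))
```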

At the moment, support for these skills in AP is very simplistic, and I even doubt whether an NN is involved at all with TACC or AEB after the perception stage. Perhaps that is actually OK: building these abilities in an NN to human equivalence will be a tremendous challenge, and there is no rule that says an NN alone is the right approach.

This is why I've been a heavy skeptic of point-to-point L5 autonomous driving. Do I think it's possible? Yes. Do I think they should work on it? Absolutely. Will it be ready in 2 years? Don't count on it. I've been advocating that they focus on full highway autonomy, which I think is achievable much sooner given the simplified rules and scenarios that entails, but I'd still expect it to be at least 2 years out. And at some point it may require a hardware upgrade, not necessarily in sensors, but in processors.

It's amazing how fast processor technology has been advancing over the last few decades. Each generation can do more on less power. A lot of that advancement comes from shrinking transistor sizes, allowing more transistors in smaller spaces using less power. There is a limit to this. At some point you can't shrink the technology any further, and further advancements may be more incremental based on better designs rather than shrinking scale. But rapid advancement may continue for a while longer, and at some point the software will reach a point where they won't want to limit it to still work on older, slower hardware.

BTW, one conspiracy theory I can probably buy into is that processor technology could be advancing even faster, but companies like Intel don't really want it to. How are they going to get you to keep buying new processors once they can't advance them any further with current knowledge and technology? I would guess they are scared of the day they can't shrink process technology any more, and are dragging the whole thing out to maximize profit and buy themselves time to figure out what they'll do when that day comes.
 
A lot of that advancement comes from shrinking transistor sizes, allowing more transistors in smaller spaces using less power. There is a limit to this. At some point you can't shrink the technology any further, and further advancements may be more incremental based on better designs rather than shrinking scale.
I was going to say we're either there now or pretty damn close, but I was wrong. We're ~3 years away from hitting that limit: Moore's Law reaches crunch point as transistors stop shrinking
 
I was gonna stop posting, since it seems futile when guys like @JeffK and @Reciprocity say Tesla is light years ahead.
Plus, we are at the point where discussion is pointless: the end of 2017 is months away. Tesla either has FSD better than a human or they don't.

I do, however, like this thread; it filters out Tesla-only fans who don't care about facts.

Facts like:

A) Highway driving is a lot easier than suburban and urban roads. A cross-country demo therefore proves little, because fewer than 5 miles of it are urban streets, no matter which parking-lot-to-parking-lot route you pick.



Here is a glimpse of what highway driving looks like vs. urban streets.

[animated GIF: highway driving vs. urban streets]


One of the things we do subconsciously is that we predict.
We predict that a guy in his car looking down at his phone doesn't see us and that we should take evasive action.
We predict that the car in the lane next to us is blocked by the car ahead of it and that we need to slow down and let it into our own lane.

We predict that the kid running across the street is trying to catch the bus parking in front of us in the next lane, so we should watch out and slow down in anticipation of him.

This is in comparison to the highway, where you have to make little or no prediction at all because you are simply driving forward in a straight lane.

Here's what predicting where every car, motorcycle, object, pedestrian is going looks like.

[image: predicted trajectories of surrounding cars, motorcycles, and pedestrians]


The idea that you record steering angle, throttle and brake status and camera feed from a fleet of 100,000 cars and in no time you will have FSD is preposterous.

Researchers and companies all over use simulators and even games like GTA V. We can get a million miles of data from GTA daily, and even with all that data you can't make a car FSD. Neural networks are regression algorithms: they are great at generalizing when you have 500,000 examples of human-labeled and curated data, but bad at unsupervised learning on noisy data. That is why task-specific networks are used.

Simulators are available today, and the fairy tale of fleet learning doesn't work.



The CNN at work there is similar to the one NVIDIA uses. But there are problems with over-fitting and under-fitting. You ask how hard the problem is? For example, NVIDIA took 7,776,000 pictures mapped to steering, throttle and brake data, all of it curated.

Here's how those almost 8 million pictures were manually curated. They kept only pictures of the car driving as cleanly as possible in its lane, and removed everything else, such as pulling out of a driveway or entering and exiting a lane. The reason the car can drive is that it has seen millions of curated pictures of driving inside a lane or within the road edge.

"To train a CNN to do lane following we only select data where the driver was staying in a lane and discard the rest."

But that's not enough: the data has to be curated further, or else the car will be over-fitted to driving straight and fail in curves and corners. So more discarding happens.

"To remove a bias towards driving straight the training data includes a higher proportion of frames that represent road curves."

But even with that, if the car steered too far to the right and drifted out of its lane, it would crash because it wouldn't know how to recover. So there is still more curation:

"After selecting the final set of frames we augment the data by adding artificial shifts and rotations to teach the network how to recover from a poor position or orientation."

This is all just to create a Level 2 capable car. So the idea that you just beam terabytes of data per day up to Tesla HQ and get FSD is laughable. This is why, as NVIDIA says: "When self-driving cars go into production, many different AI neural networks, as well as more traditional technologies, will operate the vehicle. Besides PilotNet, which controls steering, cars will have networks trained and focused on specific tasks like pedestrian detection, lane detection, sign reading, collision avoidance and many more."
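
To make the amount of hand curation concrete, here is a minimal sketch of the three quoted steps. The field names, thresholds and oversampling factor are invented for illustration; this is not NVIDIA's or Tesla's actual pipeline.

```python
# Minimal sketch of the curation/augmentation steps quoted above.
# All field names and numbers are illustrative only.
import random

def curate(frames):
    # 1. "...only select data where the driver was staying in a lane and discard the rest."
    kept = [f for f in frames if f["in_lane"] and not f["lane_change"]]
    # 2. "...a higher proportion of frames that represent road curves."
    curves = [f for f in kept if abs(f["steering_angle"]) > 0.05]
    kept = kept + curves * 3                     # crudely oversample curved-road frames
    # 3. "...augment the data by adding artificial shifts and rotations..."
    augmented = []
    for f in kept:
        shift = random.uniform(-1.0, 1.0)        # lateral shift in metres
        augmented.append({**f,                   # a real pipeline would also warp the image
                          "steering_angle": f["steering_angle"] + 0.2 * shift})
    return augmented

frames = [
    {"image": "f0.png", "in_lane": True,  "lane_change": False, "steering_angle": 0.00},
    {"image": "f1.png", "in_lane": True,  "lane_change": False, "steering_angle": 0.12},  # a curve
    {"image": "f2.png", "in_lane": False, "lane_change": True,  "steering_angle": 0.30},  # discarded
]
print(len(curate(frames)))   # 5 training samples survive from 3 raw frames
```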

[image: Intel chart estimating the data an autonomous car generates per day]



(Obviously, I'm a software engineer with experience in machine learning and computer vision.)
 
(quoting the long post above in full)

You still don't seem to get the approach that both Mobileye and Tesla are using. I'd advise you to watch talks by Amnon Shashua and actually pay attention to why TBs of data are not needed. TBs of data might be processed, but only the result is important. For example, assuming you can identify lane lines and road boundaries, how much data do you need to represent where on the road the car is travelling with respect to those lines? Hint: very little.

Let's say your software can put a 3D box around cars or pedestrians ... what's that? Six vectors in 3D space. Storing that information is not MB or even GB of data, even though MBs of data may have been streamed through NNs to get that result. There's no need to record the inputs.
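
As a rough illustration of how compact that per-object result is (the exact fields below are invented, not Mobileye's or Tesla's format):

```python
# One detected object reduced to a handful of numbers instead of raw video.
import struct

# centre (x, y, z), box dimensions (l, w, h), heading, speed -> 8 floats
detection = (12.3, -1.8, 0.0, 4.5, 1.9, 1.5, 0.11, 13.4)
packed = struct.pack("8f", *detection)
print(len(packed), "bytes per object")   # 32 bytes, vs. megabytes of raw frames
```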

You also keep citing a proof-of-concept NVIDIA paper, which only showed that CNNs can be trained to do basic road following without "manual decomposition into road or lane marking detection, semantic abstraction, path planning, and control". This doesn't mean it's fully featured, nor did they claim it was. Tesla is not using this system, as evidenced in the demo videos. You also seem to think they are using NVIDIA DriveWorks, which they are not.

If you think fleet learning doesn't work, then you should let Amnon Shashua from Mobileye know that you are convinced the Mobileye approach will not work, because you're an expert. ;)
 
(quoting the same long post again in full)



Wow, I'm honored. All this work for wittle ol' me. Anyway, you have again listed a large volume of publicly available information that is maybe several years to a decade old in terms of when it was private/proprietary knowledge. I will give you some actual facts that are much more recent, as in a few hours ago. Elon just got off the earnings call and clearly stated that they had to build AP2 in response to Mobileye not allowing them to use EyeQ3 alongside their own system, and that they had to recreate it in 6 months. And they did. My assertion is that they didn't stop developing FSD to do that, i.e. a different group with different expertise: AP2 has more vision/radar folks and FSD more AI/machine-learning people, with most of the overlap in vision, based on recent information that has been made publicly available. This public info is the tip of the iceberg, and the part hidden under the ocean is the proprietary information that is not public. Every company has it; only one is saying they will demo FSD Nov-Dec of this year. Only one is saying they have had the full hardware installed since Oct. 16 of last year and at worst will require a simple processor upgrade, easily accessible through the glove box (which has been at least partially confirmed on this forum). All of which was reiterated on the quarterly call just an hour ago.

For your listening pleasure - Today's quarterly update conference call from Tesla: http://edge.media-server.com/m/p/9txdf9t7

They could be purposely deceiving us, but then again that would be illegal.

P.S. All that stuff looks scary to humans, but it is trivial for a supercomputer to track a few objects.
P.P.S. Do you think Intel might be exaggerating a bit? Or do they not know how to compress video? Tesla has no lidar, so that's half of the 4,000 GB a day gone. Compressing the video gets it down to a couple of Netflix HD movies for the average driver per day. Lastly, you don't need every moment of data, only the moments where the car deviates from what the human did in a measurable way. In coding, this is like only looking at the diffs instead of the entire code base to find out what changed. The smarter the system gets, the fewer of these situations need to be analyzed by more powerful machines or, worst case, by people.
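
A minimal sketch of that "only keep the deviations" idea; the field names, the threshold, and the assumption that this is how Tesla triggers uploads are mine, not Tesla's:

```python
# Keep only the moments where the human driver measurably disagreed with
# what the driving software would have done (shadow-mode style).
def frames_worth_uploading(frames, steering_threshold=0.15):
    return [f for f in frames
            if abs(f["human_steering"] - f["autopilot_steering"]) > steering_threshold]

log = [
    {"t": 0.0, "human_steering": 0.02, "autopilot_steering": 0.03},
    {"t": 0.1, "human_steering": 0.40, "autopilot_steering": 0.05},  # disagreement
]
print(frames_worth_uploading(log))   # only the disagreement frame is kept
```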
 
I'll join in, based on:
  • a BS in computer science
  • being a software engineer actually employed working on AI/neural networks/artificial vision/autonomous driving
  • being an electrical engineer
Let's start with the data. The resolution and information coming from 8 cameras + radar + ultrasonics is way better than what a human driver has available. Someone earlier in this thread mentioned a human driver turning their head - well, with 8 eyes arranged around your head, you don't need to. There has been a lot of discussion about LIDAR vs. cameras, but that's swings vs. roundabouts - the raw sensor data is there and is better than a human's.

So, it comes down to interpretation of that data. Computers are dramatically faster than humans with numeric calculations, but humans have historically beaten computers hands down every time when it comes to pattern recognition (in sounds, vision, etc). Sure, there are specific narrow fields (e.g. number plate recognition, voice recognition, etc) where great improvement has been made in computer pattern recognition, but in general humans still have the lead here. Which would do better at finding easter eggs in a backyard - a robot or a 5 year old child?

This has been a hard nut to crack, with decades of stagnation in the field, but there has been progress in recent years. IMHO, the question is really not if computers will end up better at pattern recognition than humans, but when.

On my drive to work this morning, the highway was pretty simple. A big clear central divider. Clear lane markings. Cars all going the same way. There was one idiot stopped in the outside lane, but apart from that not too difficult. But, when I got to the streets near my office things got tricky. A truck reversing. A pedestrian crossing diagonally. A dog squatting down and having a *sugar* in the drain. Confusing signs. A nightmare junction where the only way to get out was to push slowly into traffic. An old lady pushing a metal trolley piled high with boxes, the wrong way up the street.

Will Elon's team be able to detect 99% of these hazards and navigate the car around them? Yes, sure. But 100%? 99.9%? I think that will take some time. The sensors are fine, but I worry about the CPU power needed to do this right. Highway is relatively easy - it is the last mile that worries me.

P.S. As a postscript, I'll mention that I have an AP 2.0 X and I didn't buy the full self-driving option. My concern is not so much the technical capabilities (as I said, I think we'll get there eventually - at least to a level where the computer does better than a human). My concerns are regulatory (who is legally the driver), insurance/liability (who is responsible for an accident), social (even if the stats show the AI to be safer, society has never trusted statistics and will react badly to each and every accident where the AI was in charge), and legal (sue the hell out of the manufacturer for each and every accident where the AI was in charge, as they have the deepest pockets). Sure, we might get to 99.5% in 2 years, but those other non-technical issues are going to take a hell of a lot longer than that to sort out.
 
(quoting the post above)


Brilliant, you replied to my post full of facts with a quote from Elon. That is amazing.
But isn't this thread supposed to be for software engineers only and based only on substantiated, independently verifiable facts? Wasn't the point of the thread to eliminate wishful thinking and statements from Elon and deal strictly in verifiable facts?

I hate to break it to you, but all the techniques and innovations in self-driving cars are public, in academic research and patents.
It's all out there.

That is why Mobileye's lawyers will come knocking at Tesla's door when the time is right.
You should do your homework sometime.

Lastly, no, they didn't recreate everything from scratch. They already had all the code used to process outputs from the EyeQ3, which took them a full year to build. They already had their code for fleet learning and crowd-sourced high-precision mapping from GPS. So they only had to recreate some of the functionality of the EyeQ3 chip and hook it up to their AP1 code.
 
(quoting the earlier reply about Mobileye's and Tesla's approach)



The TBs of data I'm referring to are what @Reciprocity's wishful scenario would require.
He claims that Tesla is doing end-to-end learning and sending raw pixels (billions of miles of pictures, according to him).

However, what you are talking about is metadata, which is good and solves the data-size problem.
I have mentioned this many times. Check one of my first posts in this forum.

The problem is that those data still need to be human-labeled and curated, and an algorithm must be created and fine-tuned for each specific problem.
This is exactly what Google, for example, does.

You still can't feed that information (raw pixels or metadata) to a network and get FSD back.

And Tesla does/has used DriveWorks. For your information, that end-to-end method is not DriveWorks.

In fact, DriveWorks was released in October 2016 to all of NVIDIA's OEMs.
 
I'd like you to open your mouth and offer your opinions.
Opinions are like belly buttons - everyone has one*

*This is the PG-13 version.

To be fair, I have nothing useful to add. I could give my non-professional opinion, but there are a lot of opinions in this thread, and it really doesn't matter what I think.
 
(quoting the post above)
It's a conversation - what you think matters to me now that I know you've worked professionally in machine learning.
 
Since we don't have any details on how well the AP2 sensors work, we are all guessing on this - based on what we see in the cars with the current software and what comments we can dredge up from Tesla (official or unofficial).

I've managed major hardware/software projects, and based on what we've seen from Tesla, can make some reasonable guesses on what we should expect.

First, it's likely Tesla ran a number of tests to validate the AP2 sensors can provide enough information to detect the objects required to implement EAP and FSD. What's most interesting about the FSD video Tesla posted is not the ability for the software to drive in FSD mode on a pre-programmed route - it's the videos of some of the sensors showing the real-time object recognition, demonstrating the sensors appear to be providing enough information to do the real-time object recognition needed for the EAP/FSD software.

The statements on Tesla's website claiming the AP2 hardware is sufficient to support FSD are probably based on the tests run before Tesla put the AP2 hardware into production.

However... Tesla is careful to note FSD is intended to operate in "almost all circumstances" - so they have some concerns there may be some circumstances when the AP2 sensors won't be good enough - such as in extreme weather conditions (though one could argue those conditions may not be safe for human drivers either).

Assuming the sensors are sufficient to operate well enough to support FSD in "almost all circumstances" - then the biggest unknown should be the amount of processing required to interpret the sensor data and translate that into driving actions. While AP2 has a huge increase in processor power, Musk's comments several months ago are reasonable - that it's possible Tesla may determine more processing power is needed, and fortunately, that should be a simple board swap - assuming the AP2 sensors are good enough to support FSD.

Like any major software project, it's difficult to anticipate the challenges that will be encountered during development or how long it will take to get the software to achieve the desired goals. Musk's projection of two years might or might not be right - it could be faster, though there's also a strong possibility it could take longer. And that's OK, as long as Tesla doesn't encounter any showstoppers and continues to move closer to achieving their goals.

When we ordered our S 100D in January, we checked the boxes for EAP and FSD, knowing there is a lot of uncertainty as to if or when Tesla will get FSD operational and approved for use. Even with the uncertainty for getting FSD, we expect that well before then we'll see improved EAP, taking advantage of the full sensor suite - and even in a "driver assist" mode, the software should be a considerable improvement over AP1 or EAP operating with only 4 of the cameras.
 
not an expert

I would point out that one of the differences between a radio-controlled vehicle and an autonomous vehicle is that a radio-controlled vehicle is human-supervised one to one, whereas an autonomous vehicle is human-supervised one to many.

Given a suitable emergency uplink, Tesla's method could be sufficient for humanless driving. (That is, if the uplink is fast enough for a head-office human to take over when an outlier case is detected.)

Lidar will come down tremendously in cost. The silicon required to shoot a laser and time its return is very small - significantly smaller than an NVIDIA-class GPU.
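
For context, the timing itself is just a time-of-flight calculation (numbers below are purely illustrative):

```python
# Back-of-envelope lidar ranging: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    return C * t_seconds / 2.0

print(range_from_round_trip(667e-9))   # ~100 m for a ~0.67 microsecond return
```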
 
(quoting the earlier reply about TBs of data and DriveWorks)

Your facts are impressive; they just are not relevant, and frankly you should leave my name out of it.

The facts really are simple. Tesla is a publicly traded company, which means its statements carry a great deal of weight and it is obligated to be truthful about what it knows at the time it says it. They are saying as loudly as possible that they have all but solved FSD and that their solution is different from everyone else's. The demo will be in Nov or Dec of this year. My guess is that they wouldn't even mention it unless they could do it today, because their words carry far greater consequence than those of some highly speculative forum dwellers like us.

To dismiss that as outright lies is not smart; it should be taken as a very important data point. Just like the known fact that Tesla has installed hundreds of millions of dollars of hardware in their cars to back those statements. No one does that on a whim, not even Elon "I'm going to Mars" Musk. If anything, Elon's hubris as it relates to FSD is another valid data point. Another example is the factory, where the software is far more complex than what's in the cars for FSD, though you have to take Elon's word for that.

The fact is none of us know what they are cooking up, and if we had the skills to contribute, we wouldn't be here blabbing about it.

Out of an abundance of respect for @calisnow, this is the last post on this thread for me; even if you @me, I'm not taking the bait. I have made my case, we will all find out soon enough, and I will happily eat a giant bucket of fried crow if I have been bamboozled by the Musk that is Elon.