Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

FSD V GM CRUISE

There's a difference between making an informed decision and assuming that reading some articles gives you qualifications you don't have.
That is a strawman you have created because I don't see anyone claiming reading articles gave them qualifications.

That said, 1.2 MP sensors are not adequate for an L4 or L5 ADS. You don't need a PhD in electrical engineering or machine learning to come to that conclusion; a close read of the technological advances in this field and some critical thinking should tell anyone that. For example, Elon Musk said AP1 hardware was capable of L5 autonomous driving, yet they are now on revision 3.0, which is not looking like it will be enough either. According to some reports, Tesla may upgrade to 5 MP sensors in Hardware 4.0.
 
That is a strawman you have created because I don't see anyone claiming reading articles gave them qualifications.

That said, 1.2 MP sensors are not adequate for an L4 or L5 ADS. You don't need a PhD in electrical engineering or machine learning to come to that conclusion; a close read of the technological advances in this field and some critical thinking should tell anyone that. For example, Elon Musk said AP1 hardware was capable of L5 autonomous driving, yet they are now on revision 3.0, which is not looking like it will be enough either. According to some reports, Tesla may upgrade to 5 MP sensors in Hardware 4.0.

I'm not convinced that a camera's MP count is the "main" problem. You can always throw more hardware at a problem, but what I've heard Elon and the team talk about is refining the input (pixels): taking the raw value of each pixel without running it through the filters humans need to view an image. It sounds like they are finding ways to get the most out of every pixel.

Will it work? Who knows...

Although you don't need a PhD in electrical engineering to come to a conclusion, some kind of proof or data helps :). I'm seriously interested in seeing a published paper explaining why 1.2 MP sensors are not adequate for L4 or L5, and then seeing whether Tesla (in their published videos/documents) has taken steps to remedy any shortcomings (like the raw-pixel processing they already talked about).
 
Sleepydoc...you are NOT getting paid by Elon/Tesla to cover up their shortcomings, misleading statements, horrific communications, multiple failures at meeting established timelines, etc. Look around..read...the chorus of voices of frustration with F

Notice that NO ONE is disagreeing with me?
Since you’re nothing more than a troll, I rarely respond to your posts, but I’d simply suggest you take a look at my posts. You’ll see I’m no fanboy. I’m also not a hater. I’m simply honest.
 
Sleepydoc...you are NOT getting paid by Elon/Tesla to cover up their shortcomings, misleading statements, horrific communications, multiple failures at meeting established timelines, etc. Look around..read...the chorus of voices of frustration with F

Notice that NO ONE is disagreeing with me?
Nobody disagrees with you because then you won’t leave. You are the only one entertained by your input, which is very little.
 
Notice that NO ONE is disagreeing with me?
So what? Doesn't mean they agree, just means most people are bored with the continual "I know xxx" claims again and again and again...

Go look through the old posts before FSD beta was out. People were "proving" that the car would not be able to see approaching cars at all, and would drive out into traffic all the time, and who knows what else. Where are those "proofs" now?

It's possible, of course, that the car would benefit from additional cameras (headlight-side ones have been suggested), but that doesn't mean the car cannot be made a competent driver without them. Right now the car in many ways has better vision than human drivers.
 
So what? Doesn't mean they agree, just means most people are bored with the continual "I know xxx" claims again and again and again...

Go look through the old posts before FSD beta was out. People were "proving" that the car would not be able to see approaching cars at all, and would drive out into traffic all the time, and who knows what else. Where are those "proofs" now?

It's possible, of course, that the car would benefit from additional cameras (headlight-side ones have been suggested), but that doesn't mean the car cannot be made a competent driver without them. Right now the car in many ways has better vision than human drivers.
That’s still not a disagreement. 🤣
 
I did not misunderstand. You said:

And that is wrong. Consumer vehicles do use the same types of hardware as GM and Waymo, just in different configurations; robotaxis and consumer vehicles do not have to use the same sensor configuration. L4 on city streets in consumer vehicles is not ready and will not be ready for years, but robust door-to-door (D2D) L2 and L3 systems will appear in the next couple of years. There are different types of lidars, and for consumer vehicles:

Audi has been using the Valeo SCALA lidar, which spins but is mounted in the front grille. It is lower resolution than something Waymo would use but is estimated to cost around $600. Mercedes will use the second-generation SCALA in its L3 system, also mounted in the front grille.


NIO will use the Innovusion Falcon, which comes standard on their ET7, a car that costs no more than ~$80K.


Volvo will use Luminar lidar in its sensor configuration for driver assistance and ADS. Consumer cars will use solid-state or flash lidars rather than the roof-mounted spinning lidars on robotaxis. It does not really matter what a robotaxi looks like, and sensor configuration and cost are less of an issue there.

They are talking about the cost of systems that are not yet produced at scale. Mercedes Drive Pilot, which comes with one front-facing lidar, costs $5,300 on the S-Class and $7,900 on the EQS. Consumer cars will not have the same sensor configuration as a robotaxi fleet operated by a company like Cruise or Waymo.
Not sure how it’s not clear that consumer cars today do not have the same hardware as existing Waymo and GM robotaxis (e.g., in SF). Yes, I’m very familiar with the announced LiDAR-equipped vehicles, but you are comparing future models (which aren’t actually being advertised as L4 city-robotaxi capable yet) to the present day. Among present-day consumer vehicles, we don’t have L4 robotaxis. But the only issue with the argument being made is that Tesla (available for purchase) is being compared with Waymo/GM robotaxis (not available for purchase).
 
Not sure how it’s not clear that consumer cars today do not have the same hardware as existing Waymo and GM robotaxis (e.g., in SF). Yes, I’m very familiar with the announced LiDAR-equipped vehicles, but you are comparing future models (which aren’t actually being advertised as L4 city-robotaxi capable yet) to the present day. Among present-day consumer vehicles, we don’t have L4 robotaxis. But the only issue with the argument being made is that Tesla (available for purchase) is being compared with Waymo/GM robotaxis (not available for purchase).
There are several cars available today with lidar, not just announced ones: NIO ET7, ET5, ES5; Huawei Arcfox HI; Xpeng P5; Lucid Air; etc. I could keep going.
 
I'm not convinced that the MP of a camera is the "main" problem.
Who said MP was the "main" problem? I said 1.2 MP is not adequate for an L4 or L5 ADS. Using only cameras is currently not adequate for a safe, deployable L4 ADS. The current state of the art requires multimodal sensor fusion from complementary sensors. Just look at all the deployed L4 ADSs and what they all have in common.
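As a toy illustration of why complementary sensors help (not anything from Tesla's or Waymo's actual stacks), here is a minimal inverse-variance fusion of two independent range estimates; all the numbers are made up:

```python
def fuse_estimates(measurements):
    """Inverse-variance weighted fusion of independent range estimates.

    Each sensor's estimate is weighted by its confidence (1/variance),
    so the fused estimate has lower variance than any single sensor.
    """
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * x for w, (x, _) in zip(weights, measurements)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Illustrative, assumed numbers: a camera range estimate (noisy at long
# range) fused with a radar range estimate (precise in range).
camera = (52.0, 25.0)  # mean 52 m, variance 25 m^2
radar = (50.0, 1.0)    # mean 50 m, variance 1 m^2
est, var = fuse_estimates([camera, radar])
print(f"fused: {est:.2f} m, variance {var:.2f} m^2")
```

The fused variance is always below the smallest input variance, which is the basic statistical argument for adding a complementary sensor rather than relying on one modality.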
You can always throw more hardware at a problem
You are not throwing more hardware at the problem; you are using better hardware that is necessary to achieve the task. Bigger and bigger NNs require more and better compute. We know from greentheonly that Tesla currently saturates the compute onboard AP3 hardware, with little room for redundancy.

AP1 = Mobileye EyeQ3 Processor
AP2 = 1 Nvidia Parker SoC, 1 Nvidia Pascal GPU, 1 Infineon TriCore CPU
AP2.5 = 2 Nvidia Parker SoC, 1 Nvidia Pascal GPU, 1 Infineon TriCore CPU
AP3 = 2 Tesla SoC

AP4 and Dojo are in the works. Dojo, a very powerful supercomputer, is necessary to solve the problem. Autonomous driving is a data-intensive problem.
but from what I've heard Elon and team talk about is refining the input (pixels) where they take the raw values of each pixel without running it through filters us humans need to view them. It sounds like they are finding ways to get the most out of every pixel.
Everyone is finding ways to get the most out of every pixel. Tesla's NN architecture references research papers published by Facebook, Google, Waymo, etc. Tesla is not doing anything that anyone else in the field isn't; others are just using much better sensors as well.
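For what it's worth, the "raw pixel" idea is usually about skipping the image signal processor (demosaicing, tone mapping, etc., which throw away information for human viewing) and feeding the network raw photosite values. A minimal sketch of splitting a raw RGGB Bayer frame into per-color planes, purely illustrative and not Tesla's actual pipeline:

```python
import numpy as np

def bayer_to_planes(raw):
    """Split a raw RGGB Bayer mosaic into its four color planes.

    Instead of demosaicing the frame into an RGB image for human
    viewing, the network can consume the raw per-photosite values
    directly, as four quarter-resolution planes.
    """
    r = raw[0::2, 0::2]   # red photosites
    g1 = raw[0::2, 1::2]  # green photosites on red rows
    g2 = raw[1::2, 0::2]  # green photosites on blue rows
    b = raw[1::2, 1::2]   # blue photosites
    return np.stack([r, g1, g2, b])  # shape: (4, H/2, W/2)

# Illustrative: a fake 960x1280 12-bit raw frame
raw = np.random.randint(0, 4096, size=(960, 1280), dtype=np.uint16)
planes = bayer_to_planes(raw)
print(planes.shape)  # (4, 480, 640)
```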
Will it work? Who knows...
Tesla knows; that is why AP4 is in the works. They are reported to have signed a deal with Samsung to provide 5 MP sensors, and there is a slight possibility that 4D imaging radar will make an appearance.

Although you don't need a PhD in electrical engineering to come to a conclusion, some kind of proof or data helps :). I'm seriously interested in seeing a published paper explaining why 1.2mp sensors are not adequate for L4 or L5. Then seeing if Tesla (in their published videos/documents) has taken steps to remedy any shortcomings (like the raw pixel stuff they already talked about).
Because MP relates to visibility: being able to track and identify people and objects at greater distances. Tesla's current limit is a lack of bandwidth and compute to process higher-resolution data output, which is something the AP4 architecture should be able to handle. They also sold 1.2 MP and AP2 hardware with promises of L5 autonomous driving.
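A back-of-the-envelope pinhole-camera calculation shows how resolution maps to pixels on target at distance. The FOV, target size, and sensor widths below are assumed, illustrative values, not Tesla's actual camera specs:

```python
import math

def pixels_on_target(h_res_px, hfov_deg, target_width_m, distance_m):
    """Approximate horizontal pixels covering a target of a given width
    at a given distance, using a simple pinhole-camera model."""
    # Angular width of the target as seen from the camera (radians)
    angular_width = 2 * math.atan(target_width_m / (2 * distance_m))
    # Pixels per radian of horizontal field of view
    px_per_rad = h_res_px / math.radians(hfov_deg)
    return angular_width * px_per_rad

# Assumed numbers: a 1280-px-wide (~1.2 MP) sensor vs a 2896-px-wide
# (~5 MP) sensor, both with a 50-degree horizontal FOV, looking at a
# 0.5 m wide pedestrian torso at 200 m.
for h_res in (1280, 2896):
    px = pixels_on_target(h_res, 50, 0.5, 200)
    print(f"{h_res}-px-wide sensor: ~{px:.1f} px on target")
```

Under these assumptions the 1.2 MP sensor puts only a few pixels on a distant pedestrian, and pixels on target scale linearly with horizontal resolution, which is the intuition behind wanting higher-MP sensors for long-range detection.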

Not sure how it’s not clear that consumer cars today do not have the same hardware as existing Waymo and GM robotaxis (e.g., in SF).
My literal first example was a car from 2017 and cars being produced right now. My contention is not about what level of autonomy these cars have but about the idea that consumers can't afford cars with the type of sensors on Waymo and GM vehicles. Those same sensors have been, and are, in consumer cars right now: lidar, radar, high-resolution cameras, ultrasonics, etc.

Yes, I’m very familiar with the announced LiDAR-equipped vehicles, but you are comparing future models (which aren’t actually being advertised as L4 city-robotaxi capable yet) to the present day. Among present-day consumer vehicles, we don’t have L4 robotaxis. But the only issue with the argument being made is that Tesla (available for purchase) is being compared with Waymo/GM robotaxis (not available for purchase).
My literal first example was a car from 2017 and cars being produced right now.
 
Who said MP was the "main" problem? I said 1.2 MP is not adequate for an L4 or L5 ADS. Using only cameras is currently not adequate for a safe, deployable L4 ADS. The current state of the art requires multimodal sensor fusion from complementary sensors. Just look at all the deployed L4 ADSs and what they all have in common.
Well, that's a lot of claims... care to explain the reasoning and/or quote your sources here?
 
Well, that's a lot of claims... care to explain the reasoning and/or quote your sources here?
How can you prove that something doesn't exist?
The reasoning is that no one has claimed to have deployed such a system. You would think people would notice such vehicles driving around on public streets without drivers. Anything is possible, I suppose.
 