
Tesla Autopilot HW3

Since @mongo is the positive one, let me be the realistic one: Tesla will NOT swap your sensors. Never going to happen.

The reason is the sheer volume of cars. If sensor upgrades are required for FSD, Tesla is toast, end of story.

The real question, therefore, is whether sensor upgrades are necessary. All we know is that the sensors were changed from 2.0 to 2.5, and that the new ones are different and better. Necessary? Unknown at present.

What do you think the logic behind using RCCC in the first place was? Just cost?
 
What do you think the logic behind using RCCC in the first place was? Just cost?
Tough question. Perhaps a combination of cost, current know-how and aspirations? Who knows.

Anyway, an RCCC camera can only see red plus monochrome. As this guy puts it:
One of the key color information is the red channel, which will help localize the region of interest (ROI) … . For example, Traffic Sign Recognition (TSR) applications can identify possible locations of red colored speed limit signs in the captured image.

Hence, the output generated by RCCC sensors is almost as detailed as a monochrome output and still provides the red color information as shown in Figure 2.

These cameras simply aren't used for anything but getting an intensity and red-color image!

So, although there's info about which pixels are more red than others (at quarter resolution, at least), there's no info on whether pixels are more greenish, yellowish or bluish.

So, without further info from outside the image, you can only get a greyscale image and a red image, and any combination of these. Since there's no info on anything other than overall intensity and red intensity, there's no other color info to recover.
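To make that concrete, here's a minimal sketch of what "demosaicing" an RCCC frame amounts to: you recover a near-full-resolution intensity image plus a quarter-resolution red image, and nothing else. The 2x2 tile layout (red in the top-left corner) is my assumption; Tesla hasn't published the actual sensor layout.

```python
import numpy as np

def demosaic_rccc(raw):
    """Split an RCCC mosaic into a full-res luminance image and a
    quarter-res red image. Assumes a 2x2 tile with red at (0, 0):
        R C
        C C
    (assumed layout -- the real sensor's tile order isn't public)."""
    h, w = raw.shape
    # Quarter-resolution red channel: just the red samples.
    red = raw[0::2, 0::2].astype(float)

    # Luminance: keep the clear pixels and fill each red site with the
    # average of its clear neighbours (crude nearest-neighbour fill).
    luma = raw.astype(float)
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            neighbours = [luma[ny, nx]
                          for ny, nx in ((y, x + 1), (y + 1, x), (y, x - 1), (y - 1, x))
                          if 0 <= ny < h and 0 <= nx < w]
            luma[y, x] = sum(neighbours) / len(neighbours)
    return luma, red
```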

Notice that external info might help here – you might train some classifier on a lot of images, and that classifier would then be able to "guess" colors. But that's no better than looking at a black-and-white photograph of a tree and saying that the leaves are green (because you know that leaves are usually green). That info doesn't come from the image, and hence it might be totally wrong: if you don't know the tree, it might be in bright yellow autumn foliage.

So the red channel is useful for identifying red-colored regions of interest, which are many and important in traffic (signs, stop lights, tail lights, brake lights).
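To illustrate, a toy version of that ROI pre-filter might look like this. The thresholds are made up, not from any real TSR pipeline, and `red_full` is assumed to be the red channel upsampled to match the luma resolution:

```python
import numpy as np

def red_roi_mask(luma, red_full, red_fraction=0.6, min_luma=30.0):
    """Flag pixels where red dominates the overall intensity -- the cheap
    pre-filter for red signs/lights described above. Thresholds are
    illustrative placeholders only."""
    with np.errstate(divide="ignore", invalid="ignore"):
        frac = np.where(luma > 0, red_full / luma, 0.0)
    return (frac > red_fraction) & (luma > min_luma)
```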

But blue is certainly an important color too, perhaps more so outside the US? (Blue traffic signs everywhere!) But even in the US, blue light beacons on emergency and response vehicles would probably be useful to identify quickly and easily.

(But now I’m touching NN territory, I’ll leave that to others.)
 
@strangecosmos, in all seriousness, why are you making up these imaginary shuttle examples when what we care about are actual cars in actual traffic?

Hypothetical examples are useful for illuminating concepts. Especially for exploring whether definitions of concepts are accurate. If you believe Level 4 systems are always better than Level 2 systems, then it should be impossible to think of exceptions where a Level 2 system would be better than a Level 4 system. But it is easy to think of exceptions. So clearly it is not true that Level 4 systems are always better than Level 2 systems.

A Level 4 system can be a glorified train, whereas a Level 2 system can be an intelligent agent with human-like driving abilities.

It’s possible to have definitions that are only approximate, and have exceptions, but are still useful. That’s fine. All I’m saying is that this isn’t true:

SAE Level 3 is better than SAE Level 2. Full Stop.

The SAE Levels don’t attempt to factor in the difficulty of the driving environment or the difficulty of the driving tasks being performed. A system doing easier tasks in an easier environment with less supervision can be classified as a higher level than a system doing harder tasks in a harder environment with more supervision. The distinction between easier and harder environments is important, and the distinction between easier and harder tasks is important. To say that a higher SAE Level always reflects better capabilities is to ignore these distinctions.

Point well-taken that the SAE has clarified that the Level reflects the system's design intent. But Waymo was just one of many examples I used, and those other examples still hold. In fact, basing the Levels on design intent makes the definitions even weaker.

Say you have a prototype robotaxi that fails 99 times out of 100 at everything it does. It fails to recognize 99 out of 100 stop signs. It fails to detect 99 out of 100 vehicles or pedestrians. It fails to steer, brake, accelerate, or signal correctly 99 out of 100 times. The safety driver does nothing but take over constantly. This would be a Level 4 system. Yet there are Level 2 systems like Autopilot that perform much better on some of the same tasks.

To me, the most important question is: what driving tasks can the system perform, and what is its success rate? This is more important than the intended level of supervision of the system, which doesn’t necessarily reflect its actual capabilities today. For all we know, the intended level of supervision may be impossible to safely reach, ever.

Thinking only about design intent also obscures the safety/convenience trade-off. Presumably it would be safer if Audi made their system Level 2 instead of Level 3. Yet they’ve apparently decided it’s safe enough to offer some extra convenience. This is a practical/ethical/business decision. Say Audi changes their mind and decides to require human supervision for their system. Does that make the underlying technology magically worse? Does the neural network’s accuracy at classifying vehicles suddenly drop? Because a CEO changed their mind? No, of course not.

Two automakers might buy the exact same system from Mobileye, with the exact same technical capabilities. The more cautious manufacturer might decide to use it for a Level 2 system, and the less cautious one might use it for a Level 3 system. A Level 2 system and a Level 3 system—the exact same tech, but a different choice by two different CEOs.

It would be illogical to say EyeQ4 is better than EyeQ4. So clearly Level 3 systems are not automatically technically superior to Level 2 systems.

I was surprised that this was controversial at all when I brought it up. I almost didn’t say anything because I worried I might be making a point that everyone already found obvious.

I’m not even sure the SAE disagrees, actually. Do they actually claim higher-level systems are always better than lower-level systems? If the intended purpose of the SAE Levels is to assign legal liability, then the focus on human supervision makes sense. Who is legally liable is a different question than which systems have better technology.
 
Tough question. Perhaps a combination of cost, current know-how and aspirations? Who knows.

There is also the possibility that the AP2 camera selections were affected by the Mobileye collaboration; after all, they copied Mobileye’s EyeQ4 tri-focal configuration and likely had a place for an EyeQ (4?) chip on the AP2 board that was hastily removed after the split.

Another possibility is that they were affected by the Nvidia collaboration, as it is widely believed the infamous FSD video was using Nvidia demo code. Both Nvidia and Mobileye have, of course, publicly used such black-and-white-ish cameras with their systems.

Since going solo, this calculation may then have changed by AP2.5...
 
Do they actually claim higher-level systems are always better than lower-level systems?
No, the SAE does not claim that, if your definition of "better" is a statistically lower failure rate. In fact, the SAE doesn't bother with specifications or statistics at all; its purpose and content is to be a taxonomy that classifies automated systems based on what role the human is intended to have wrt the dynamic driving tasks (steering, accelerating, braking and monitoring).

When it comes to shuttles and trains, you win the argument in principle. But you are talking about something nobody cares about. What we care about is self-driving technology for cars/vehicles that's intended for real-life traffic: on highways and in urban and rural areas. I don't think it's a relevant argument in this context that a shuttle or train at an airport or on a factory floor can be deemed L4. It's irrelevant, simply because we're talking about the automotive industry and the marketplace where Tesla, Waymo, Mobileye et al. are the relevant players.

Aside from this, I think J3016 is very clear, logical and useful. It's not a regulation, nor a specification, but simply a taxonomy. We should treat it as such, and pay attention when the big players use it to describe their system.
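If it helps, the whole taxonomy fits in a few lines. This is my rough paraphrase of the role split, simplified and not the SAE's own wording:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    """J3016 in miniature: each level only says who is *intended* to do
    what. Nothing here is about failure rates or technical quality."""
    level: int
    control: str     # lateral/longitudinal vehicle control
    monitoring: str  # monitoring the driving environment
    fallback: str    # who must be ready when the system can't cope

J3016 = [
    SaeLevel(0, "human",  "human",  "human"),
    SaeLevel(1, "shared", "human",  "human"),
    SaeLevel(2, "system", "human",  "human"),
    SaeLevel(3, "system", "system", "human, on request"),
    SaeLevel(4, "system", "system", "system, within its domain"),
    SaeLevel(5, "system", "system", "system, everywhere"),
]
```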
 
There is also the possibility that the AP2 camera selections were affected by the Mobileye collaboration; after all, they copied Mobileye’s EyeQ4 tri-focal configuration and likely had a place for an EyeQ (4?) chip on the AP2 board that was hastily removed after the split.

Another possibility is that they were affected by the Nvidia collaboration, as it is widely believed the infamous FSD video was using Nvidia demo code. Both Nvidia and Mobileye have, of course, publicly used such black-and-white-ish cameras with their systems.

Since going solo, this calculation may then have changed by AP2.5...
That's purely speculation, though, right?

Let's hope someone with more knowledge of automotive camera systems can weigh in; I think Tesla's camera sensor (color filter) change is extremely interesting.

Gut feeling: it didn't have anything to do with cost. It could just be a minor optimization (let's hope so); still, it would be interesting to know why Tesla did this.
 
That's purely speculation, though, right?

Let's hope someone with more knowledge of automotive camera systems can weigh in; I think Tesla's camera sensor (color filter) change is extremely interesting.

Gut feeling: it didn't have anything to do with cost. It could just be a minor optimization (let's hope so); still, it would be interesting to know why Tesla did this.

I believe @verygreen pointed out that the camera sensors are physically the same except for the color filter, right? They would still end up writing code for AP2 cars to convert the colorspace to match AP2.5, right? Seems like they might as well have stuck with the AP2 filter then, unless they would rather swap out the cameras at a later point (doubtful).
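Something like this is what I imagine that conversion shim would look like. This is purely my speculation, not anything known about Tesla's actual code, and the function name and "blue" stand-in are made up; the fact that blue has to be fabricated is sort of the point:

```python
import numpy as np

def rccc_to_rccb_like(luma, red):
    """Hypothetical adapter: pack RCCC-derived channels into an
    R/C/C/B-style layout an AP2.5-trained network might expect.
    The 'blue' channel is fabricated, which is exactly why a shim
    like this would be lossy."""
    # Upsample the quarter-res red channel to full resolution.
    red_full = np.kron(red, np.ones((2, 2)))[:luma.shape[0], :luma.shape[1]]
    # Crude stand-in for blue: whatever intensity red doesn't explain.
    blue_guess = np.clip(luma - red_full, 0.0, None)
    return np.stack([red_full, luma, luma, blue_guess])
```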
 
When it comes to shuttles and trains, you win the argument in principle. But you are talking about something nobody cares about. What we care about is self-driving technology for cars/vehicles that's intended for real-life traffic: on highways and in urban and rural areas. I don't think it's a relevant argument in this context that a shuttle or train at an airport or on a factory floor can be deemed L4. It's irrelevant, simply because we're talking about the automotive industry and the marketplace where Tesla, Waymo, Mobileye et al. are the relevant players.

The example we started with is Audi’s Traffic Jam Pilot. This system performs the same driving tasks as Autopilot (lane keeping and adaptive cruise control), in the same environment (divided highways), under the same conditions (dense traffic, low speeds).

You said:

If Audi has L3 - I don't care how restricted - it's something Tesla doesn't have. And yeah, Tesla might have an L2 feature set that's better and/or broader than Audi has, IDK. But the fact remains: L3 is true autonomy while L2 is not. Because with L3 the system not only monitors, but is responsible for monitoring the environment

We don’t have any information that indicates Tesla couldn’t lift Autopilot’s requirement for human supervision in the same circumstances as Traffic Jam Pilot. We can’t compare Autopilot’s success rate on the same tasks in the same circumstances to Traffic Jam Pilot’s. If Traffic Jam Pilot’s success rate is the same or worse, then Audi’s underlying tech is the same or worse. We don’t know, and we can’t know based on whether the system is Level 2 or Level 3.

Lifting the requirement for human supervision doesn’t make your neural net more accurate, or your path planning and control algorithms better written. It just means you are willing to sacrifice some amount of safety for some amount of convenience. This choice isn’t necessarily bad; my point is simply that this choice doesn’t tell us how good the technology is.

Two car companies could use the exact same technology from Mobileye and other suppliers. One car company could launch a Level 2 system, and the other company could launch a Level 3 system, simply because one company has a more risk-averse CEO and another company has a less risk-averse CEO. In the two cases, the tech is identical, but the SAE Level is different. So the SAE Level doesn’t tell us who has better tech.
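Reduced to its logical skeleton, the scenario looks like this. A toy illustration, with a made-up accuracy figure:

```python
# Toy illustration: identical technology, different SAE label.
mobileye_stack = {"chip": "EyeQ4", "vehicle_detection_accuracy": 0.97}  # made-up figure

cautious_oem = {"tech": mobileye_stack, "sae_level": 2}  # driver must supervise
bold_oem     = {"tech": mobileye_stack, "sae_level": 3}  # system monitors itself

assert cautious_oem["tech"] is bold_oem["tech"]            # same technology...
assert cautious_oem["sae_level"] != bold_oem["sae_level"]  # ...different Level
```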
 
But it does tell us something about how confident the manufacturer is about its technology!

If we’re relying on that as our source of ground truth, why not just survey executives from different companies on their level of confidence in their technology? Whoever is the most confident has the best technology!
 
That's purely speculation, though, right?

Let's hope someone with more knowledge of automotive camera systems can weigh in; I think Tesla's camera sensor (color filter) change is extremely interesting.

Gut feeling: it didn't have anything to do with cost. It could just be a minor optimization (let's hope so); still, it would be interesting to know why Tesla did this.

Well, of course, but think about it: if you have an EyeQ4-based AP2 ready to go in early 2016, with a supply lined up for the first 6-9 months of production, you aren’t suddenly buying different cameras/filters in their place, not if you are Tesla anyway... you use them up and only then make the swap, remove the EyeQ4 and hastily make up some code to compensate...
 
If we’re relying on that as our source of ground truth, why not just survey executives from different companies on their level of confidence in their technology? Whoever is the most confident has the best technology!

This one is different, though, because Level 3 and above implies manufacturer liability. And companies do take that seriously. Just as they take the SEC seriously.

Tesla may be a little different.
 
If we’re relying on that as our source of ground truth, why not just survey executives from different companies on their level of confidence in their technology? Whoever is the most confident has the best technology!
What? I said the manufacturer's assignment of a Level tells us something about their confidence in their system. Specifically it tells us what their current system is designed for wrt human intervention (if any). You're all over the place with your imagination :)
 
What? I said the manufacturer's assignment of a Level tells us something about their confidence in their system. Specifically it tells us what their current system is designed for wrt human intervention (if any). You're all over the place with your imagination :)

Uber ATG is an example where a company recklessly disabled critical safety systems and ended up killing someone. We should not automatically trust the companies who push ahead the most aggressively to have the safest technology.

Often, the most confident people are the most wrong. Blindly trusting people’s confidence is not evidence-based, and it’s not critical thinking.

Confidence ≠ truth
 
Here’s an example. Elon thinks if full autonomy is 1% safer than humans, we should deploy it. Amnon Shashua thinks it has to be 1000x safer, or society will reject it. So, if Tesla and Mobileye stay true to these philosophies, and if they measure safety in the same way, Mobileye will be a lot more cautious in deploying full autonomy than Tesla. If Tesla and Mobileye both have technology that is equally safe — let’s say 50% safer than humans — Tesla will deploy it and Mobileye won’t.

You are dealing with two distinct variables: how safe the technology is and how precautionary the company’s leadership is. Actually, three variables: there’s also the specific evidence each company uses to assess the safety of its technology.
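As a toy decision rule, using the hypothetical numbers above (the function and figures are mine, purely for illustration):

```python
def would_deploy(safety_ratio, threshold):
    """safety_ratio: how many times safer than the average human driver
    (1.0 = parity). threshold: the company's deployment bar."""
    return safety_ratio >= threshold

tech_safety = 1.5  # the hypothetical '50% safer than humans' from above

print(would_deploy(tech_safety, 1.01))    # Elon-style bar (~1% safer): True
print(would_deploy(tech_safety, 1000.0))  # Shashua-style bar (1000x): False
```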
 
The reason is the sheer volume of cars. If sensor upgrades are required for FSD, Tesla is toast, end of story.

Why so? Even at $2k of added cost, swapping six camera modules at the same time as the AP module swap does not seem difficult or cost-prohibitive.
Swapping the radar would be a bit more of a pain due to the bumper cover, but trained mobile service teams could handle it quickly and efficiently.
 
Why so? Even at $2k of added cost, swapping six camera modules at the same time as the AP module swap does not seem difficult or cost-prohibitive.
Swapping the radar would be a bit more of a pain due to the bumper cover, but trained mobile service teams could handle it quickly and efficiently.

The cost of time (plus opportunity cost), labor, and parts would exceed the Musk-quoted $2k FSD price.
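A back-of-envelope version of that claim; every number below is a placeholder I made up, not anything from Tesla:

```python
# Hypothetical per-car retrofit cost -- all figures are assumptions.
camera_parts  = 6 * 60    # six camera modules at an assumed $60 each
radar_parts   = 200       # radar unit (assumed)
computer_swap = 500       # HW3 board itself (assumed)
labor         = 6 * 150   # ~6 shop hours at an assumed $150/hr
logistics     = 300       # scheduling, transport, opportunity cost (assumed)

per_car = camera_parts + radar_parts + computer_swap + labor + logistics
print(per_car)            # 2260 -- already past the $2k FSD price

fleet = 500_000           # rough, assumed HW2/2.5-sized fleet
print(per_car * fleet)    # ~1.1 billion dollars across the fleet
```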
 