Tesla Software Version 10.0

Lucky, my eyes are roughly at the same level as the pillar camera so I don't have to worry about hitting the glass with the top of my head.

It's notable that I'm much more likely to be able to identify moving objects through another car's windows than a small neural net on HW2.5/3.0 is. In addition, I am able to determine when my view is actually obscured, which is critical to know, and I am not sure whether the car understands what is happening when it can't see.

(what it sees is wider than what you see on the center screen)

Really? How do you know this? It annoys me that they wouldn't provide the full camera output on the center screen, if true! I need to see as wide an angle as possible, so why would they not put that on the center screen?
 
Really? How do you know this? It annoys me that they wouldn't provide the full camera output on the center screen, if true! I need to see as wide an angle as possible, so why would they not put that on the center screen?

Mostly because the image shown on the screen has changed between updates. After the last update, I can now see my license plate in the image, which was not in view before the update. The camera is a very wide fish-eye; the entire image would not be ideal for the main use of the camera in backing up and parking, but it'd be nice to have the option of zooming out for instances like this.
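For anyone who wants to play with the idea, a "zoom out" option is really just a choice between showing the full wide-angle frame and a center crop of it. Here's a minimal sketch against a generic webcam/fisheye feed - the camera index, crop fraction, and key bindings are arbitrary assumptions, not anything from Tesla's software:

```python
# Sketch only: toggle between the full wide-angle frame and a tighter "parking" crop.
# Assumes a generic camera at index 0; the crop fraction is an arbitrary value.
import cv2

CROP_FRACTION = 0.6  # keep the middle 60% of the frame for the "zoomed in" view

cap = cv2.VideoCapture(0)
show_full = False  # press 'z' to toggle between cropped and full views

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if show_full:
        view = frame  # the whole fisheye field of view
    else:
        h, w = frame.shape[:2]
        ch, cw = int(h * CROP_FRACTION), int(w * CROP_FRACTION)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        view = cv2.resize(frame[y0:y0 + ch, x0:x0 + cw], (w, h))
    cv2.imshow("backup view (sketch)", view)
    key = cv2.waitKey(1) & 0xFF
    if key == ord('z'):
        show_full = not show_full
    elif key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```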
 
I’m really curious whether people actually believe that adding features like YouTube, Netflix streaming, etc. is delaying FSD in any way.

There is practically no overlap between the expertise needed for autonomy and the expertise needed to stream video/audio through HTML5.

Moving every UX programmer over to the Autopilot team is like asking nine women to work together and deliver a baby in one month.
 
my eyes are roughly at the same level as the pillar camera so I don't have to worry about hitting the glass with the top of my head.

The pillar cameras generally face forward, so they aren't much use for backing up.

Mostly because the image shown on the screen has changed between updates. After the last update, I can now see my license plate in the image, which was not in view before the update. The camera is a very wide fish-eye; the entire image would not be ideal for the main use of the camera in backing up and parking, but it'd be nice to have the option of zooming out for instances like this.

Indeed, it does seem pretty wide - a lot wider than the picture suggests - though I don't think it is any wider than it was before; you definitely can see the plate. I remember seeing the top of the trunk lid in the corners before, and it seems I can still see the same amount of it.

The concern I have is this, though:

Initially, when backing up, the car will be able to see traffic on either side. (A lot more angle than I thought.) But as it backs up further, it will tend to have larger and larger blind spots! It's obviously physically impossible to see around the rear bumper edges (and it lacks a couple of degrees even before that, as you can see, at least on one side, for me).

So it's a little different than I thought - it's not what the car initially sees that is the issue, it's whether it can continue to see. I'm not sure how the car is going to get around this unless it remembers what it has seen. At some point it'll be far enough out that other cameras can see, but that leaves a significant period where it can't see traffic.

I'm not convinced that parking nose out will help much, though. First, the radar has field-of-view limitations as well - probably more so than the backup camera - so it will suffer from the same increasing-blindness problem. The only benefit I can see to parking nose out is that the cameras become helpful earlier: pulling out forwards, you "only" have to come out as far as the windshield (where the wide-angle front camera becomes useful), while backing out, the only other cameras facing the right direction are on the front quarter panels! You have to come out a long way before they are useful! (It's even possible the pillar cameras might be useful first, depending on the situation with blocking vehicles.)
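To put rough numbers on the "keeps seeing vs. stops seeing" point, here is a toy 2-D check of how far off the rear camera's straight-back axis an approaching car ends up as you back further into the aisle. The 160° field of view, the 3 m lane offset, and the distances are all made-up illustrative values, not Tesla specs:

```python
# Toy 2-D geometry: how far off the backup camera's straight-back axis is an
# approaching cross-traffic car? All numbers here (assumed 160 deg FOV, 3 m lane
# offset, distances) are illustrative guesses, not Tesla specifications.
import math

ASSUMED_FOV_DEG = 160.0   # hypothetical horizontal FOV of the rear camera
LANE_Y = -3.0             # aisle traffic lane centered 3 m out from the parked row (y = 0)

def off_axis_deg(camera_y, target_x, target_y):
    """Angle between the camera's straight-back axis (-y) and the direction to the target.
    The camera sits at x = 0."""
    bearing = math.degrees(math.atan2(target_y - camera_y, target_x))  # 0 deg = +x
    return abs((bearing + 90.0 + 180.0) % 360.0 - 180.0)               # -90 deg = straight back

for backed_out in (0.3, 3.5):          # rear bumper barely out vs. well into the aisle
    for dist in (15.0, 5.0):           # approaching car this far away down the aisle
        angle = off_axis_deg(-backed_out, -dist, LANE_Y)
        status = "inside" if angle <= ASSUMED_FOV_DEG / 2 else "OUTSIDE"
        print(f"backed out {backed_out:.1f} m, car {dist:.0f} m away: "
              f"{angle:.0f} deg off-axis ({status} the assumed FOV)")
```

In this toy setup, once the bumper is a few meters into the aisle the approaching car sits at or past 90° off the camera axis, which no rear-facing lens covers regardless of the FOV you assume - that's the growing blind spot.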

Posting a couple of pictures to show how wide the angle is. The second and third pictures show roughly what the camera can see; the car is sitting about 18 inches in front of the cement line at the edge of the garage.
[Attached images: IMG_5350.jpg, IMG_5351.jpg, IMG_5352.jpg]
 
Adding to this, I don't see any reason why Tesla won't just add camera-based rear cross-traffic alerting once they have Enhanced Summon kinda-sorta working. They'll need to be able to identify vehicles in the periphery of the camera view to keep Enhanced Summon from hitting cars right away when backing out, and if they're doing that, they may as well warn you about them when you're actually driving! It won't be as good as radar-based RCTA, but it's better than no warning at all...maybe. I guess it could be worse to implement it, since it might cause people to not pay attention (rely on the system)...and since backing out creates larger and larger blind spots, the system isn't reliable like a radar-based system, and things could go south in a hurry. So maybe I do see a reason they won't add it...
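For what it's worth, the alert logic itself is the easy half; the hard part is trusting the detector. Here's a rough sketch of the "warn if something is cutting across your backing path" decision, where the Track fields, the thresholds, and the rcta_warnings helper are all hypothetical stand-ins, not anything Tesla exposes:

```python
# Rough sketch of the decision logic for camera-based rear cross-traffic alerting.
# Everything here is hypothetical: Track is a stand-in for whatever a neural-net
# detector/tracker would output, and the thresholds are arbitrary illustrative values.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    lateral_m: float          # + = to our right, - = to our left, 0 = on our centerline
    longitudinal_m: float     # how far behind our rear bumper
    lateral_speed_mps: float  # + = moving toward our right (+lateral direction)

WARN_TTC_S = 2.5      # warn if the object could reach our path within ~2.5 s
MAX_RANGE_M = 20.0    # ignore objects farther behind us than this

def rcta_warnings(tracks):
    """Return ids of tracks that look like they will cross our backing path soon."""
    warnings = []
    for t in tracks:
        if t.longitudinal_m > MAX_RANGE_M or t.lateral_speed_mps == 0.0:
            continue
        closing = (t.lateral_m > 0 > t.lateral_speed_mps) or (t.lateral_m < 0 < t.lateral_speed_mps)
        if not closing:
            continue
        time_to_path = abs(t.lateral_m) / abs(t.lateral_speed_mps)
        if time_to_path <= WARN_TTC_S:
            warnings.append(t.track_id)
    return warnings

# Example: a car 8 m to our left and 4 m back, moving right at 4 m/s (2 s from our path),
# plus a slow car 15 m to the right that is not an immediate threat.
print(rcta_warnings([Track(1, -8.0, 4.0, 4.0), Track(2, 15.0, 3.0, -1.0)]))  # -> [1]
```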
 
@verygreen uploaded a video of this exact scenario, pulling out of a parking spot backwards with footage from the cameras including the backup, repeaters, and pillar cams.


My worry is the obscuring of the backup camera, especially in the rain (or snow). Not sure how the car will be able to back out when the camera looks like this:

[Attached image: cameravisibility.jpg]
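For what it's worth, a smeared or blocked lens is at least detectable in software: rain on the glass kills the high-frequency detail in the frame, so something as crude as the variance of the Laplacian drops sharply. A minimal sketch - the threshold is an arbitrary assumption, and this is not a claim about how Tesla actually handles it:

```python
# Crude "is the lens obscured?" check: a rain-smeared or blocked lens removes
# high-frequency detail, so the variance of the Laplacian drops sharply.
# The threshold is an arbitrary illustrative value that would need per-camera tuning.
import cv2

BLUR_THRESHOLD = 50.0

def lens_looks_obscured(frame_bgr, threshold=BLUR_THRESHOLD):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness < threshold, sharpness

frame = cv2.imread("cameravisibility.jpg")  # the attached frame, if you save it locally
if frame is not None:
    obscured, score = lens_looks_obscured(frame)
    print(f"sharpness score {score:.1f} -> looks obscured: {obscured}")
```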

It is an interesting video. Kind of what I expected. However, the SUV on one side is a minuscule one (small SUV class I think). And it’s just a car on the other side. Would be cool to see it with two Tahoes, one on each side. Not hard to find.
 
Adding to this, I don't see any reason why Tesla won't just ...

Ahh. The classic "why don't they just" line. Tesla didn't get to where they are by listening to forum/social media users who constantly drop the "why don't they just..." at every opportunity. Perhaps Tesla is in a better position to determine what will work and what won't at this point. Food for thought.
 
Perhaps Tesla is in a better position to determine what will work and what won't at this point.

Ah, the “Tesla knows best” line... ;) The track record is mixed on that one.


Note that a lot of features (perhaps not the one in question - there is only so much the hardware can do) don't exist because Tesla hasn't been able to get to them yet (there's a ton of development to do), not because they aren't possible or aren't good ideas.

Food for thought.
 
You forgot the quantum computer between our ears...
US Department of Transportation - "34,247 fatal motor vehicle crashes in the United States in 2017 in which 37,133 deaths occurred. This resulted in 11.4 deaths per 100,000 people and 1.16 deaths per 100 million miles traveled. The fatality rate per 100,000 people ranged from 4.5 in the District of Columbia to 23.1 in Mississippi."

Not sure what the disconnect is here. If FSD means the vehicle can travel from point A to point B on its own, while maintaining a better crash rate than current human drivers, what's missing?
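For what it's worth, those 2017 numbers hang together: divide the deaths by the quoted rates and you get back roughly the US population and the total vehicle miles driven that year.

```python
# Back-solving the implied denominators from the quoted 2017 fatality rates.
deaths = 37_133
rate_per_100k_people = 11.4
rate_per_100m_miles = 1.16

implied_population = deaths / rate_per_100k_people * 100_000
implied_miles = deaths / rate_per_100m_miles * 100_000_000

print(f"implied population: {implied_population / 1e6:.0f} million people")   # ~326 million
print(f"implied travel: {implied_miles / 1e12:.2f} trillion vehicle miles")   # ~3.2 trillion
```

That 1.16 deaths per 100 million miles is the bar the quote sets for any FSD comparison.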
 
He already said it is FSD.

Ok. So it sounds like there was no argument here then. Everyone seems to be saying FSD is totally objectively possible, it’s really just a question of when we have the computing power, sensors, and the knowledge of how to solve the problem.

Some people just think it’s a lot harder than others.
 
Ok. So it sounds like there was no argument here then. Everyone seems to be saying FSD is totally objectively possible, it’s really just a question of when we have the computing power, sensors, and the knowledge of how to solve the problem.

Some people just think it’s a lot harder than others.

I am in the camp that FSD is possible, given the right hardware and computing power. Any problem in the world can be solved with a big enough computer.
 
@verygreen uploaded a video of this exact scenario, pulling out of a parking spot backwards with footage from the cameras including the backup, repeaters, and pillar cams.


My worry is the obscuring of the backup camera, especially in the rain (or snow). Not sure how the car will be able to back out when the camera looks like this:

[Attached image: cameravisibility.jpg]
I hope this helps clear up what the cameras see and whether a 360° view can be made with the existing cameras.
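On the 360° question: the usual trick in surround-view systems is to warp each camera onto a common ground plane with a homography and blend the overlaps. A bare-bones sketch for a single camera, with a completely made-up filename and correspondence points (real ones come from calibrating each camera, and a fisheye frame would need undistorting first):

```python
# Bare-bones top-down ("bird's eye") projection of one camera onto the ground plane.
# The filename and the four point correspondences are made-up placeholders; a real
# surround view needs a calibrated homography per camera (after fisheye undistortion)
# plus blending where the views overlap.
import cv2
import numpy as np

def top_down_view(frame, image_pts, ground_pts, out_size=(400, 600)):
    """Warp a camera frame onto the ground plane given four point correspondences."""
    H = cv2.getPerspectiveTransform(np.float32(image_pts), np.float32(ground_pts))
    return cv2.warpPerspective(frame, H, out_size)

frame = cv2.imread("rear_camera_frame.jpg")  # hypothetical saved frame from the rear camera
if frame is not None:
    image_pts = [(100, 700), (1180, 700), (900, 400), (380, 400)]   # pixels in the camera frame
    ground_pts = [(100, 580), (300, 580), (300, 380), (100, 380)]   # pixels on the top-down canvas
    cv2.imwrite("rear_birds_eye.jpg", top_down_view(frame, image_pts, ground_pts))
```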
 
Ok. So it sounds like there was no argument here then. Everyone seems to be saying FSD is totally objectively possible, it’s really just a question of when we have the computing power, sensors, and the knowledge of how to solve the problem.

Some people just think it’s a lot harder than others.
So, as soon as we have the technology, the implementation, and the ability, we're good. Desktop fusion, you're next!
 