Agreed. Robotaxis should put TSLA at $10K or higher, but it's not likely to happen with the current hardware suite, IMO.
What part of the hardware suite do you believe is insufficient for safer-than-human FSD?
1. More rear-facing cameras, because my rear camera is regularly distorted by raindrops. It would be helpful if the repeater camera views intersected each other. I suggest four additional corner cameras (two front, two rear).
2. Rear radar.
These are the two things that bother me the most.
For the record, I don't believe lidar is necessary.
For the record, your beliefs seem to be supported by nothing at all. Nobody cares what you think if your thoughts are based on random emotions.
Do you really believe that it is difficult for Tesla's AI to normalize vision for dirt and rain? If so, why?
You say more radar is needed. Why? Isn't Tesla's visual capability already superhuman? Why is more needed?
There's no doubt that the problem is hard. Tesla seems to be putting a massive effort into improving its software and compute capability. But no effort into the things you mention. I wonder why that is.
Why do you think these rear cameras and radar are necessary for safer-than-human FSD? The rear camera being distorted by raindrops sounds like a technical problem rather than insufficient hardware, and even if it stops you from seeing clearly, it might not affect the neural nets as much.
Not that you asked me, but the most likely limiting factor is the silicon memory and compute power.
One example is if the car is backing up in a parking lot (for whatever reason): rear camera distortion prevents a clear view of the surroundings. It doesn't matter whether you define that as a technical problem or insufficient hardware.
If multiple 9s are going to be added after the decimal, a better view of the rear seems needed IMO. Radar and corner cameras would offer that.
Isn't the obvious solution here to make sure the camera functions properly and doesn't get distorted, rather than adding more cameras?
It also sounds like this is only a problem when reversing in parking lots? Or does it also affect NoA performance on highways in your experience?
Is this a malfunction of the rear camera or a limitation?
The only thing I see as lacking is a way to detect a fast-moving car in the fast lane when passing. Example: divided highway, you're going 65, passing someone going 60, and a car in the fast lane is going 90. I don't believe the 90 mph car can currently be detected in time (this might not be true for the rewritten software, but it appears to be true for the current software).
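To put rough numbers on that overtaking scenario (my own back-of-the-envelope sketch; the 6-second lane-change duration is an assumption, not a measured figure):

```python
def detection_range_needed(own_mph: float, overtaker_mph: float,
                           maneuver_s: float) -> float:
    """Distance behind the car (in feet) at which the faster vehicle
    must be detected so the lane change completes before it arrives."""
    MPH_TO_FPS = 5280 / 3600  # 1 mph ≈ 1.467 ft/s
    closing_fps = (overtaker_mph - own_mph) * MPH_TO_FPS
    return closing_fps * maneuver_s

# 65 mph ego car, 90 mph overtaker, assumed ~6 s lane change:
# 25 mph closing speed ≈ 36.7 ft/s, so roughly 220 ft of clear rearward view.
print(round(detection_range_needed(65, 90, 6)))  # → 220
```

Whether the current rear and repeater cameras can reliably classify a car at that distance, especially in rain, is exactly the open question.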
It'd indeed be helpful if @Mo City could elaborate further on his views, but there's no need to be so rude. He is simply explaining his point of view, for which I am thankful, even if I see things differently.
For human vision, water droplets alone don't interfere much (at least not in the S and X over the past seven-plus years). You're trying to detect objects, not read the license plate number. Slush and mud are a different story, because they totally block the cameras; a robust cleaning system is going to be required for all-weather use. Currently, heavy rain triggers the "one or more cameras are blocked" message. I was actually impressed by how often the cameras come back online once past the heavy rain, and by how well the car functions even with the blocked message.

A quick Google search returned at least one paper describing how water distortion on a lens can be significantly reduced with software: "Image restoration via de-raining", https://arxiv.org/pdf/1901.00893.pdf
Caveat 1: I don't know how well this can be done in real time without taking too many compute resources.
Caveat 2: I can only find the paper on arXiv, an open repository. It apparently hasn't been published in a peer-reviewed journal, so I don't know whether to trust it.
Additional note: if AI can really negate the effect of water droplets that well, the corrected image would probably only be useful for the human doing the labeling for the computer. The AI would interpret the distorted image directly, without first de-distorting it and then interpreting it. I would assume that the only thing needed would be enough labeled data with droplets on the lens, which would obviously save compute resources compared to processing the image twice.
Caveat 3: I'm no AI expert.
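As a toy illustration of the software angle (this has nothing to do with the paper's actual method; it's just a sketch showing that isolated droplet-like specks are easy to suppress with even a crude filter, which hints at why a trained network could do far better):

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

# Fake 64x64 grayscale "road" image with smooth horizontal structure.
clean = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))

# Simulate droplets: 40 random bright specks on the lens.
rainy = clean.copy()
ys, xs = rng.integers(0, 64, 40), rng.integers(0, 64, 40)
rainy[ys, xs] = 1.0

# A 3x3 median filter removes isolated specks while keeping the gradient.
restored = median_filter(rainy, size=3)

# Restoration reduces the mean error versus the clean image.
print(np.abs(restored - clean).mean() < np.abs(rainy - clean).mean())  # → True
```

A real de-raining network handles smeared, refracting droplets rather than single-pixel specks, but the principle, exploiting the fact that rain artifacts look statistically different from the scene, is the same.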
Well, it would probably be more than once a year, but I basically agree. I don't think more hardware would be required in 99% of scenarios, and I believe a robotaxi network where, in the other 1% of scenarios, certain cars had to be taken off the network until someone showed up to wipe their cameras would still be vastly profitable.
We sometimes forget just how bad humans are at driving. We get spray coming up on the windscreen and we can't see until the wipers activate. We get light bouncing off wet roads that blinds us. We can't see well through fog, we take our eyes off the road to look at a passenger as we talk to them, or get distracted by a pretty girl/guy on the street. We are busy shouting at talk radio. We are tired. We don't have perfect vision, or reactions.
You should watch some of the laughable attempts some of the elderly drivers in my village make at reversing down the single-track lane past my house to let someone pass. It's comical.
FSD isn't going to drive perfectly, but it's likely going to drive better than most of us.
And to get back to investment: even if this first pass of FSD/robotaxi only works in sunny climates on city streets, it's still worth tens of billions of dollars. Maybe hundreds. I'd happily buy an FSD car I could fall asleep in, if it would stop and beep at me once a year to ask me to go wipe a sensor/camera so it could carry on.
I've wondered this myself, as I usually turn off the NoA automatic lane change because:
- it takes too long to actually initiate the move into the fast lane
- it doesn't seem to factor in the speed of the approaching car, per your example
- it signals for a lane change when there's a car just behind me, causing them to wonder if I see them
- it behaves inconsistently about parking in the fast lane: not getting back over, or signaling to move but just staying put
- it will move into the fast lane for no apparent reason, e.g. when the car in front is 1,000 yards ahead

People's experience will differ, so maybe that accounts for it, but what you are describing sounds remarkably like an older version of Autopilot.
I use it, but Mad Max acts more like Nervous Nelly. I've only found that it moves into the fast lane, with no slower traffic in front, when it sees an exit: it wants to follow the route and thinks the right lane is actually an exit-only lane. The algorithm for getting out of the fast lane seems to be: if there are no cars behind, stay in the fast lane; if there is a car behind, move over. I'm not sure why that was chosen over moving over once the slower car is passed, unless there are more slower cars ahead and nothing behind. I have found that if there is a car behind, it does move over quickly.
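That fast-lane exit logic can be caricatured as a tiny policy function. This is purely my guess at the heuristic from observed behavior, not Tesla's actual code; the alternative is the "move over once the pass is complete" rule suggested above:

```python
def should_move_right(car_behind: bool, passed_slower_car: bool,
                      more_slow_traffic_ahead: bool) -> bool:
    """Guess at the observed heuristic: yield the fast lane only
    when someone is behind you."""
    return car_behind

def should_move_right_alt(car_behind: bool, passed_slower_car: bool,
                          more_slow_traffic_ahead: bool) -> bool:
    """Suggested alternative: move over as soon as the pass is
    complete, unless more slow traffic is coming up ahead."""
    return car_behind or (passed_slower_car and not more_slow_traffic_ahead)

# Observed behavior: empty road behind, pass complete -> stays put.
print(should_move_right(False, True, False))      # → False
# Suggested behavior: same situation -> returns to the right lane.
print(should_move_right_alt(False, True, False))  # → True
```

The difference only shows up when nobody is behind; with a car approaching, both rules move over, which matches the quick move-over described above.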