Welcome to Tesla Motors Club

Tesla.com - "Transitioning to Tesla Vision"

Assuming you're referring to this:
[attached image: sensor type comparison chart]

I put in the "???" to try to indicate this is a poor way to show the data. To be explicit, the circular version of the visualization is also bad, and the ordering and "connecting" of the attributes don't make sense either, which I was hoping would be more obvious in this linear version, where having arbitrary values on the X axis that don't progress (e.g., 1, 2, 3) is bad chart design.

I think there isn’t an advantage to putting them in a chart like that, since it doesn’t really make sense to compare them.

Ultimately we can achieve full self-driving with vision. That is a fact. People can drive without radar or lidar; we just need to figure out how to code that.

So the question is: how can lidar or radar make the job of coding easier than using pure vision? Or, how can lidar and radar provide capability beyond what is optimally possible with cameras alone?

The problem with the chart is that it assumes that all the advantages are fully exploitable. They aren’t currently.
Except human cognition is not comparable to any current ML/AI techniques or technology. Not by any stretch of the imagination. The operation of the human brain is completely different from that of digital computers (and is still very poorly understood). The ability of one human brain to synthesize information far outclasses all the compute power on earth currently. It is not a logically supportable argument that just because a human can do complex decision making based on visual input, a computer system definitely can as well. I mean, perhaps it can be used to drive a car at an acceptable level of safety, but there is so far absolutely no shred of evidence for that yet (and tons of reasons to be very skeptical).
 
Except human cognition is not comparable to any current ML/AI techniques or technology.

What is the part prior to “except”???

I mean, perhaps it can be used to drive a car at an acceptable level of safety, but there is so far absolutely no shred of evidence for that yet (and tons of reasons to be very skeptical).

Well, two-camera vision can be used, 100 percent for sure. Whether humans can code for it is a different question. But it’s pretty obvious the “code” exists to make it possible.
 

Sure, the "code" exists, but actually developing the "code" is no trivial matter. As discussed in this thread, we are very far from achieving human-like computer intelligence.
 
Well, two-camera vision can be used, 100 percent for sure. Whether humans can code for it is a different question. But it’s pretty obvious the “code” exists to make it possible.
It does seem like loading self-driving code into a chimpanzee brain would be the way to go if we could do that. If you think about it, that would be far more likely to work than HW3 if it were technologically possible right now.
 

Don't give Elon any ideas. He's already connecting monkey brains to computers with Neuralink.
 
I installed 2021.4.18.2 on my Model S a little while ago. Not sure what it all contains, but greentheonly indicated that the V3 autowipers from the .15.x releases are back in this build. Lines up with Elon's tweet about "one more production release this week".
 
Except human cognition is not comparable to any current ML/AI techniques or technology. Not by any stretch of the imagination. The operation of the human brain is completely different from that of digital computers (and is still very poorly understood). The ability of one human brain to synthesize information far outclasses all the compute power on earth currently.
I used to think so, but now I'm not so sure.

When computers beat humans at tic-tac-toe we said that's cool but no big deal, but then computers beat the best chess players, and that really was something huge. Then more recently a computer (AlphaGo) beat the world's best Go player, consistently, and even invented new ways of playing (moves) that humans would never have thought of along the way. It innovated as it learned to master this "silly human game".

Of note, humans didn't teach this computer how to be a great Go player. It learned that on its own from experience.

So although it is hard to imagine at present, I think it seems reasonable that eventually a computer like this could learn to be the best player at the game "Driving Cars In Streets" given enough time to learn, and with no more sensors than humans have.

But until that time it sure would be nice to have the radar back so the current system works well! ;)

 
Then more recently a computer (AlphaGo) beat the world's best Go player, consistently, and even invented new ways of playing (moves) that humans would never have thought of along the way. It innovated as it learned to master this "silly human game".

Of note, humans didn't teach this computer how to be a great Go player. It learned that on its own from experience.

So although it is hard to imagine at present, I think it seems reasonable that eventually a computer like this could learn to be the best player at the game "Driving Cars In Streets" given enough time to learn, and with no more sensors than humans have.

This is an interesting observation, but to point out, AlphaGo was 5+ years ago. A primary question with Tesla is how long their development will take, and whether it will occur before 95% of 2021 cars are already worn out.

Also, computers can learn very, very quickly when they have simple known rules and very strong feedback signals. Go is a game with known rules and a binary success criterion of "win" or "lose." It is also a situation where sensing is not a challenge at all: the exact position of the board is known with perfect fidelity at all times (using only 91 bytes of data!), as well as whether the computer won or not.
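The 91-bytes figure checks out, by the way. A quick back-of-the-envelope sketch in Python, purely as an illustration of the arithmetic:

```python
import math

# A 19x19 Go board has 361 intersections, each in one of three states
# (empty, black stone, white stone). Packing each point into 2 bits
# encodes the entire board position in under 100 bytes.
points = 19 * 19                           # 361 intersections
bits_per_point = math.ceil(math.log2(3))   # 2 bits cover 3 states
total_bits = points * bits_per_point       # 722 bits
total_bytes = math.ceil(total_bits / 8)    # 91 bytes

print(total_bytes)  # → 91
```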

The above means building a simulator is trivial. And then you can just throw the computer at it, and it can literally fail (lose) 1 billion times in a row with no real consequences. It can try turning left when it should have gone right. It can make a risky move that works 87% of the time, but loses 13% of the time. It can actually look into the future statistically, because there are a finite number of outcomes.

This is nothing at all like driving in a multivariable, analog world bound by real physics, not human-made rules, where at best failures cost thousands of dollars per event, and at worst kill people, and your goal is not to "win," it's to cooperate with other humans and minimize overall societal risk. You can't go left when you should go right, even if it's "creative": that is illegal and it might kill someone. The decision you make now has no clean, scoreable impact on your success 30 seconds from now. There is no clear signal that you did the right thing, as a lack of impact does not mean you didn't cause a massive issue for the vehicles around you. A funny case here is that a system trained only on "don't crash" would probably just fully ignore stop lights as a "creative" solution, and would weave its way through the intersection around other cars.

In order to teach self-driving the way we taught AlphaGo, we need a phenomenal driving simulator, one that a human could hardly tell is a simulation, down to the variety of cars, people, hedges, weather, etc. This sounds like just as hard a problem as building the thing that can drive in this environment, but it is part of the way Waymo is attempting to solve it. Tesla famously does not fundamentally rely on simulation.
 
I installed 2021.4.18.2 on my Model S a little while ago. Not sure what it all contains, but greentheonly indicated that the V3 autowipers from the .15.x releases are back in this build. Lines up with Elon's tweet about "one more production release this week".
Vision only cars are on this build too, so you possibly have both radar (normal) and vision speed estimation (from .15.x)
 
Some poor rain performance when the roads are wet / reflective here near the end:

Honestly, I’m starting to think speed is the biggest factor in it failing to keep Autopilot engaged. After I posted my Reddit thread, many posted videos of it working in heavier rain than I did, but they weren’t at 70+ MPH like my run and that video.

Either way, hopefully it’s improved quickly. At no point in that video would I, as a human driver, have felt unsafe driving.

I still think the more important thing is fixing the auto high beams. I wouldn’t really mind having to drive manually during moderate rain when going high speeds. I do very much mind Autopilot being more of a pain to use than manually driving whenever it is dark out. Just drove back from the Somerville area to Barnegat again and couldn’t comfortably use Autopilot nearly the whole way.

Edit: Wish he hadn’t changed it to different colors and to sepia when it had the problems at the end, as that made it a little harder to tell how bad the weather actually was.
 
The fundamental issue is that the addition of radar, from HW2.0 through HW2.5 and HW3.0, never worked out over the years. The basic problem of recognizing and avoiding stationary objects in the path of an Autopilot-controlled Tesla was not solved. Hence the crashes with trucks and emergency vehicles.

It seems the issue is the matching of radar data with vision data; Tesla never made that work as expected. So the elimination of radar could make Autopilot safer. It could reduce accidents with stationary vehicles, reduce phantom braking, and improve night driving. We will soon find out.

Note that Tesla cameras are not limited to the same wavelengths as human vision. If the car thinks it can drive with Autopilot, then it likely can, regardless of the driver’s comfort with the auto high beams.

Of course, the driver needs to be comfortable with, and capable of driving at, the current level of skill that Autopilot has. One has to be ready to immediately ”take the wheel” when Autopilot bails out.

Link to Tesla CMOS car cameras
 
I have a 2018 M3 and find myself wondering if they have already disabled use of radar. I have noticed in the last month when using TACC behind a car that seems to be traveling at a constant speed, my follow distance noticeably (but smoothly) oscillates — like it isn’t sure exactly how far behind the lead car it is. That is a change from before, when it seemed really precise with its follow distance. Obviously there could be a number of reasons for this, including OTA remedies for phantom braking, but I still wonder.
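That kind of hunting is what you'd expect if the car is controlling on a noisier range estimate. As a purely hypothetical illustration (a toy proportional follower, not how TACC actually works), adding measurement noise to an otherwise steady gap controller produces exactly this oscillation:

```python
import random

random.seed(1)

def simulate(noise, steps=500, dt=0.1, target_gap=30.0, kp=0.5):
    """Toy follower: adjust speed in proportion to the *measured* gap error.

    Returns the variance of the true gap over the run. All numbers here
    (gains, speeds, noise level) are made up for illustration.
    """
    lead_v, v, gap = 30.0, 30.0, target_gap
    gaps = []
    for _ in range(steps):
        measured = gap + random.gauss(0.0, noise)  # noisy range estimate
        v += kp * (measured - target_gap) * dt     # speed up if gap looks big
        gap += (lead_v - v) * dt                   # true gap evolves
        gaps.append(gap)
    mean = sum(gaps) / len(gaps)
    return sum((g - mean) ** 2 for g in gaps) / len(gaps)

clean_var = simulate(noise=0.0)  # perfect sensing: gap holds steady
noisy_var = simulate(noise=2.0)  # noisy sensing: gap hunts around target
```

With zero noise the gap never moves; with noise the same controller wanders around the target, which is at least consistent with a switch to a less precise distance estimate.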