
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Whatever happens... let's all focus on the fact that we're healthy and able to do the things we like to do. It's a scary world out there right now, and we should reflect on how fortunate we all are. Making a few $$ along the way doesn't hurt, but I'm sure we'd all be able to wake up tomorrow knowing we can make more $$ if needed.
 
I was surprised to learn how auto dealerships pay their mechanics: not base pay, but a per-service rate. No car to work on, no pay. I haven't gone out to verify this, but it came from one person who worked as a mechanic at an MB dealership. Curious how Tesla operates.
That explains why the last two oil changes on my son's VW turned out to cost over $2K each due to stuff they found...
 
I've only seen "point clouds" used when the data comes from lidar. Since Tesla doesn't use lidar, does this imply that Tesla is reverse-engineering a 3D point cloud from multiple camera images all the time? I don't understand why Tesla would want to create a point cloud at all. It's more important to have actual objects, which Tesla can recognize since it has camera images with color and such (which lidar doesn't have). It seems backwards to me to create a point cloud: the lidar-heavy approaches need separate efforts to turn their point clouds into objects, efforts that Tesla shouldn't need.
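(For what it's worth, the standard way a camera-only system recovers 3D points is multi-view triangulation: the same feature seen from two known camera poses pins down a 3D position. Here's a minimal textbook sketch in Python; this is just the generic geometry, not Tesla's actual pipeline, and all the numbers are made up.)

```python
# Minimal sketch of linear (DLT) triangulation: recovering a 3D point from
# two camera views. Textbook multi-view geometry, NOT Tesla's pipeline.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel
    coordinates of the same feature in each image. Returns the 3D point."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # solve A @ X = 0 for homogeneous X
    X = Vt[-1]
    return X[:3] / X[3]              # de-homogenize

# Toy setup: two cameras one metre apart, both looking down +Z.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([2.0, 1.0, 10.0, 1.0])      # ground-truth 3D point
x1 = P1 @ point; x1 = x1[:2] / x1[2]         # project into each image
x2 = P2 @ point; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))           # ~[ 2.  1. 10.]
```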

Yes, but it still has to know where the objects are. In a fully integrated vision system, knowing where an object is feeds into knowing what the object is, and vice versa. When you see a gray blob on the horizon, if you resolve it into clouds, then you know where it is (far away), but if it resolves into a truck, then you know it is a lot closer. Two seconds later, a temporally integrated vision system will "remember" that two seconds ago it thought the gray blob was a truck, and that will bias the new image recognition of that gray blob. But if the blob isn't growing, then the vision system will start to doubt the truck answer and start to think that maybe it was a cloud after all.
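To make the truck-or-cloud example concrete, here's a toy sketch (my own illustration, nothing from Tesla) of what carrying a belief across frames looks like: each frame does a Bayes update on the previous frame's guess instead of classifying from scratch, and a blob that refuses to grow steadily erodes the "truck" hypothesis. The 5% expected growth rate and the noise figure are invented for the example.

```python
# Toy Bayes update across frames: a blob is either a "truck" (its image
# should grow as we approach) or a "cloud" (it should stay the same size).
# All numbers here are invented for illustration.
import math

def update_belief(p_truck, observed_growth):
    """p_truck: prior P(truck) carried over from the last frame.
    observed_growth: fractional change in apparent size since that frame."""
    def likelihood(growth, expected, noise=0.05):
        # Gaussian-ish score for how well the observation fits a hypothesis.
        return math.exp(-((growth - expected) / noise) ** 2)

    l_truck = likelihood(observed_growth, expected=0.05)  # trucks loom
    l_cloud = likelihood(observed_growth, expected=0.00)  # clouds don't
    return l_truck * p_truck / (l_truck * p_truck + l_cloud * (1 - p_truck))

belief = 0.6  # start 60% sure the gray blob is a truck
for frame in range(5):
    belief = update_belief(belief, observed_growth=0.0)  # blob isn't growing
    print(f"frame {frame}: P(truck) = {belief:.3f}")
# P(truck) decays 0.356 -> 0.169 -> ... : the system starts to doubt the truck.
```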

This is how the human visual system works. You can try this experiment at home. Find a lock and a key that fits the lock. Now dim the lights really, really low, such that when you look at the lock face, all you see is random noise; you can't "see" the keyhole. OK, now fumble the key into the keyhole. As soon as the key slips into the hole, your vision system will resolve the keyhole as a keyhole (assuming you didn't dim the lights down too much). All of a sudden you can see it. What happened is that your visual system got a hint from another part of your brain that a keyhole must exist in that spot, and it used that extra bit of information to make sense of the very noisy visual data.

So this feedback and feedforward mechanism is what Tesla is trying to accomplish. Up until now, the image system has been recognizing objects from static pictures, re-recognizing the same objects over and over again, multiple times per second. The new system will remember previous guesses (temporal) and update them based on other information, such as the knowledge that objects get bigger as you approach them (3D point cloud).
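That "objects get bigger as you approach" cue can even be made quantitative without any absolute depth: for an object of fixed real size, apparent size is proportional to 1/distance, so the classic looming relation gives time-to-contact as (apparent size) / (rate of growth). A quick sketch, with made-up numbers:

```python
# Sketch of the classic looming / time-to-contact relation: apparent size s
# is ~ 1/distance, so tau = s / (ds/dt) tells you how soon you reach the
# object from image growth alone -- no lidar, no absolute depth needed.
def time_to_contact(size_prev, size_now, dt):
    """size_prev, size_now: apparent size (e.g. pixel height) in two
    consecutive frames; dt: seconds between frames."""
    growth_rate = (size_now - size_prev) / dt
    if growth_rate <= 0:
        return float("inf")   # not looming -- maybe it really is a cloud
    return size_now / growth_rate

# Truck 100 m away, closing at 25 m/s, camera at 10 fps: apparent size
# scales like 1/100, then 1/97.5 a tenth of a second later.
print(time_to_contact(1 / 100, 1 / 97.5, dt=0.1))  # ~4.0 s, matching 100 m / 25 m/s
```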

Frankly, it is amazing that Tesla has gotten so far without such a vision system. At any rate, this new vision system is truly cutting edge. Good stuff.
 
FWIW, if Tesla's vision system is as good as that post makes it sound, then it has multi-billion-dollar potential in probably a dozen different industries (I'm being very conservative there) that have nothing to do with cars or energy.
It's so frustrating as a Brit that I can't get a Model Y yet (I still have AP1 on my 2015 S), and that the EU stupidly dumbs down Autopilot here, presumably waiting for the Germans to catch up...
 
I still think it's inaccurate, or at least misleading, to call the data that Tesla keeps a "point cloud."
 
Super cool - thanks for such a great explanation of the vision system.

It sure seems to me that if Tesla masters FSD, many of the constituent technologies (like the vision system you describe) have plentiful applications beyond robotaxis. I haven't seen much discussion of the value this could add to Tesla's business. For example, if Project Dojo is task-agnostic (i.e., it can train neural networks for applications other than driving), wouldn't that be a really valuable service product, in the same way AWS is to Amazon?