Ya think? It's like we anticipate something good is going to happen or something...Too much optimism in here....
Yesterday or Monday, someone posted in one of the threads about an ETF or fund that tracks the S&P while leaving out the low-performing stocks. I made a note to myself to come back and research it, but didn't write down the ticker. D'oh. Now I can't find the post. Help please?
"Just been absorbing Mr. Black... and trying to remember where I put the Valium."
How nice of you to join us for the fireworks... just woke up?
"That explains why the last 2 oil changes on my son's VW turned out to cost over $2K each due to stuff they found..."
I was surprised to learn how auto dealerships pay their mechanics: not base pay, but a per-service rate. No car to work on, no pay. I haven't gone out to verify it, but it came from one person who worked as a mechanic at an MB dealership. Curious how Tesla operates.
I've only seen "point clouds" used when the data is from lidar. Since Tesla doesn't use lidar, does this imply that Tesla is reverse-engineering a 3D point cloud from multiple camera images all the time? I don't understand why Tesla would want to create a point cloud at all - it's more important to have actual objects, which Tesla can recognize since it has camera images with color and such (which lidar doesn't provide). It seems backwards to me to create a point cloud - the lidar-heavy approaches have separate efforts to turn their point clouds into objects, efforts that Tesla shouldn't need.
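For what it's worth, the basic math for getting 3D points out of cameras is well understood. A hedged sketch of the standard pinhole-stereo relation (illustrative only - nothing here is Tesla's actual pipeline, and the numbers are made up): with two cameras a known baseline apart, depth follows from the pixel disparity of a matched feature.

```python
# Illustrative stereo-depth sketch (NOT Tesla's pipeline; assumed
# textbook pinhole model). With two cameras a baseline b apart and
# focal length f (in pixels), a feature matched in both images with
# disparity d (pixels) sits at depth z = f * b / d.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point matched across two camera images."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 1000 px focal length, cameras 0.5 m apart,
# a feature shifted 20 px between the two views:
print(depth_from_disparity(1000.0, 0.5, 20.0))  # 25.0 (metres)
```

Do this for every matched pixel and you have a point cloud from cameras alone - which is presumably why the term shows up even without lidar.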
"Your rotator splint is busted and your headlights are low on fluid."
That explains why the last 2 oil changes on my son's VW turned out to cost over $2K each due to stuff they found...
Yes, but it still has to know where the objects are. In a fully integrated vision system, knowing where an object is feeds into knowing what the object is, and vice versa. When you see a grey blob on the horizon, if you resolve it into clouds, then you know where it is (far away), but if it resolves into a truck then you know it is a lot closer. Two seconds later, a temporally integrated vision system will "remember" that two seconds ago it thought the grey blob was a truck, so that will bias the new image recognition of that grey blob. But if the blob isn't growing, then the vision system will start to doubt the truck answer and start to think that maybe it was a cloud after all.
It is how human vision systems work. You can try this experiment at home. Find a lock and a key that fits into the lock. Now dim the lights really, really low such that when you look at the lock face, all you see is random noise. You can't "see" the keyhole. OK, now fumble the key into the keyhole. As soon as the key slips into the hole, your vision system will now resolve the keyhole as a keyhole (assuming you didn't dim the lights down too much). All of a sudden you can see it. What happened is that your vision system got a hint from another part of your brain that a keyhole must exist in this spot, so now the vision system uses that extra bit of information and makes sense of the very noisy vision data.
So this feedback and feedforward mechanism is what Tesla is trying to accomplish. Up until now, the vision system has been recognizing objects from static pictures and re-recognizing the same objects over and over again, multiple times per second. The new system will remember (temporal) previous guesses and update those guesses based on other information, such as the knowledge that objects get bigger as you approach them (3D point cloud).
Frankly, it is amazing that Tesla has gotten so far without such a vision system. At any rate, this new vision system is truly cutting edge. Good stuff.
"I find this amusing - on the Tesla IR site, they show the stock price over a long period, with a Y axis of 0.5, 1.0, 1.5, 2.0."
They're missing the 2.5
I think THAT is thinking big!
I can't believe this.
Just looked down at my keyboard and my F5 key started screaming, then unsnapped itself from its mount and ran off to hide behind the bookshelf. I wonder why?
Super cool - thanks for such a great explanation of the vision system. FWIW, if Tesla's vision system is as good as that post makes it sound, then it has multi-billion-dollar potential in probably a dozen different industries (I'm being very conservative there) that have nothing to do with cars or energy.
It's so frustrating as a Brit that I can't get a Model Y yet (I still have AP1 on my 2015 S), and that the EU stupidly dumbs down Autopilot here, presumably waiting for the Germans to catch up...