> That's why you use maps as well.

Looking forward to reading on Twitter that somebody tried to get his Tesla with FSD to drive through one of these wall paintings.
> Ha ha ha, good one. Yes, but I think I saw Wile E. Coyote hit the wall when he tried it.

That was Acme FSD.
Yes, I had jiggling, wiggling, and spinning cars for 18 months, and not long after that was fixed, I now have jiggling and wiggling lines. It's distracting.
I am not asking to see all the NN probabilities. I just want nicer looking visualizations, with cleaner lines that don't jiggle and wiggle.
BINGO.... THIS ^^^

Here is a leak of what the final FSD Beta visualizations might look like:
[image: leaked FSD Beta visualization]
I got it from this Facebook video:
Electrek also has an article about it:
Tesla's head of UI departs, showing unseen Cybertruck and FSD images in the process
Tesla's longtime head of user interface design has left the company, and he has published some images of previously unreleased... (electrek.co)
IMO, if that is the production version of the FSD Beta visualization that we get, I will be very happy. I think it looks great. It is much more refined and clean than the current beta visualizations. The intersection looks great, and it looks sharp. The pedestrian looks good too. I like that they have the one blue line like NOA uses. I think that will be a great way to show the path the car intends to take.
> Looking forward to reading on Twitter that somebody tried to get his Tesla with FSD to drive through one of these wall paintings.

What would happen if some kids drew some chalk lane lines leading up to the tunnel image?
> Depth-sensing cameras would know. For example, cameras that use PDAF for depth perception.

I don't think the PDAF part of the camera can do per-pixel depth. It sounds to me like PDAF (phase-detection autofocus) is just what my SLRs have been doing for years: it determines which direction, and approximately how much, focus needs to be adjusted to sharpen whatever is under the autofocus point. Based on the Google AI blog, it's a NN that does the per-pixel depth perception. Again, it sounds similar to the Pseudo-Lidar approach.
A couple of links after googling the subject:
Sony sells PDAF depth-sensing image sensors for a few dollars in large quantities.
> I don't think the PDAF part of the camera can do per-pixel depth.

Yes it can, as an approximation. The sensor generates stereo images, and the differences are used to approximate depth. A neural network is not required.
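To make the idea above concrete (a generic stereo example, not anything Tesla- or Sony-specific): a PDAF sensor's paired sub-pixels effectively capture two slightly offset views, and the per-pixel shift (disparity) between them converts to depth by plain triangulation. A minimal sketch in Python with OpenCV, assuming a rectified image pair; the filenames and camera numbers are made up:

```python
import cv2
import numpy as np

# Load a rectified stereo pair (hypothetical files; a PDAF sensor's
# left/right sub-images would play the same role, just with a tiny baseline).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, how far its patch shifted
# between the two views -- the disparity. No neural network involved.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Triangulation: depth = f * B / d, with focal length f in pixels and
# baseline B in meters (both assumed values here).
f_px, baseline_m = 700.0, 0.1
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]
```

The catch for PDAF is that the baseline is sub-millimeter, so raw disparities are tiny and noisy; that is where the NN refinement mentioned in the Google AI blog post comes in.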
> Yes it can, as an approximation. The sensor generates stereo images, and the differences are used to approximate depth. A neural network is not required.

Or if you have ~1.6 million cars already with the hardware, and you can update the NNs on them to do the depth approximation (like Karpathy already demonstrated/spoke about), then WTF would you want to add more hardware when the software is "good enough"?
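As a rough illustration of the "update the NNs" point (using a public off-the-shelf model, not Karpathy's network): monocular depth networks like MiDaS already produce a per-pixel depth estimate from a single frame on commodity hardware. The image filename is a placeholder:

```python
import cv2
import torch

# Load a small public monocular depth model (MiDaS) plus its input transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

# Any single RGB frame will do; "dashcam.jpg" is a stand-in.
img = cv2.cvtColor(cv2.imread("dashcam.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    pred = midas(transform(img))      # relative inverse-depth map
depth = pred.squeeze().cpu().numpy()  # larger value = closer, up to an unknown scale
```

Note the output is only relative (up to scale); turning it into metric distance takes calibration or extra cues, which is one reason the stereo/PDAF discussion above matters.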
Seems my original link didn't work. Here it is again:
> Or if you have ~1.6 million cars already with the hardware, and you can update the NNs on them to do the depth approximation (like Karpathy already demonstrated/spoke about), then WTF would you want to add more hardware when the software is "good enough"?

I'd have to agree with this. Of course, if they find Pseudo-Lidar / cameras still can't cut it, then perhaps. While they're at it, they might as well put in cross-traffic cameras.
> Yes it can, as an approximation. The sensor generates stereo images, and the differences are used to approximate depth. A neural network is not required.

The link still doesn't work. Why not give the authors and article title so we can Google it?
Seems my original link didn't work. Here it is again:
> Or if you have ~1.6 million cars already with the hardware, and you can update the NNs on them to do the depth approximation (like Karpathy already demonstrated/spoke about), then WTF would you want to add more hardware when the software is "good enough"?

I do have a question on this, and I apologize if it has been discussed before (too many posts to read through). Why not use hardware/sensors that are designed to measure distance, and let the vision system do the object identification that it excels at? It seems that radar/lidar can measure distance with very small errors and a small amount of power, whereas visual distance estimation requires a lot more power and resources, and likely without the accuracy of radar/lidar.
> Why not use hardware/sensors that are designed to measure distance, and let the vision system do the object identification that it excels at? It seems that radar/lidar can measure distance with very small errors and a small amount of power, whereas visual distance estimation requires a lot more power and resources, and likely without the accuracy of radar/lidar.

Cost and precision. When Tesla originally launched Autopilot 2 / Hardware 2, lidar systems cost about the same as a car, so they would have been prohibitively expensive. Radar is currently used for distance, but what they fielded (what was available at that price at the time) doesn't have nearly the resolution required to distinguish individual cars from things like road signs, much less pedestrians.
> The link still doesn't work. Why not give the authors and article title so we can Google it?

I googled "pdaf depth map". It is the second link that comes up for me.
> Why not use hardware/sensors that are designed to measure distance, and let the vision system do the object identification that it excels at? It seems that radar/lidar can measure distance with very small errors and a small amount of power, whereas visual distance estimation requires a lot more power and resources, and likely without the accuracy of radar/lidar.

Radar doesn't know what it is looking at. Lidar is expensive relative to cameras; cameras cost a few dollars. Distance measurement with cameras takes very little power and few resources with various technologies, like a depth map from a PDAF sensor. Accuracy with a camera can be considered high when you know what you are looking at. Lidar has a bunch of issues of its own, like bouncing off of smoke from car exhaust.
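On "accuracy can be high when you know what you are looking at": the pinhole-camera model makes that concrete. If you have recognized an object whose real-world size is roughly known, its distance falls out of its size in pixels. A toy sketch; all the numbers are assumptions:

```python
# Pinhole model: real_size / distance = pixel_size / focal_length,
# so distance = focal_length_px * real_size / pixel_size.
def distance_m(focal_px: float, real_height_m: float, pixel_height: float) -> float:
    return focal_px * real_height_m / pixel_height

# Hypothetical example: a ~1.5 m tall car spanning 50 px, seen through
# a lens with a 1000 px focal length, is about 30 m away.
print(distance_m(1000.0, 1.5, 50.0))  # 30.0
```

This is exactly why classification helps ranging: the better you know what the object is, the better your prior on its true size.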
> But LIDAR gets you nothing except spending money to hide that your vision system isn't very good.

I assume you mean that LIDAR doesn't get you anything you need to drive a car that can't be done with vision (first principles: humans drive with vision and all)? There are visual illusions that fool binocular vision that would not fool LIDAR (like the broad side of a truck against a low-contrast background). LIDAR also excels at distance estimation over large dynamic ranges, on static/slowly moving objects, or in situations without external illumination, like an obstacle in the road beyond headlight range. Like radar, it does add data, but like radar, physics clearly does not require it to drive a car at the same safety level as a human.
Elon fueling the hype train again. V9 better blow our socks off! Nowadays, I'm only watching 8.2 videos once or twice a week. I need my new FSD beta fix.
[attachment: screenshot of the tweet]
> Elon fueling the hype train again.

The question asks about limited release, so it will probably just go to early access users, not wide to the public.