I hope the FSD codebase trickles down to regular AP, etc., in all world regions where FSD is not allowed, because removing the radar would mean no more winter issues with snow building up in front of the radar.
Do we know that for certain - that FSD actually learns over time? My greatest concern with FSD/NoA is that it is entirely reactionary and not at all predictive. It doesn't seem to take the long view of the road, just what's immediately in front of it. I liken it to the skills of a first-year driver: it can obey traffic controls nicely, but dealing with other (human) drivers and their random actions is problematic.
As someone who's driven for 50 years, I've got a well-honed set of situational-awareness skills that allow me to predict problem situations. I haven't seen that degree of learning evident in FSD yet. There's a video of a Tesla highway accident on Twitter where the cars in the adjacent lane panic-brake, but the Tesla proceeded happily in its lane at 70 mph until a car swerved directly in front of it. Bang. Anyone who watched that video commented that they would have braked or slowed down immediately when they saw trouble in the adjacent lane about 3 seconds earlier. 3 seconds at 70 mph is about the length of a football field.
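That last figure checks out; a quick back-of-the-envelope sketch (plain unit conversion, nothing Tesla-specific):

```python
# How far does a car travel in 3 seconds at 70 mph?
speed_fps = 70 * 5280 / 3600   # mph -> feet per second, ~102.7 ft/s
distance_ft = speed_fps * 3    # ~308 ft

# An American football field is 300 ft between the goal lines,
# so 3 seconds at 70 mph really is about one field length.
print(round(distance_ft))  # 308
```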
Until all cars are automated, humans will be the largest safety hazard. It would be cool if FSD evolved to the point where it possessed the sum total of all the best techniques and habits of the best human drivers, covering every situation.
"Vision can be fairly easily trained to recognize the second vehicle ahead by seeing thru the windows."

I mean I hope this is true, but do we actually know that through fairly obscured/small windows (one through which a human would be able to definitively identify a vehicle, but it would take a bit of focus to do so...), this can be done consistently by FSD at the current time? I mean I certainly think it will need to be able to do this at least as well as a human eventually (it's not something we rely on a huge amount for driving, I don't think, since we can deal with vehicles with tinted windows), but not sure that there is clear evidence it does right now. Would be hard to tell, with radar being used currently. Guess someone could look at Green's marked-up videos and find an example... but radar may be involved there too (not sure how those boxes are generated and whether radar plays any part in it).
"If you are following a bus or semi truck where you cannot see the second car ahead, the mass of the truck/bus is so much larger that a collision ahead wouldn't reduce the truck's velocity as much, due to its much higher momentum. Therefore the need for rapid braking is much reduced."

Unless there is another truck or cement mixer ahead of the truck ahead. Or the truck ahead has a high volume-to-weight ratio, due to its current lack of load. There are a lot of cases where this assumption would not be valid.
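For what it's worth, the momentum argument being quoted here can be sketched with textbook collision physics (a perfectly inelastic collision estimate with illustrative masses; not anything the car actually computes):

```python
# Perfectly inelastic collision: the combined wreck moves at
# v_after = m_lead * v_lead / (m_lead + m_struck)  (momentum conservation).
def speed_after_collision(m_lead_kg: float, v_lead_ms: float, m_struck_kg: float) -> float:
    return m_lead_kg * v_lead_ms / (m_lead_kg + m_struck_kg)

v = 31.0  # roughly 70 mph, in m/s

# A heavily loaded ~36,000 kg semi rear-ending a stopped 1,800 kg car
# barely slows down:
print(speed_after_collision(36000, v, 1800))  # ~29.5 m/s

# The same 1,800 kg car hitting another stopped car loses half its speed:
print(speed_after_collision(1800, v, 1800))   # 15.5 m/s
```

An unloaded box truck (high volume, low mass) sits much closer to the second case than the first, which is the objection being made.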
“Almost ready” = not ready.
Elon suggests radar will be removed.
"Sometimes I wonder if Elon is like one of those people you see in movies (and probably real life) who make an *incredible*, statistically unheard-of run at the craps table, and then start getting more and more cocky, and then suddenly the dice stop rolling their way but they bet too big and throw away all their big wins."

Yes - landing rockets back vertically is exactly like someone winning in casinos.
If so, I wonder how far we are from seeing the losing bets getting scooped up by the croupier.
"Yes - landing rockets back vertically is exactly like someone winning in casinos."

SpaceX is very clearly not Tesla.
BTW, this “radar will be removed” should be read in the context of city NOA only.
"SpaceX is very clearly not Tesla."

Similarities:
- Run by a madman (aren't we all?)
- Disregard for the impossible
- Both have accomplished what others deemed impossible
- Elon makes impossible demands of his people and the peeps try to deliver
- Lots of resources to make it happen
- Often late with projects. Example: Falcon Heavy

Differences: only one of them has been taking money from consumers for almost 5 years for a product that can only be used on one device that may not even be functional anymore if the product is ever finished.
And how much memory does the system currently actually have? Do we know? If it doesn't see the car ahead of the car ahead for two minutes, does it still know it is there (like a human)? Again, not sure we can say as long as radar is in the picture.
"In the scenario of 2 high-gross-weight vehicles, the lead vehicle wouldn't be able to stop fast anyway, so that's not much of an issue either."

It's actually not necessary for the vehicle in front of the lead vehicle to stop quickly for this to be problematic (think: inattention on the part of the lead vehicle...). Anyway, I guess you can come up with a variety of non-problematic scenarios if you want, but practically speaking, I've encountered many situations where it is difficult to see the lead car visually - and it hasn't shown up on the visualization either (whether or not the car knows it is there, I have no idea). And it doesn't require extremely high-weight vehicles.
"We do have lots of examples where the car in front of a lead car is shown on the display, however."

Right, due to radar (as far as we know, anyway).
"'Almost ready' = not ready."

Sounds a lot more than "city NOA only".
...
The thread context is city NOA. Anyway, it’s all our interpretation ... until someone asks explicitly.
"SpaceX is very clearly not Tesla."

Tesla is the first successful new automobile manufacturer in the US in the last 100 years.
"Tesla is the first successful new automobile manufacturer in the US in the last 100 years."

Which doesn't change the fact that they've been selling features for about half a decade that they're still years, at best, from delivering.
No, there are already 5 forward-facing cameras when counting the pillars.
5 forward-facing cameras? Are two additional forward-facing cameras going to be added when radar is removed?
"No, there are already 5 forward-facing cameras when counting the pillars."

Perhaps I am not understanding this, but isn't radar an easier and mathematically simpler way to measure distance to an object than vision? With radar, it's just d = v*t/2, with t being the round-trip return time of the radar signal and v the speed of light, right? With a visual image, the computer needs to interpret the objects in the image based on an object-recognition algorithm, then it needs to interpret each object's orientation and compare its shape and size to surrounding objects. Seems like vision is more likely to lead to an error?
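For reference, the radar ranging math really is that simple, with the one round-trip subtlety: the pulse travels out and back, so the range is half of speed times echo time. A sketch with illustrative numbers:

```python
# Radar ranging: the pulse travels out and back at (roughly) the speed
# of light, so range = c * t / 2, where t is the echo return time.
C_M_PER_S = 299_792_458.0

def range_from_echo(t_return_s: float) -> float:
    return C_M_PER_S * t_return_s / 2

# An echo that returns after 1 microsecond puts the target ~150 m away:
print(range_from_echo(1e-6))  # ~149.9 m
```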
I honestly bet that vision-only can't tell these aren't tunnels, or at best gets confused most of the time. Radar & lidar would say "wall" from any distance or approach angle.
Besides, our eyes can easily be tricked by 3D images that aren't actually three-dimensional in the real world, so how does a vision-only driving system solve that?