Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Current autopark is finicky: it requires you to roll slowly past (all the way past) a parking spot that has a car on each side; then you press the brake and it will offer you autopark. I realize that it sucks; that is not lost on me. Don't hate me, as my team implemented it and it only had the Bosch sonar pucks to work with. The new vision autopark should be sweet butter.
So you have no worries regarding the accuracy of vision-only park assist or autopark? I am still slightly in doubt, since the cameras cannot see what's right in front of the front bumper (for example, a high sidewalk you would scrape with the bottom of the bumper).
 
I don't use autopark much and can still successfully park my Model 3. In the same way a human judges distance, I'm sure the algos/neural nets can be programmed to closely estimate the distance to curbs or other objects like poles while the front cameras can still see those objects. There's also the ability to use other parked cars as reference points. I'm sure this is easier said than done, but I don't think this is a big problem to solve.

Given the change in the profile of cars, we've become over-reliant on parking assists such as the proximity sensors and backup cameras. When I learned to drive, we only had eyes and mirrors. I'm personally confident that can be replicated through the camera/vision suite and a software update.
 
No worries, as the cameras will have seen that high sidewalk as the car rolls forward, and the car remembers the environmental features on the way out.

Imagine the car having the ability to scan the environment the way Optimus does when watering flowers. Watch AI Day again to see how fine-tuned they have made the occupancy map at slow speeds.

The car will do something like that with the environment when parking.
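The "remembers the environment" idea can be pictured as a world-fixed occupancy grid that keeps obstacle cells marked even after they leave the cameras' view. A minimal sketch (the cell size, grid extent, and coordinates are all illustrative assumptions, not Tesla's actual representation):

```python
import numpy as np

CELL = 0.1   # meters per grid cell (assumed)
SIZE = 400   # 40 m x 40 m world-fixed grid (assumed)

grid = np.zeros((SIZE, SIZE), dtype=np.uint8)  # 1 = occupied

def mark_obstacles(points_xy):
    """Fuse obstacle detections (world-frame x, y in meters) into the grid."""
    for x, y in points_xy:
        i, j = int(x / CELL), int(y / CELL)
        if 0 <= i < SIZE and 0 <= j < SIZE:
            grid[i, j] = 1

def is_blocked(x, y):
    """Later, while parking, query cells the cameras can no longer see."""
    return bool(grid[int(x / CELL), int(y / CELL)])

# Rolling past a high curb: the cameras see it on approach...
mark_obstacles([(12.0, 3.0), (12.1, 3.0), (12.2, 3.0)])
# ...and the memory still answers once the curb is hidden under the bumper.
print(is_blocked(12.1, 3.0))  # True
```

The real system works on camera-derived occupancy volumes rather than hand-fed points, but the principle is the same: what was seen while rolling by stays queryable while maneuvering.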

 
Without digging into each one, I'll go out on a limb and say that all autopark systems are heavily camera/vision-based, and I'm unsure how much they really use the ultrasonics; this is why autopark systems rely on painted lines or vehicles on either side of the desired spot. Maybe they use the ultrasonics for the fine tuning, but they could easily do that just by allowing a certain margin of safety based on what the cameras see.

But using vision for this in general isn't new or unique; it's what competitors have already been doing. I see the ultrasonics as more useful for the manual parking features, and I will be surprised if cameras ever fully replace that functionality for objects really up close, but the cameras will be *good enough*.
 
The criminal investigation of Tesla is on a short loop up here in Canada, and in the scrolling headlines at the bottom of the screen. It's probably good to get this behind us now. FSD is probably a decade away from robotaxi level, and even a step up to Level 3 is probably years away. Swallow the pill, take the stock hit, give it a name change, and cut the price of the FSD suite to a third of what it is. Build it back up over the next decade as the technology improves. It gets incrementally better every release but is obviously nowhere close to being any kind of "Full Self-Driving" system.

All jmho of course.
 
Great opinion... I take it you have 22k-plus miles on the beta like me?
Oh wait, you probably don't have the beta, just watch videos... or you'll say you do, at which
time we will all want to go to your car and take a picture of the FSD Beta screen.
Go find a person who owns a Tesla with FSD Beta and take a few trips.
Jmho:

Your post is so far off the mark, I would swear Gordon Johnson wrote it.
 
Lol. 😊 Well, to each their own. We have FSD Beta. We love it and it's a lot of fun. It's just nowhere near a Level 3 system, let alone 5. But glad to hear yours is. 😊

Safe travels. 👍
 
Since you're in Canada, it's likely your experience is different. I don't question that.
The system may be overfit to the US road system.
To boot, I have had 13 months of experience and have seen all the changes. Canada got the beta about 4 months ago?? On my commute of 58 miles each way, 50% beta and 50% NoA, I have gone from 7.5 interventions each way to 1.5 in the past year.
Progress is the point.
Tesla and their big-brained engineers will figure the 1.5 out soon.
Last FSD comments for a bit... I can hear the mods' footsteps 👣
Wait, I know how to save it: Hodl!
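For what it's worth, those numbers pencil out to a roughly five-fold drop in intervention rate on that commute. A quick back-of-the-envelope check using only the figures in the post above:

```python
miles = 58                      # one-way commute length from the post
before, after = 7.5, 1.5        # interventions per one-way trip, then vs. now

rate_before = before / miles    # interventions per mile a year ago (~0.13)
rate_after = after / miles      # interventions per mile now (~0.026)

print(round(before / after, 1))   # 5.0  -> five-fold fewer interventions
print(round(miles / after, 1))    # 38.7 -> roughly one intervention per 39 miles
```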
 
Ryan, I think we got FSD in April, so you are not far off. And yeah, maybe there is a Canadian component to the performance aspect. I lived about 5 months of the year in SoCal for 17 years, and I know left turns can be different there, or at least the lane selection leading up to the left turn.

Anyway, I don't doubt there could be a difference between the two countries. We'll continue to drive the same routes and report the fails. It's not a big deal, and neither of us finds it high drama or dangerous. But at least here it is absolutely nowhere near being any kind of hands-off system, although highway travel is quite good.

Cheers.
 
I'm in Canada too, and my takeovers seem to be more frequent than what I see in the FSD videos on YouTube. I think that accounts for some of the difference. Most of my takeovers aren't really safety-related; it's just that the car is too cautious or stops when it doesn't need to.

I think one thing to remember is that we are on the steep part of the AI development curve. The car currently acts like a dumb human, and the time to get from that to acting like a normal human is very short compared to the time it took to get to this point. I definitely think it will be as good as or better than a human by this time next year. It's hard to imagine and predict exponential curves, but when it's here it will surprise us all.
 
Cheers to you! I used to almost live in Canada 🇨🇦... Detroit is just a bridge or tunnel away!
 
Well, I'm on tame suburban roads in Silicon Valley, only a few miles from where the FSD engineers ply their trade. I've been driving FSD beta since 10.5, last November. Sure, it's improved a bunch, but it's nowhere close to being full self driving. For that it would have to be able to drive with nobody in the car. That means Tesla would have to take all liability. It's not even sort of close in its performance.

Of course something could change and there could be significant improvement next week. No telling. But what's there now is nowhere close. Pretending otherwise is blind stupidity. Anybody who buys a Tesla expecting it will drive itself anytime soon is deluding themselves. It might, but it probably won't.
 

I agree with this, and a couple of months ago I explained why. Formerly, I was in the camp that FSD might be ready soon, but the pace of improvement has hit a brick wall instead of getting exponentially better. Each release brings some amazing improvements but then takes a step back in the kind of assertiveness it needs to succeed. Wide release is not going to change that. Sure, it's still improving overall, but the pace has slowed noticeably, not sped up. While it is truly amazing what it can do, and do right, and no other system is even near as capable, it still needs to be 50 or 500 times better than it is (depending upon how you measure it). I don't see that happening in the next six or nine months, and, thankfully, that is not why I am invested in TSLA. I've always looked at autonomy as a bonus lottery ticket that we didn't have to pay for and that we don't know when it will pay out. I've just pushed the likely timeframe out to about two years from now.

The change in the rate at which the system has improved over time tells me something is missing. I'm guessing it's not going fully autonomous without new cameras and/or processor. The current hardware can be made to drive in a rudimentary fashion quite well, and, indeed, it already does, but I think to take it to the next level, the level where it can actually be autonomous, will require 50 to 500 times more "smarts". There is always the possibility that new and very innovative techniques and training approaches could make current hardware sufficient, but I think the quickest and most likely way to get there is with upgraded hardware (most likely processors, but it's possible better cameras would speed things up too). Cameras are cheap.

I definitely don't think this will take a decade which, as has already been pointed out, is an eternity in Elon time.
 
I don't disagree on your (pushback of the) timeline, but I do disagree on the hardware side.

Hardware in the car provides a ceiling for FSD capability, yes, but the current software is nowhere near the full hardware potential.
Current cameras are on the order of 1 MP (1 million pixels of data per camera). The current Tesla FSD chip is designed to handle this video stream (8 cameras times 1 million pixels).

Hardware 4 will have ~5 MP cameras. The FSD chip will have to process 40 million pixels per frame set instead of the current 8 million, while keeping the frame rate steady or better (and not consuming too much power!). It should therefore be around 5 times faster. The current estimate is that the FSD Hardware 4 chip will be around 3.5x faster, with the rest gained through more efficient software (using raw image data instead of processed images, and the like).
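The arithmetic behind that estimate works out as follows (the camera count of 8 and the 3.5x chip figure are the post's rough numbers, not confirmed specs):

```python
cams = 8                            # camera count assumed from the post
hw3_mp, hw4_mp = 1.0, 5.0           # megapixels per camera, HW3 vs. HW4

hw3_load = cams * hw3_mp            # 8 MP per frame set today
hw4_load = cams * hw4_mp            # 40 MP per frame set on HW4

needed = hw4_load / hw3_load        # total speedup needed at the same frame rate
chip = 3.5                          # estimated silicon speedup of the HW4 chip
software = needed / chip            # gap that software efficiency must close

print(needed)                # 5.0
print(round(software, 2))    # 1.43
```

In other words, if the silicon only delivers 3.5x, software would need to be roughly 1.4x more efficient per pixel to keep the frame rate steady.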

Higher resolution, and therefore more compute, would mostly help with recognizing obstacles from farther away, but FSD Beta shows that the problem is moving from the vision/perception side to the planning side.

The first FSD Beta videos showed very poor perception (remember the uncertainty in the lane markings and all?), and it was therefore very unreliable.
Since then the perception has improved greatly, which has led to a much better experience.

However, there is still much to improve on the software side before the local maximum of HW3 is reached.

Of course HW4 is in the works, but just throwing more pixels and compute into the picture is not going to improve FSD by leaps and bounds. It's the NN training that Tesla does in house that improves the FSD capability.

That's the current bottleneck. If Dojo works as intended, we could see an increase in the rate of improvement starting Q2/Q3 of 2023.
 
I think they might need camera AND processor upgrades, at least to achieve a complete solution. I think the current hardware MIGHT work, but it would become increasingly difficult to add improvements without causing degradation in other areas. In other words, I think the current hardware is, at best, marginal for an FSD system that would work in all environments and cover enough edge cases to be considered acceptable.

And, no, higher resolution cameras do not imply all pixels need to be processed all the time. I think the winning solution will need to dynamically scale the processing to different pixel volumes, depending upon the situation. Sometimes it will process the entire field of pixels (but only every other pixel) and other times it will process cropped images at full resolution. This will require a second AI layer to determine what to process at any given moment.
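A toy version of that two-mode idea, under made-up assumptions about sensor size and pixel budget: survey the whole frame at half resolution most of the time, or spend the same pixel budget on a full-resolution crop of a region flagged as fast-developing.

```python
import numpy as np

def select_pixels(frame, roi=None):
    """Return a view of `frame` for processing under a fixed pixel budget.

    roi: (row, col, height, width) of a region flagged as fast-developing,
    or None to survey the whole scene.
    """
    if roi is None:
        # Survey mode: subsample the whole field (every other pixel each axis).
        return frame[::2, ::2]
    r, c, h, w = roi
    # Focus mode: full-resolution crop around the region of interest.
    return frame[r:r + h, c:c + w]

frame = np.zeros((1440, 1920), dtype=np.uint8)        # assumed ~2.76 MP sensor
survey = select_pixels(frame)                          # 720 x 960, wide coverage
focus = select_pixels(frame, roi=(0, 800, 720, 960))   # 720 x 960, full detail
print(survey.size == focus.size)  # True: equal compute, different coverage
```

The second "what to look at" decision the post describes would sit on top of this, choosing the `roi` each frame; that part is the genuinely hard bit and is only gestured at here.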

The primary failing of the system appears to me to be that it can't recognize fast-developing situations as quickly as I can, because it's not looking far enough ahead. This is why I think the system needs to process intelligently chosen subsets of the entire scene to succeed.

Time will tell.
 
For timeline estimates, I tend to focus on what pieces are missing.

Relatively few, from my drives. It is basically the lane connectivity map and improvements to the neural planner, so it changes lanes sooner when it needs to, uses the turn signal appropriately, and handles parking.

Cameras and compute in the car do not need to improve for this to happen. Dojo does need to happen, though, as the march of 9s is super freaking hard (i.e., 100x more training is needed). Inference in the car is not compute-bound, nor limited by the resolution, color depth, refresh rate, etc. of the cameras.
 

I've experienced some rare cases where better velocity estimation would greatly improve safety and performance. But Elon is already expecting "step-change" improvements in velocity estimation in 10.69.3.
So timeline-wise I'm hoping my issues there are solved in weeks to months.
 