Welcome to Tesla Motors Club

FSD City release date keeps getting pushed?

FYI, that series of unprotected left turns was well done, great drone angle. That footage was a mix of successes and fails, and it’s exactly what you have to do as a tester, give Tesla regular feedback on the good and the bad. If video footage like that makes you nervous, you shouldn’t accept the beta test download, whichever one they roll out to widen beta testing. If on the other hand you have a tolerance and preparedness for the potential fails and risks (“it may do the wrong thing at the wrong time” is the beta wording), then hop in!
 
The Button will probably not give you anything. It will put you on a list with thousands of other daring testers, but you'll get the software at the same time as everyone else. It does the same job as a placebo pill, and at best it may make the wait feel shorter, much like being able to "choose" to get new software earlier. Pressing it does nothing and is essentially meaningless.
 
IMHO it will keep getting pushed; there is no way this hardware can make a safe enough FSD car. Logic and smart thinking are all it takes to realize that. The camera views are not good enough all around to make it safe to pull into fast-moving traffic in many cases. There will ALWAYS be too many situations where the car cannot perform well enough, and now that the beta is out in the real world, I believe that has become very apparent.
 
My two eyes + cranial neural net would argue otherwise :)
Let us discuss your eyes versus Tesla cameras.

Your two "cameras" have double pan and tilt: your eyes pan and tilt, and so does your neck. The Tesla cameras are fixed in place.
Your eyes also have an auto-focus system, whereas I believe the Tesla cameras are fixed focus.
You also have much better dynamic range. When you get glare in your eyes, do you adapt to the situation by putting on sunglasses (or, even better, polarized sunglasses), or do you just blurt out "driving disabled due to blinded left eye" and pull over to the side of the road?
When you get something in your eye, do you blink to clear it out, or do you just keep it closed, blurt out "Left eye obstructed," and stop driving?

The HW 3.0 ISP still seems to have a hard time with sunrise and sunset. I love the nonstop "front camera blinded" and "cannot read the traffic signal" messages while the car keeps driving anyway. This was a 60 MPH state road with traffic lights, not an interstate, and I drove it for over 80 miles. This is a big glare and dynamic range issue; I fought it for about 20 minutes of a 1.5-hour drive two weeks ago. You cannot have FSD if the car can't read traffic lights!
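For what it's worth, the dynamic range gap is easy to put rough numbers on. A back-of-envelope sketch in Python, using commonly cited ballpark figures (the adapted eye spanning roughly a 1,000,000:1 contrast ratio, a single-exposure CMOS sensor roughly 4,000:1; these are illustrative assumptions, not Tesla specs):

```python
# Back-of-envelope dynamic range comparison (illustrative figures, not Tesla specs).
# "Stops" of dynamic range = log2(brightest / darkest distinguishable luminance).
import math

def stops(contrast_ratio):
    """Dynamic range in photographic stops for a given contrast ratio."""
    return math.log2(contrast_ratio)

eye_with_adaptation = stops(1_000_000)  # ~20 stops as the eye adapts
single_exposure_cmos = stops(4_000)     # ~12 stops for a typical single exposure

print(f"eye (adapted):   {eye_with_adaptation:.1f} stops")
print(f"CMOS (one shot): {single_exposure_cmos:.1f} stops")
```

Every extra stop doubles the usable brightness range, so an 8-stop gap is a factor of ~256 in scene contrast the sensor simply can't capture in one exposure, which is exactly the sunrise/sunset failure mode.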

When they add pan-tilt, auto-focus, self-cleaning cameras with an auto-engageable (optionally polarized) neutral density filter, I will be impressed. All of this exists today, just not in cars.
 
My two eyes + cranial neural net would argue otherwise :)

But seriously... humans are existence proof that two cameras and a neural net should be enough.
No. It has in no way been demonstrated that human-like or animal-like cognition is equivalent to (or can be emergent from) artificial neural networks as they currently exist. ANNs do demonstrate some similarities to biological neuron networks, but they are not equivalent. Also, even if they were functionally equivalent, biological neural systems are still many orders of magnitude deeper than any available ANN systems, except perhaps for some very simple animals.
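To put rough numbers on "orders of magnitude": commonly cited ballpark figures are on the order of 10^14 synapses in a human brain versus perhaps 10^9 parameters for a large production vision network (both figures are illustrative assumptions, not Tesla's actual model size):

```python
# Rough scale comparison (commonly cited ballpark figures, all assumptions):
# the human brain has ~8.6e10 neurons and ~1e14 synapses; a large production
# vision ANN might have ~1e9 parameters (illustrative, not Tesla's actual size).
brain_synapses = 1e14
ann_parameters = 1e9

ratio = brain_synapses / ann_parameters
print(f"synapse-to-parameter ratio: {ratio:.0e}")  # about 5 orders of magnitude
```

Synapses and parameters are not equivalent units of computation, so this is only a crude gauge of scale, but it illustrates why "orders of magnitude" is not an exaggeration.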
 
My two eyes + cranial neural net would argue otherwise :)
Your eyes have about 10 times better angular resolution than Tesla's cameras, and better dynamic range. As a simple check with the dashcam recorder: back slowly away from a parked car and note the distance at which you can no longer read its license plate on screen, then compare that with the distance at which your own eyes can no longer read it (assuming your eyesight is good).
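The arithmetic behind a claim like that is simple to sketch. Assumed figures only (a 20/20 eye resolving about 1 arcminute; the camera taken as 1280 pixels across a roughly 50-degree horizontal field of view, which is an illustration, not a confirmed Tesla spec):

```python
# Rough angular-resolution comparison. Assumed figures (not confirmed Tesla
# specs): a 20/20 eye resolves ~1 arcminute; the camera is taken as 1280 px
# across a ~50 degree horizontal field of view.
eye_arcmin = 1.0

fov_deg = 50.0
width_px = 1280
arcmin_per_px = fov_deg * 60 / width_px  # ~2.3 arcminutes per pixel
camera_arcmin = 2 * arcmin_per_px        # need roughly 2 px to resolve a feature

print(f"camera: {camera_arcmin:.1f} arcmin, eye: {eye_arcmin:.1f} arcmin")
print(f"eye advantage: about {camera_arcmin / eye_arcmin:.1f}x")
```

Under these particular assumptions the eye comes out around 5x sharper; a wider field of view at the same pixel count pushes the gap toward the 10x figure.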
 
Your eyes have about 10 times better angular resolution than Tesla's cameras, and better dynamic range. As a simple check with the dashcam recorder: back slowly away from a parked car and note the distance at which you can no longer read its license plate on screen, then compare that with the distance at which your own eyes can no longer read it (assuming your eyesight is good).
The dashcam recording is a color-processed and compressed image; the AP computer gets the raw data.
 
My two eyes + cranial neural net would argue otherwise :)

But seriously... humans are existence proof that two cameras and a neural net should be enough.
For everyone jumping on Hugh's comment, your comments are valid, but do you disagree with the fundamental premise that eventually "cameras and an ANN will be enough?" I agree with all of you, including Hugh. I think humans have proved "2 cameras and a NN" are good enough, but I also agree we have a ways to go before Tesla's cameras and ANN can emulate a human.
 
For everyone jumping on Hugh's comment, your comments are valid, but do you disagree with the fundamental premise that eventually "cameras and an ANN will be enough?" I agree with all of you, including Hugh. I think humans have proved "2 cameras and a NN" are good enough, but I also agree we have a ways to go before Tesla's cameras and ANN can emulate a human.
Yes, I strongly disagree. An ANN =/= a human brain; they work in fundamentally different ways. Cameras + ANNs MAY be sufficient at some point, but that has not been demonstrated by Tesla or by anybody else in this field for a similarly difficult problem. It may also turn out not to be possible. I am glad somebody is trying it, but as a scientist I have to be skeptical until something is demonstrated. What Tesla has actually demonstrated is that this is a far harder problem than many thought. I also take issue with Tesla pre-selling this to people before they have even shown it is possible.
 
Yes, I strongly disagree. An ANN =/= a human brain; they work in fundamentally different ways. Cameras + ANNs MAY be sufficient at some point, but that has not been demonstrated by Tesla or by anybody else in this field for a similarly difficult problem. It may also turn out not to be possible. I am glad somebody is trying it, but as a scientist I have to be skeptical until something is demonstrated. What Tesla has actually demonstrated is that this is a far harder problem than many thought. I also take issue with Tesla pre-selling this to people before they have even shown it is possible.
Fair points and I agree the ANN =/= brain. Different tools with different capabilities. I'd argue that ultimately computers/ANNs are better suited to solve the driving problem than the brain. The brain may be better suited to solve extremely complex driving edge cases (at least right now), but is also subject to emotion, distraction, etc. The utility of driving automation (saving lives and reducing accidents) is clearly coming much sooner than achieving full L5, but I agree that solving for all the edge cases to achieve L5 will take much longer than certainly Elon or many think.
 
I think there are variants of L5 that will satisfy a lot of people's needs and that are much closer than 100% L5. For example, a semi-populated, developed area is likely going to be a lot closer. Looking at the beta videos and seeing how my current Autopilot performs, I think it's not going to be super far off. It still needs to deal with 'surprise' conditions, but the streets are well marked, the lighting is good, and its primary concern is accident avoidance; I think that is quite achievable.

The biggest gap in this scenario, I think, will be its handling of parking lots on the way to true L5.

L5 in less developed areas, low-light areas, or really high-traffic areas is, I think, going to struggle.

That first scenario though is going to solve for a lot of situations. Particularly if it can go from a suburb, onto a highway, and into a city.

Is that L5? No, not quite, but it will probably satisfy a loose definition of it and satisfy a large number of people.
 
For everyone jumping on Hugh's comment, your comments are valid, but do you disagree with the fundamental premise that eventually "cameras and an ANN will be enough?" I agree with all of you, including Hugh. I think humans have proved "2 cameras and a NN" are good enough, but I also agree we have a ways to go before Tesla's cameras and ANN can emulate a human.
Yes - this was my point. Two cameras and a neural net are enough; we prove it every day. Whether Tesla's cameras and NN are enough is a different question. I was merely responding to the comment that "cameras are not enough," because they are, given the correct implementation.
 
Yes, I strongly disagree. An ANN =/= a human brain; they work in fundamentally different ways. Cameras + ANNs MAY be sufficient at some point, but that has not been demonstrated by Tesla or by anybody else in this field for a similarly difficult problem. It may also turn out not to be possible. I am glad somebody is trying it, but as a scientist I have to be skeptical until something is demonstrated. What Tesla has actually demonstrated is that this is a far harder problem than many thought. I also take issue with Tesla pre-selling this to people before they have even shown it is possible.
I think there is a difference in intent vs interpretation of the original post. My interpretation was that cameras + sufficiently advanced software should eventually be viable for FSD - and I completely agree. So I don't think camera capability and software (NN or otherwise) really matters to the thrust of the point.
 
Yes - this was my point. Two cameras and a neural net are enough; we prove it every day. Whether Tesla's cameras and NN are enough is a different question. I was merely responding to the comment that "cameras are not enough," because they are, given the correct implementation.
Except that there's no evidence at all that a neural net is in any way equivalent to, or could be developed into, what goes on inside our heads. The technology difference is probably much greater than the difference between Babbage's hand-cranked Difference Engine and a GPU.

No matter how many brass cams, gears and levers you built a Difference Engine with, you'd never get it to run a version of Mario Brothers, much less beat an expert at Go.
 
Except that there's no evidence at all that a neural net is in any way equivalent to, or could be developed into, what goes on inside our heads. The technology difference is probably much greater than the difference between Babbage's hand-cranked Difference Engine and a GPU.

No matter how many brass cams, gears and levers you built a Difference Engine with, you'd never get it to run a version of Mario Brothers, much less beat an expert at Go.
Not sure why we even need to compare computers to brains. Of course they are different; in fact, computers/algorithms may be inherently better suited to solving the driving problem than the brain. We simply need a computer/algorithm that solves the driving problem as well as or better than humans, not one that replicates how the brain solves it.

Computers/algorithms can now beat humans at chess, Go, and many other problems where there was once "no evidence."