Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

The catastrophe of FSD and erosion of trust in Tesla

I wish I had never started this thread. I do almost all the things mentioned by @Raurele and my eight-month Safety Score average is 99, using the beta on nearly 100% of my drives when possible/sensible. Since my last update most of the "gonna kill me" issues have disappeared - no more moving left into the yellow and oncoming traffic to get over for merging cars, and in general I am having good results and fewer disengagements. At this point my biggest gripe is when the car informs me it is changing lanes to follow the route, then immediately moves out of that lane and into a "faster" lane and, because of traffic, cannot get back to where it was to take the turn or exit. It also needs LOTS of work in construction zones, as FSD is unusable when it detects too many cones. Looking forward to 12.2.
 
I also want to give you a shoutout for taking the time to help people. The main reason I frequent these forums is to learn from fellow Tesla owners and to offer help whenever I can.

Hope you get accepted to FSDb soon :D
I actually left the forums for a while because it was just constant complaints and kind of an echo chamber of anger. Thought I'd try to contribute!
 
I remember when you started it and thinking “*sigh* this is going to stir some *sugar* up*” lol
 
Strange analogy, as flapping wings have nothing to do with perception or decision-making, and I assume your sarcasm is a dig at Tesla for not using more perception tools like radar/lidar.

Also, last I checked, I don't have 8 eyes with one of them on my butt.
This analogy is common in the AI world. AI systems don't work the way human brains do. Or, more correctly, we have very limited knowledge of how human brains do it, but we know it is quite a bit more complex than what deep neural nets and other modern AI technologies do.

That humans can drive with just cameras and their immensely more powerful brains does not mean that cameras are the right approach for computerized driving systems. Humans do a lot of things with their brains, and there is little current evidence that computers can or should use the same approaches.
 

I don't think it's clear yet that a vision-only approach to driving cannot succeed. There might be "little current evidence" today because no one has been bold enough to try it, and AI/ML is an emerging technology.

At any rate, I've appreciated some of your past posts and articles for their thoughtfulness. The flapping-wings comment just seemed like a cheap shot, and it still doesn't seem relevant. A proper analog to the "flapping wings to build an airplane" concept would be to put legs on a car, and that's clearly not relevant to the problem at hand (perception and decision-making).
 
Indeed, it's not clear that vision can't work. In fact, it's clear vision can work. However, that doesn't in any way mean it is the best, or most likely, approach. Most teams bet on regular vision plus the superhuman abilities of lidar, radar, and sometimes thermal vision. We don't know how to process images the way brains can, so we look for tools that give advantages over vision. They cost more, but as the Tesla master plan teaches, you start expensive and then make it cheap later. Elon is trying to build the Model 2 of self-driving systems while others correctly work on the Roadster first.
 
I read an article about someone bypassing it by using a fairly heavy weight (like a bowling ball), fastening the seat belt, then putting a weight on the steering wheel, and then sitting in the passenger seat to have the car drive itself.

FSD Beta requires the cabin camera (if equipped) so that may be more difficult to bypass, but the basic AP and TACC would work with that setup. However, if one were to go through all those steps to bypass safety systems, that person deserves what's coming to them IMO.
Except that the other folks on the road who may be very negatively impacted do NOT deserve whatever happens.
 

The top-down approach was intended to fund future R&D with a high-margin initial product. That makes sense for startups, but it isn't necessary for Tesla anymore given they are swimming in cash.

I think if Elon's only goal was to develop an AV, he might have taken the Roadster top-down approach. But his first priority was to get as many people as possible onto a more sustainable mode of transport, which necessitated keeping costs and energy consumption low. This naturally tilts the balance away from additional sensors and toward the Model 2/Q bottom-up approach. That large fleet's ability to generate telemetry aligns naturally with ML and NN training.

I don't know if "success is a subset of possible outcomes," but I love that he's pushing AI tech forward beyond what is industry-typical. This type of push is what he did in an auto industry that has been technology-stagnant for a century. Likewise for the space industry. The current launch cadence is insane.

I suppose we can bet against him on his attempt at sensorphobic AVs, especially since he's been embarrassingly wrong with FSD timelines, but I tend to think he'll eventually solve it (commercially viable L4/5), albeit later than he predicts. And even if other companies have reached a comparable general scalable solution first, Tesla will likely have a cost and efficiency advantage because they approached it from the bottom up.
 
It doesn't matter how much cash Tesla has. Google, Apple, Amazon all have much more than Tesla, and even the startups have billions. Tesla isn't trying to use cameras to reduce R&D cost! They want to make the car cost less, in particular they want it to work with the sensors in a 2016 Model S.

The Tesla Master Plan approach is to get it working first with an expensive version, learn and eventually make a cheaper version. Cheaper for customers. All the other teams are working to try to be the first to get a working robocar (robotaxi actually) and then to refine and improve it. Waymo has actually done it, at least in easy driving environments, and now in most of SF.

Doing it with just vision is harder than doing it with vision + lidar + radar. Elon tries to argue that's not true; I understand his point but don't agree with it, nor do most people. Most people think that, in order to get this going so you can move to step 2, you use every tool that makes sense. Then you try to refine it to make use of less.
 
It's quite possible that lidar-quality environment data can reliably and consistently be constructed from pure vision. The problem is that it takes hundreds of milliseconds to do so, whereas with lidar you have the 3D data instantly. That's a huge latency difference, and it is why Tesla is constantly working to shave off milliseconds here and there. Perhaps HW4 or HW5 will cut this to tens of milliseconds, which would close the lidar gap considerably. I generally agree with Tesla's overall approach, but I think Elon's timeline ("safer than a human by the end of the year") is hopelessly optimistic. (Oxymoron notwithstanding.)
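To put those latency numbers in perspective, here's a quick back-of-the-envelope sketch. The 300 ms and 30 ms figures are just the ballpark estimates from this discussion, not measured values from any actual perception stack; the calculation simply shows how far a car travels while perception is still "thinking."

```python
# Illustrative only: how far does a car travel during perception latency?
# The latency figures below are hypothetical ballparks, not measured numbers.

def distance_during_latency(speed_mph: float, latency_s: float) -> float:
    """Distance in meters covered while the perception stack is still processing."""
    meters_per_second = speed_mph * 1609.344 / 3600  # mph -> m/s
    return meters_per_second * latency_s

for latency_ms in (300, 30):
    d = distance_during_latency(70, latency_ms / 1000)
    print(f"{latency_ms:3d} ms at 70 mph -> {d:.1f} m of travel before a decision")
```

At 70 mph, a few hundred milliseconds of extra perception latency means roughly a couple of car lengths of travel before the system can react, which is why shaving it to tens of milliseconds matters.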