Raurele
> Thank you!
I added some more details in the forum post I made too.
> I wish I had never started this thread. ......
Why? This thread is one that never drops far down, and is always one of the most active. If the thread title wasn't accurate, this thread would not have gotten far at all...
> I added some more details in the forum post I made too.
Some good tips that I hadn't thought about on my own. Score 100 today.
> Some good tips that I hadn't thought about on my own. Score 100 today.
Yay! Success! Good to hear.
> I also want to give you a shoutout for taking the time to help people. The main reason I frequent these forums is to learn from fellow Tesla owners and to offer help whenever I can.
I actually left the forums for a while because it was just constant complaints and kind of an echo chamber of anger. Thought I'd try to contribute!
Hope you get accepted to FSDb soon
> I actually left the forums for a while because it was just constant complaints and kind of an echo chamber of anger. Thought I'd try to contribute!
I know the feeling. No matter how hard you try to stay positive and offer help...
> I wish I had never started this thread. I do almost all the things mentioned by @Raurele, and my eight-month SS average is 99 using the beta on nearly 100% of my drives when possible/sensible. Since my last update most of the "gonna kill me" issues have disappeared - no more moving left into the yellow and on-coming traffic to get over for merging cars, and in general I am having good results and fewer disengagements. At this point my biggest gripe is when the car informs me it is changing lanes to follow the route, then immediately moves out of that lane and into a "faster" lane and, because of traffic, cannot get back to where it was to take the turn or exit. It also needs LOTS of work in construction zones, as FSD is unusable when it detects too many cones. Looking forward to 12.2.
I remember when you started it and thinking "*sigh* this is going to stir some *sugar* up" lol.
> Strange analogy, as flapping wings have nothing to do with perception or decision-making, and I assume your sarcasm is a dig at Tesla for not using more perception tools like radar/lidar. Also, last I checked, I don't have 8 eyes with one of them on my butt.
> Why? This thread is one that never drops far down, and is always one of the most active. If the thread title wasn't accurate, this thread would not have gotten far at all...
Like most of the content you share!
This analogy is common in the AI world. AI systems don't work the way human brains do. Or, more correctly, we have very limited knowledge of how human brains do it, but we know it is quite a bit more complex than what deep neural nets and other modern AI technology do.
That humans can drive with just cameras and their immensely more powerful brains does not mean that cameras are the right approach for computerized driving systems. Humans do a lot of things with their brains, and there is little current evidence that computers can or should use the same approaches.
> A proper analog to the flapping wings to build an airplane concept would be to put legs on a car, and that's clearly not relevant to the problem at hand (perception and decision-making).
Hey, the Flintstones used legs on a car for "full self" driving all the way back in 1960! How's that not relevant?
> I don't think it's clear yet that a vision-only approach to driving cannot succeed. There might be "little current evidence" today because no one has been bold enough to try it, and AI/ML is an emerging technology.
Indeed, it's not clear that vision can't work. In fact, it's clear vision can work. However, that doesn't in any way mean it is the best or most likely approach. Most teams bet on regular vision plus the superhuman abilities of lidar, radar, and sometimes thermal vision. We don't know how to process images the way brains can, so we look for tools that give advantages over vision. They cost more, but as the Tesla master plan teaches, you start expensive and then make it cheap later. Elon is trying to build the Model 2 of self-driving systems while others correctly work on the Roadster first.
At any rate, I've appreciated some of your past posts and articles for their thoughtfulness. The flapping wings comment just seemed like a cheap shot, and it still doesn't seem relevant. A proper analog to the flapping wings to build an airplane concept would be to put legs on a car, and that's clearly not relevant to the problem at hand (perception and decision-making).
> I read an article about someone bypassing it by using a fairly heavy weight (like a bowling ball) and fastening the seat belt, then putting a weight on the steering wheel, and then sitting in the passenger seat to have the car drive itself. FSD Beta requires the cabin camera (if equipped), so that may be more difficult to bypass, but basic AP and TACC would work with that setup. However, if one were to go through all those steps to bypass safety systems, that person deserves what's coming to them IMO.
Except that the other folks on the road who may be very negatively impacted do NOT deserve whatever happens.
> …the user manual tells me (and in fact it does) that it may brake when it shouldn't, and it may not brake when it should. The fact that it says that in the user manual doesn't change the fact that it sucks.
Some, like me, might argue the manual serves to confirm the suckage.
> The top-down approach was intended to fund future R&D with a high-margin initial product. Makes sense for startups. Isn't necessary for Tesla anymore, given they are swimming in cash.
It doesn't matter how much cash Tesla has. Google, Apple, and Amazon all have much more than Tesla, and even the startups have billions. Tesla isn't trying to use cameras to reduce R&D cost! They want to make the car cost less; in particular, they want it to work with the sensors in a 2016 Model S.
I think if Elon's only goal was to develop an AV, he might have taken the roadster top-down approach. But his first priority was to get as many people onto a more sustainable mode of transport, which necessitated keeping costs and energy consumption low. This naturally tilts the favor away from additional sensors and toward the model 2/Q bottom-up approach. That large fleet's ability to generate telemetry aligns naturally with ML and NN training.
I don't know if "success is a subset of possible outcomes," but I love that he's pushing AI tech forward beyond what is industry-typical. This type of push is what he did in an auto industry that has been technology-stagnant for a century. Likewise for the space industry. The current launch cadence is insane.
I suppose we can bet against him on his attempt at sensorphobic AVs, especially since he's been embarrassingly wrong with FSD timelines, but I tend to think he'll eventually solve it (commercially viable L4/5), albeit later than he predicts. And even if other companies have reached a comparable general scalable solution first, Tesla will likely have a cost and efficiency advantage because they approached it from the bottom up.
> Doing it with just vision is harder than doing it with vision + lidar + radar. Elon tries to argue that's not true, and I understand his point but don't agree with it, and nor do most people. Most people think, in order to get this going so you can move to step 2, you use every tool that makes sense. Then you try to refine it to make use of less.
It's quite possible that lidar-quality environment data can be reliably and consistently constructed from pure vision. The problem is that it takes hundreds of milliseconds to do so, whereas with lidar you have the 3D data instantly. That's a huge latency difference, and it's why Tesla is constantly working to shave off milliseconds here and there. Perhaps HW4 or HW5 will cut this to tens of milliseconds, which would close the lidar gap considerably. I generally agree with Tesla's overall approach, but I think Elon's timeline ("safer than a human by the end of the year") is hopelessly optimistic. (Oxymoron notwithstanding.)
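To put the latency point in perspective, here is a back-of-the-envelope sketch of how far a car travels while the perception stack is still processing a frame. The specific latency figures (300 ms vs. 30 ms) and the 70 mph speed are illustrative assumptions for the "hundreds vs. tens of milliseconds" comparison above, not measured values for any actual system:

```python
# Toy arithmetic: distance covered while waiting on perception output.
# All numbers are illustrative assumptions, not real measurements.

def distance_during_latency(speed_mph: float, latency_ms: float) -> float:
    """Meters traveled during the given perception latency at the given speed."""
    speed_mps = speed_mph * 0.44704  # convert mph to meters per second
    return speed_mps * (latency_ms / 1000.0)

# Compare "hundreds of ms" (vision-reconstructed depth) with "tens of ms".
for latency_ms in (300.0, 30.0):
    d = distance_during_latency(70.0, latency_ms)  # assumed highway speed
    print(f"{latency_ms:.0f} ms latency at 70 mph ≈ {d:.1f} m traveled")
```

At an assumed 70 mph, 300 ms of latency means the car covers roughly nine meters (about two car lengths) before the stack's view of the world catches up, while 30 ms shaves that to under a meter, which is why closing that gap matters so much.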