mtndrew1
Active Member
Yeah even as dissatisfied as I am with FSD and the years of breathless nonsense from the Chief Twit, this UPL video looks…fine (to me).

What video are you watching? None of what you describe is shown.
My bad. I assumed it was planning to turn right.
Human drivers though, not BMW drivers.

Don't see a turn signal.
Oh wait, v12 learns by watching videos of human drivers. Never mind.
This is probably just "redneck invention", but I'm a bit surprised that "modular" networks aren't used. That is, create networks for all sorts of things like handling basic lane keeping, making turns, lane changes, etc., ad nauseam. On top of that, create networks that identify the driving situation and then route processing to the subnet that deals with that situation. So instead of monolithically grinding through ten thousand parameters, most of which are completely uninteresting to the needed outputs, you restrict computation to only the bits that apply to the current situation.

Another interesting question is whether they can actually make progress while the car and the employees are down there. It depends on the turnaround time for them to get a new sub-version out after training back at the mothership using the telemetry from this car. It may require at least one return trip to this location in the future, to test the results of taking and training the data from this current session.
I don't disagree with you at all. With the usual disclaimer that I'm no AI expert, I think there's been a misapprehension in this thread that "end-to-end AI" or "nothing but nets" means some kind of monolithic and formless mass of neurons.
It also allows you to train up on unprotected lefts in essentially real time because the network would - ideally - be quite small because it only addresses the uniqueness of that scenario. Lane keeping, yielding to traffic, etc, would be handled by other networks that are not being trained right then. They only need training when the unprotected left requires additional tuning of lane keeping or yielding.
This is how I figure our brains work, and a monolithic neural network seems like a rubbish idea to me. I'm not figuring out if my computer tends to start dancing on its right or left foot to a Samba at Benny's House of Dance because that bit of my brain doesn't get activated when I'm working on my computer.
Unless I'm replying to a forum post about end to end AI, obviously.
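The modular idea above (a classifier network routing each situation to a small specialist subnet, with only the active subnet being trained) can be sketched in plain Python. Everything here is a hypothetical illustration: the names `SubNet` and `Router`, and the toy keyword-based classifier, are stand-ins for real learned networks, not anyone's actual architecture.

```python
class SubNet:
    """A stand-in for a small specialist network (e.g. lane keeping)."""
    def __init__(self, name):
        self.name = name
        self.updates = 0  # counts training steps applied to this subnet

    def forward(self, observation):
        # A real network would compute steering/throttle outputs here.
        return f"{self.name} handled {observation!r}"

    def train_step(self, observation):
        # Only the subnet the router selects ever gets updated.
        self.updates += 1


class Router:
    """Identifies the driving situation and dispatches to one specialist."""
    def __init__(self, subnets):
        self.subnets = subnets

    def classify(self, observation):
        # Toy rule-based classifier; in the modular scheme this would
        # itself be a learned network.
        if "left" in observation:
            return "unprotected_left"
        if "lane change" in observation:
            return "lane_change"
        return "lane_keeping"

    def forward(self, observation, train=False):
        subnet = self.subnets[self.classify(observation)]
        if train:
            subnet.train_step(observation)
        return subnet.forward(observation)


subnets = {n: SubNet(n) for n in ("lane_keeping", "unprotected_left", "lane_change")}
router = Router(subnets)

router.forward("unprotected left at busy intersection", train=True)
router.forward("highway lane change", train=True)

# Only the routed-to specialists accumulated training steps:
print(subnets["unprotected_left"].updates)  # 1
print(subnets["lane_keeping"].updates)      # 0
```

The point of the sketch is the update counters: training on an unprotected-left clip touches only the unprotected-left specialist, which is what would make near-real-time tuning of one scenario plausible without disturbing lane keeping or yielding.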
So I think in your reply to me, you're assuming that I must fall in the monolithic AI camp here.

Not at all. Your comment about training triggered my distaste for large neural networks, where a monolith is the poster child for the wrong end of the spectrum.
My question for the bored: Does Tesla/Elon have enough information now to know if FSD can be achieved with current tech and planned Dojo upgrades?
Can they define some very difficult diverse problems, train for that, and then extrapolate the results across the full spectrum of issues to know with any certainty if they will succeed in full autonomy without any more breakthroughs?
It’s highly unlikely that Tesla can get to autonomy, even in optimal conditions in a meaningful ODD, with HW3/4. Computer vision alone isn’t there yet. It’s still likely 2-3 research breakthroughs away. Even Waymo has a hard time getting to the reliability needed at highway speeds with all the sensors.
This is probably just "redneck invention", but I'm a bit surprised that "modular" networks aren't used.

It's probably not so.
I guess it is either a bad human driver or a bad ADAS driver. Arguably not as bad as the other featured human driver in this video, though. But unfortunately that is not the bar.
I saw a lot of cautious driving from both the Model Y and the example car, but I wouldn't call either "bad." If the Model Y was being driven by FSD V12 and not manually, it looked very human-like and smooth to me.

You can refer to prior posts. Being cautious is kind of a deal breaker for this function too, and crossing speed is something covered at length previously. I saw one example here where there seemed to be some alacrity, but that is it. Hard to measure exact speed from the video without looking at timing and comparing to prior videos from in car. Possible but boring.
Marketing-driven software development... Ask yourselves why Tesla has to go to an intersection to ensure the software works there. When will they come to the tricky areas in my area? What about that massive data advantage? Level 5 is a pipe dream, and wide-ODD ADAS is not very valuable compared to what everyone else has.

And here is more than just the tease.
You can easily see the driver's hands manipulate the steering wheel multiple times on these turns. They are most likely collecting training data, or using the cameras to 3D-model the turn for simulations.
Doesn’t seem great to me (I’ll leave out the details, since I tire of writing lengthy posts describing all the obvious shortcomings), but it's hard to tell in some cases whether there were interventions and when the car was under manual control.