Quote: "It seems extremely unlikely City Streets could be released to the general public in its current form, or in a similarly functioning form that could exist a month from now."

And yet ....

Elon said: "We should be there with Beta 10, which goes out a week from Friday. Another few weeks after that for tuning & bug fixes. Best guess is public beta button in ~4 weeks."

Step by step.

The only true commitment here is Beta 10 going out a week from Friday; everything else should be interpreted as aspirational. Hope for the best, plan for the worst, and you'll limit potential disappointment.
Quote: "Step by step."

It doesn't mean anything.
Quote: "Step by step."

Will beta testers even see 9.3 if 10 is coming out next Friday?
Quote: "Will beta testers even see 9.3 if 10 is coming out next Friday?"

I don't understand these ignorant comments. AI Day showed they are rebuilding NN models weekly. I prefer to listen to experts in their field.
What do we really think will have changed in that time? The proof is in the pudding: have low expectations and you'll be pleasantly surprised by an overshoot rather than disappointed by under-delivery.
Quote: "I don't understand these ignorant comments. AI Day showed they are rebuilding NN models weekly. I prefer to listen to experts in their field."

What does "rebuilding NN models weekly" mean in terms of actual improvement for an end user? I've watched Lex's video on AI Day, I watched the autonomous-driving presentation at AI Day, and I've probably watched all of the beta-tester videos posted by people with YouTube accounts. That is the opposite of ignorance. You could call me a pessimist, although I'd prefer to call myself a realist.

Lex Fridman has a breakdown and overview of the key points. He is well respected and has done a variety of interviews in which his knowledge and understanding are demonstrated.

Lex Fridman -- About: "Research in human-centered AI and deep learning at MIT and beyond, in the context of autonomous vehicles and personal robotics. I'm particularly interested in understanding human behavior in the context of human-robot collaboration, and in engineering learning-based methods that enrich that collaboration. I received my BS, MS, and PhD from Drexel University, where I worked on applications of machine learning, computer vision, and decision-fusion techniques in a number of fields, including robotics and human sensing. Before joining MIT, I was at Google working on machine learning for large-scale behavior-based authentication."
Quote: "What does "rebuilding NN models weekly" mean in terms of actual improvement for an end user?"

Your comment makes no sense. You still don't understand the NN training and its benefit after watching the Lex video? [update: 8:28 -- summary of three key ideas, covering the iterative process and its benefit] I'm not sure what that says. There has been constant improvement over the past several months; I don't know what it says about your understanding that you are not recognizing that. They are putting out an FSD Beta every few weeks, retrained on the scenario failures. They set up criteria/triggers for the unique cases they need, the fleet looks for those cases, and matching clips can be uploaded for training.
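The data-engine loop described above (define a trigger, harvest matching clips from the fleet, retrain, redeploy) can be sketched as a minimal toy loop. Everything here -- the `Clip` type, the trigger predicate, the cycle function -- is a hypothetical illustration of the pattern, not Tesla's actual pipeline.

```python
# Hypothetical sketch of a fleet "data engine" cycle: a trigger predicate
# selects interesting clips from the fleet, which are added to the training
# set for the next retraining round. All names are illustrative only.
from dataclasses import dataclass


@dataclass
class Clip:
    scenario: str            # e.g. "unprotected_left", "cut_in"
    model_disengaged: bool   # did the driver have to intervene?


def trigger(clip: Clip) -> bool:
    """Campaign trigger: collect clips where the model was disengaged."""
    return clip.model_disengaged


def data_engine_cycle(fleet_clips, training_set):
    """One cycle: harvest triggered clips, grow the training set."""
    harvested = [c for c in fleet_clips if trigger(c)]
    training_set.extend(harvested)
    return len(harvested), len(training_set)


fleet = [Clip("cut_in", True), Clip("lane_keep", False), Clip("unprotected_left", True)]
train = []
new, total = data_engine_cycle(fleet, train)
print(new, total)  # 2 clips harvested, training set now size 2
```

In a real system the "retrain" step would follow each cycle; the point is only that the trigger criteria, not a human reviewer, decide which fleet data gets uploaded.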
Quote: "Will beta testers even see 9.3 if 10 is coming out next Friday?"

Sorry, I understand now what you meant by the above; I didn't see that. I misunderstood your comment's meaning and thought you were just being facetious.
Recently, Musk did comment that FSD Beta Version 9.2 is "not great." He also said 9.3 is "much improved." Not long after those comments, Musk made it clear that there will no longer be a 9.3, or any incremental point upgrade coming soon. Instead, he says Tesla is planning to jump straight to Version 10, which Musk indicated will roll out next week.
Quote: "Your comment makes no sense. You still don't understand the NN training and its benefit after watching the Lex video? ..."

Is there a pre-9.2 video of this exact drive that we can use to identify improvements?
Re: monumental challenge -- I agree, and I don't have any predictions. I think drivers will be responsible for monitoring the car for a very long time, but it will do more and more of the work.
Perfect example of the advancements:
Quote: "Douma disagreed with green on WHY there was no longer redundancy due to borrowing compute from Node B -- he did not disagree that that was the current state of things, though. Which probably puts him in the 'they will maybe fix it later on' camp, but as things stand now, no redundancy."

Well, for the purpose of whether HW4 is required, we only really care whether they spilled over because they ran out of capacity, not whether it was for other reasons.
Quote: "If you lose perception from only a subset of cameras, or only for a subset of perception functions, it'll still be fine. There only needs to be enough to either pull the car over in its own lane or pull to the shoulder if there is one."

Except there's no evidence they can do that.

Remember, in the production code the only thing they're using NNs for right now is perception (Green reconfirmed that just yesterday). And that's split across both sides. If one side fails, you lose perception. How do you "fail safely" at that point?
Quote: "They need the perception stack running fully on both sides to be able to do that. Which, if they could do that, they wouldn't be splitting it between the sides. (And it's not like, when one side crashes, they can then decide to spin up a bunch of extra NNs on the other side to take over perception anyway -- it's too late by then.)"

They don't have to spin up anything. Just as in the full-redundancy case, the other side would already be running; instead of running all the functions, it would run a minimal-risk watchdog that can take over as soon as it detects that the other node has shut down.
Quote: "The fact that HW3 could survive a failover of one side was one of the major things they hyped about it at Autonomy Day."

Again, with what I'm suggesting, it can survive a failure of one side; it's just that instead of continuing with full function, it will have partial function (sufficient to bring the car to a safe stop).
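The minimal-risk-watchdog idea being argued here can be sketched in a few lines: the primary node emits heartbeats while running the full stack, and a lightweight watchdog on the other node drops into a degraded "safe stop" mode once the heartbeats go quiet. The class, mode names, and timeout below are purely hypothetical; this shows the failover pattern, not Tesla's firmware.

```python
# Hypothetical heartbeat watchdog: node A runs full function and emits
# heartbeats; node B runs only this watchdog, which commands a minimal-risk
# "safe stop" once heartbeats stop arriving. Timings are illustrative.
HEARTBEAT_TIMEOUT = 0.2  # seconds of silence before failover


class Watchdog:
    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_beat = 0.0
        self.mode = "monitor"  # switches to "safe_stop" on failure

    def heartbeat(self, now: float) -> None:
        """Called whenever the other node checks in."""
        self.last_beat = now

    def tick(self, now: float) -> str:
        # If the other node has gone silent for too long, take over with
        # degraded function: just enough to stop in-lane or pull over.
        if self.mode == "monitor" and now - self.last_beat > self.timeout:
            self.mode = "safe_stop"
        return self.mode


wd = Watchdog(HEARTBEAT_TIMEOUT)
wd.heartbeat(0.0)
print(wd.tick(0.1))   # monitor   (heartbeat still fresh)
print(wd.tick(0.35))  # safe_stop (node silent past the timeout)
```

The design point in the post is exactly this asymmetry: the standby side never needs the full perception stack, only enough compute for the watchdog plus the degraded safe-stop function.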
Quote: "So, other than the idea that Tesla is just writing terrible, massively bloated code that they'll somehow be able to add a ton MORE capability to and then also massively shrink in compute, I don't see how you get above L3 (or even L2, really) without HW4 (if that's even enough, since they won't actually know until they solve it)."

L2/L3 does not require redundancy; they will really only run into the problem at L4/L5. Presuming they have run out of capacity in one node, they'll have to weigh working on the software to fit things on HW3 against doing another retrofit.
Quote: "The Great Camera Debate"

I really don't think there's much of a debate at this point. I got criticized both here and on r/Tesla when I echoed earlier comments by others regarding the absolute necessity of an additional camera or cameras in the nose of the car. I based my conclusion on watching your videos. I also pointed out that, based on your videos, the car has only 3 to 4 seconds at most to react and execute an unprotected left onto a busy highway. Since then, I have been favoring adding side-facing cameras to the headlight assemblies (not my idea). The headlights could also be made with their own cleaning mechanism, which is not uncommon in luxury vehicles.
Quote: "I really don't think there's much of a debate at this point."

I'll still debate it.
Quote: "I guess the one main difference between land and air is ATC."

Ever heard of speed limits, stop and yield signs, and traffic signals? Sure, no one is radioing instructions, but the sentient beings behind the wheel (most of them, anyway) treat those signs as controls.
Quote: "Effective stitching requires a LOT of overlap. That's my main beef with the limited cameras -- and their placement -- on Teslas. Stitching static images with cheap, distorted lenses (like these cams have) is hard when the overlap is minimal. Add speed and bad weather, and it's impossible to have good coverage that updates as fast as safety requires without lots of fixups to blend the images together properly."

The recent AI Day presentation demonstrated that they do have cross-camera object recognition (recognizing objects that straddle two cameras) instead of relying on stitching.
Quote: "The recent AI Day presentation demonstrated that they do have cross-camera object recognition (recognizing objects that straddle two cameras) instead of relying on stitching."

Maybe I misunderstood, but I thought they used the cameras to build a "world model" and then display from that.
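The "world model" point can be illustrated with a toy example: instead of stitching pixels, each camera's detection is projected into one shared vehicle-centric frame, so an object straddling two cameras collapses into a single entry in that frame. The geometry, camera angles, and merge radius below are purely illustrative and are not how Tesla's network actually works.

```python
# Hypothetical illustration: project each camera's detection into a common
# vehicle-centric frame and merge detections that land close together
# (one object seen near the seam of two cameras becomes one object).
import math


def to_vehicle_frame(cam_yaw_deg: float, bearing_deg: float, dist_m: float):
    """Convert a camera-relative bearing/distance into vehicle-frame x, y."""
    angle = math.radians(cam_yaw_deg + bearing_deg)
    return (dist_m * math.cos(angle), dist_m * math.sin(angle))


def merge(points, radius_m: float = 1.0):
    """Greedily merge detections closer than radius_m into one object."""
    objects = []
    for p in points:
        for i, q in enumerate(objects):
            if math.dist(p, q) < radius_m:
                objects[i] = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
                break
        else:
            objects.append(p)
    return objects


# The same car seen near the seam of a front camera (yaw 0 deg) and a
# right-pillar camera (yaw -60 deg): both project to roughly the same spot.
a = to_vehicle_frame(0.0, 30.0, 10.0)
b = to_vehicle_frame(-60.0, 90.5, 10.0)
print(len(merge([a, b])))  # 1 -- merged into a single world-model object
```

Note that no image stitching happens anywhere here: the fusion is done in the shared output space, which is the distinction the AI Day comment above is drawing.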