BI is now a paid Tesla Shill..lol
For clarification - TX is NOT building 2170 cells. They are building PACKS from 2170 cells that they bring in from GF Nevada.
Site Leader Eric Montgomery noted during the meeting that August 2022 was Giga Nevada’s second-best month of production, second only to October 2021. Montgomery also noted that Giga Nevada has to achieve a steady output of 8,800 high-voltage battery packs per week to support the company’s aggressive vehicle production plans.
Do you have a reference stating those 2170 packs are being built in Austin, and not imported from Reno with 2170 cells already installed? Could well be the case, as the math below shows:
Leaked Tesla comments from Sep 8, 2022 revealed battery pack output from Giga Nevada. It seems they are making enough packs (likely all LR) to supply both Fremont and Austin Model Y production.
Tesla Giga Nevada exceeds 6,500 Powerwall per week
Keep in mind that we heard from public documents, via @carsonight on Twitter, that Tesla Fremont is also importing 1,000 LFP packs per week from China (those go to the Fremont Model 3 SR+). And we know Fremont is currently manufacturing 12K cars/wk per the article above, which includes a max of 2K/wk Model S/X.
So 3K/wk of S/X and LFP Model 3 leaves ~9K/wk more packs needed at Fremont. And per Eric Montgomery (above), Giga Nevada is building 8.8K packs/wk. So if Austin needs another ~1K packs/wk, those could well be assembled in Austin. To me, though, that seems like more effort than just increasing production at Giga Nevada, and also a waste of resources if the near-term plan (or even the mid-term plan) is to switch to 4680 cells.
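The back-of-envelope math above can be written out explicitly. All figures here are the thread's estimates (12K cars/wk at Fremont, 2K/wk S/X, 1K/wk LFP imports, 8.8K packs/wk from Nevada), not official Tesla data:

```python
# Rough weekly pack math using the figures quoted in this thread.
def packs_needed_from_nevada(fremont_cars=12_000, sx_cars=2_000,
                             lfp_imports=1_000, nevada_output=8_800):
    """Return (2170 packs Fremont still needs, Nevada's surplus after Fremont)."""
    # S/X use their own packs and SR+ uses imported LFP, so subtract both
    fremont_need = fremont_cars - sx_cars - lfp_imports
    # Whatever Nevada builds beyond Fremont's need could go to Austin
    surplus = nevada_output - fremont_need
    return fremont_need, surplus

need, surplus = packs_needed_from_nevada()
print(need, surplus)  # 9000 -200
```

The slightly negative surplus is the point: Nevada's 8.8K/wk roughly covers Fremont alone, so any 2170 Model Ys in Austin imply either extra Nevada output or packs assembled elsewhere.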
Cheers!
Do we have any idea what the ratio of 4680 to 2170 Model Y cars currently being produced in Austin is?
HA, haven't heard that in like 10 years. Does this person know that every car maker has adopted this approach?
Maybe I was not clear. I was not suggesting that Tesla could grow faster than it can. I was suggesting that the advance of the mission will accelerate only when major competitors emerge; that is to say, if they offer retreaded ICE as BEVs they'll not be major competitors. When an otherwise attractive Mustang BEV arrives that has spaghetti connections and excessive complexity, that really will not advance the cause.

The former part of your post was mostly agreeable, but this part I have to disagree with.
Not because it wouldn't be super beneficial to the planet for Tesla to have great competitors, but because Tesla is not going to be pushed faster by any force other than the established mission.
I've been invested in Tesla since 2015 but never owned one. Today I went to a local showroom and ordered my first Tesla, to be paid for fully with TSLA gains. Part of it is thanks to the TMC community, which gave me the confidence to see through the FUD and to HODL.
Excited about my future blue Model Y! Should arrive by February.
Give them movies to watch or games to play... they'll be happy and at their destination before they know it!

Ah, no. It will be like riding in a non-window seat on a plane.
OK, cool. I wrote the specs for some of the code; how's that? And yes, it was a while ago, but they are still using it in some form or fashion, as I can still glean/grok the output behavior.
The Model 3 packs are backfilled with intumescent goo, which was supposed to help prevent such fires to begin with (or at least they were at one point in their design).

You would think cutting off the oxygen would be a better approach to electrical fires... (I propose a big fireproof blanket)
Maybe a chemical-based suppressant needs to be developed to rapidly cool the battery.
Sure. As you can see from the car's real-time vision, the lane lines are moving around constantly (they vibrate, wiggle, appear, disappear, etc.); these are being reasoned through computer-vision neural networks in non-deterministic pixel space (vs. deterministic vector space). Pixel space is too non-specific to place a nice, straight, non-moving wall drawn at the closest possible position to the lane without being unsafe. If this were being done in pixel space, the car would most likely end up in the lane in some high percentage of cases, or stop way behind the lane in another percentage of cases. Only with ground truth (most likely mapped vector space) could that wall be drawn and trusted to such a high degree, and that is done by reasoning through several ground-truth markers in the scene and calculating angles and distances.

Then, feeding that ground truth into the trained NN models allows them to be much more confident and quicker in their outputs, and more robust. This also greatly benefits path planning, which can take ground-truth inputs and plan paths that don't wiggle, since the input is deterministic and non-changing as the car moves through the scene; thus turns that are somewhat occluded or blind to the real-time system are now much smoother. Case in point: when the car commits beyond the creep wall and moves to an actionable pose in the median box. This is now smooth because that median space is trusted as ground truth (vector space to some degree), and the path planner does not need to re-plan nearly as much as it used to for that kind of challenging maneuver.
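The deterministic-vs-noisy distinction above can be shown with a toy sketch (my own illustration, not anything from Tesla's code): a per-frame "pixel space" estimate of a stop line jitters with detection noise and keeps triggering re-plans, while a mapped "vector space" anchor stays fixed frame to frame:

```python
# Toy comparison: noisy per-frame estimate vs. a fixed mapped anchor.
import random

random.seed(0)
true_stop_line_m = 12.0  # ground-truth distance to the stop line

# "Pixel space": a fresh, noisy estimate every frame
pixel_estimates = [true_stop_line_m + random.gauss(0, 0.5) for _ in range(50)]

# "Vector space": one mapped value, reused every frame
vector_estimate = true_stop_line_m

# Count frames where the estimate moved enough to force a re-plan
REPLAN_THRESHOLD_M = 0.3
pixel_replans = sum(
    abs(b - a) > REPLAN_THRESHOLD_M
    for a, b in zip(pixel_estimates, pixel_estimates[1:])
)
vector_replans = 0  # the anchor never moves, so the path never re-plans

print(pixel_replans, vector_replans)
```

The noisy input forces dozens of re-plans over 50 frames; the fixed anchor forces none, which is the wiggle-free path-planning benefit described above.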
So, out of curiosity, what about the output of a new feature like the creepwall indicates to you it's built from previous car data rather than a real-time inference? (not a challenge, honest question).
The Model 3 packs are backfilled with Intumescent Goo which was supposed to help prevent such fires to begin with. (Or at least they were at one point in their design)
I assume that's why Model 3 fires are rare, given the run rate of the cars.
Fitting video, since that gun is not even real but a CO2-powered airsoft gun... lol.

Video of TSLAQ trying to get some of my shares in June....
That does make sense in terms of the precision gained by mapping markers to vector space as opposed to pixel space. And it meshes well with the previous AI Day info we have regarding merging the multiple camera sources into a single vector space used by the NNs.
Does that help?
Is this kind of like HD mapping?
Does that help?