
Fibre versus Starlink Discussion

Did you read the previous posts?

That's crazy, I was thinking the same thing! It definitely seems like you missed at least one post--not sure if you had a browser issue or whatever. Check it out here.

Anyway, since I have some extra time today:

Let's first acknowledge that you haven't provided any meaningful response to:
1. What will average internet user load be in 10 years?
2. What will average internet user latency be in 10 years?
3. What does it cost to run fiber to sparsely populated users in rural areas? Cost any method you like.

So, again, feel free to provide some actual baseline position on those. It will make this much easier. Honest.

Moving on to something that can actually advance the conversation here:

And let's compare to the cost of Starlink. Satellites need replacing every few years.

In very round numbers, Starlink is going to cost $10B for the initial buildout. For replacements it's easy to peg a gen-N constellation+launch at less than $1B (~4000 sats @ $100k = $400M, which leaves $600M for launch, which is closer to F9 cost than Starship). Assuming 5 year constellation life and round mathing, in 30 years that's $15B in capex.
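
For anyone who wants to poke at those assumptions, here's a quick sketch of that capex arithmetic (the $10B buildout, ~$1B per replacement generation, and 5 year constellation life are the round numbers above; everything else is just the math):

```python
# Back-of-the-envelope Starlink capex over 30 years, using the round
# numbers above. Tweak any of these and see how the total moves.

initial_buildout = 10e9        # $10B initial constellation + ground buildout
sats_per_gen = 4000            # satellites per replacement generation
cost_per_sat = 100e3           # $100k per satellite -> $400M per generation
launch_budget = 600e6          # remainder of the ~$1B per-generation cap

gen_cost = sats_per_gen * cost_per_sat + launch_budget   # ~$1B per generation
replacement_gens = 30 // 5 - 1                           # 5 replacement generations in 30 years
total_capex = initial_buildout + replacement_gens * gen_cost

print(f"Replacement generation cost: ${gen_cost / 1e9:.1f}B")
print(f"30 year capex: ${total_capex / 1e9:.0f}B")        # ~$15B
```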

Note that most of the lifetime predictions for fiber are in the 20-25 year range, which is why I'm using 30 years.

Pivoting to the number of subscribers, and noting that folks here will be quick to point out that my predictions are much less optimistic than some: IMHO the first ~5 years of Starlink will probably average 2-3M subscribers total in the US, ramping up to a steady state of maybe as much as 10M. I'm going to call it a 30 year average of 8M. As I've contended [way] upthread, the ROW subscriber base likely won't match that of the US, and so for round numbers let's call it 15M average global subscribers over that 30 years. That makes the Starlink math pretty straightforward at $15B for 15M customers over 30 years or, in the US, $8B for 8M customers.

As for fiber data, let me do your homework for you. While we can certainly modify based on input from others with more insight, this random googs find says fiber hookup is $600/user and rollout is $20k/mile. If we normalize total cost to the $8B for 8M subscribers we spent on Starlink in the US, that math basically backs out to being able to run ~150k miles of fiber. Or...spread over 8M locations, an average of 100 feet per subscriber.

Now I won't claim to be an expert on rural US geography by any means, but...100 feet per rural subscriber doesn't quite seem like it will get the job done. Even if we order-of-magnituded the rollout of fiber down to $2k/mile we're still only at ~1000 feet per subscriber to match Starlink cost. Given that we're talking about 8 million of the most rural subscribers in the US, that's going to be a hard case to close. I'd guess each subscriber would need at least an average of 1 mile of fiber, which of course nukes the whole cost story for fiber.
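
To make that "feet per subscriber" math easy to poke at, here's the same back-out in a few lines, using the $600/hookup and $20k/mile numbers from that link (plus the $2k/mile sensitivity case) against the $8B / 8M subscriber budget:

```python
# How many miles (and feet per subscriber) an $8B budget buys after the
# per-home hookup cost, at the two rollout costs discussed above.

budget = 8e9               # normalized to the US Starlink figure above
subscribers = 8e6
hookup_cost = 600          # $ per subscriber, from the linked estimate
FEET_PER_MILE = 5280

for cost_per_mile in (20_000, 2_000):
    rollout_budget = budget - subscribers * hookup_cost   # ~$3.2B left for the rollout
    miles = rollout_budget / cost_per_mile
    feet_per_sub = miles * FEET_PER_MILE / subscribers
    print(f"${cost_per_mile:,}/mile -> {miles:,.0f} miles, ~{feet_per_sub:,.0f} ft per subscriber")

# $20,000/mile -> 160,000 miles, ~106 ft per subscriber
# $2,000/mile -> 1,600,000 miles, ~1,056 ft per subscriber
```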

I'll also note that I'm not factoring opex and instead calling it a wash between Starlink and fiber. The $30/user-month for fiber is a HUGE number for 8 million subscribers (like, close to $100B) and is almost certainly higher than Starlink's otherwise more automated and consolidated ops, but opex feels like too much of a rat hole right now so we'll just give fiber the gimme for now.
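
For scale, the opex number I'm hand-waving away works out roughly like this (my arithmetic, using the $30/user-month figure above):

```python
# Rough 30 year fiber opex at $30/user-month for 8M subscribers.
total_opex = 30 * 12 * 30 * 8e6
print(f"${total_opex / 1e9:.0f}B")   # ~$86B, i.e. "close to $100B"
```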

So in conclusion, while fiber makes a lot of sense for a lot of people (and nobody's contested otherwise), from a cost perspective it clearly does not make sense for the primary Starlink subscriber base.



Before I move on, since we're on the subject of adding new topics, let's also contemplate the timeline associated with rolling fiber out to those 8M people. An infrastructure buildout of that magnitude--rolling cable to every corner of the USA--would require significant effort from the public sector and, if we're honest, would probably take the better part of 5 years just to make it out of the legislative phase. Layer another 5 years for procurement and actual buildout and we're at 10 years before 8 million people get service...compared to 5 years for those same folks to receive Starlink service.

And as a corollary, while I think everyone would agree that hanging fiber on existing poles is a great idea, any rational person would agree that it will be a HUGE lift to green light hanging fiber across a material percentage of municipalities spread across the USA on any kind of reasonable timeline. So...you're stuck between a pretty substantial rock and a hard place for the rural fiber rollout, where the choices are a) pay to trench or b) wait to hang.

Allow me to add a 4th point to the mix: The environmental impact. Starlink is not environmentally friendly. Disposable satellites, launched from the ground with huge energy requirements and massive emissions every time. Even the transceivers are pulling 150W. We should be building a sustainable future.

Great, setting aside the goalpost moving, at least this is another new topic. Please quantify the environmental impact--via any metric you like--of Starlink servicing hard-to-access low density regions vs the environmental impact of rolling fiber to all of those low density regions.

To get you started (and these are by no means comprehensive lists):

For Starlink impact we have:
--Manufacturing of satellites & Rockets
--Manufacturing of facilities & equipment to build space hardware
--Transportation of satellites to the launch site(s)
--Expended fuel from the launcher (Raptor = water + CO2)
--Atomized mass of the satellites

For fiber, we have:
--Manufacturing of fiber
--Transportation of fiber throughout the country
--Manufacturing of facilities and construction equipment for fiber manufacturing and infrastructure buildout
--Physical environmental damage from infrastructure buildout
--"What about that tree frog?" environmental impact from infrastructure buildout
--Emissions from equipment used during the infrastructure buildout
--End-of-life fiber environmental impact from material disintegration

Let us know what you come up with. Feel free to make whatever assumptions you like, as long as they're rational and rooted in some fundamental truth.
 
Hmmm... My initial FIOS installation was Coax from the ONT to their on-prem router (which I reconfigured to be a bridge). I now have a twisted pair GigE connection straight from the ONT to my router/FW instead.

While I don't have records, I'm sure that earlier cable setup didn't add that kind of latency to the overall link, as I would get sub 10ms ping times to local speedtest servers.

That Coax was not using DOCSIS, it was using MoCA, which is basically modern Ethernet over cable-TV-grade coax.

The Fios Network Adapter leverages MoCA technology. MoCA, which stands for "Multimedia over Coax Alliance", delivers a high-speed Ethernet connection, but delivers it using your existing coaxial cable.

The DOCSIS signaling to deal with long distances and shared media is what adds to the latency.

-Harry
 
You don't understand at all. I'm saying your test methodology is imperfect, not that the goal has changed.

When did you say that more than 1-2ms ping to Google was unacceptable? What is your basis for that claim?

Can you explain why you have this affinity for the old T1 service? Why avoid comparing with modern fibre?

This was your statement earlier in the thread:

On fibre I'm seeing sub 1ms pings to sites like Google, which is a small fraction of the time it takes the signal to even get to the satellite. It should hopefully be obvious that if it has to go first up and then back down it's never ...

I took that as the goal post was "low latency", since StarLink has proven sub 30ms, and is aiming for 20ms once the constellation is more fully deployed.

The dual bonded T1 (yes, I know it's old, but it's what I have to work with for that customer) has FAR lower latency vs HFC, and lower latency vs StarLink. If I could get the customer off the dual T1 setup today I would; heck, at this point I might even do the installation cutover for free to get them off it...

And I have compared with modern Fiber, both GPON and Active Business Fiber.

I was attempting to show that latency is NOT the only factor, and that you can have low latency that sucks horribly.

Would I take it for ssh and other low-bandwidth tasks? Probably, vs GEO sat. But I would rather have 100Mbit plus double the latency vs 3Mbit and 16ms latency.
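
To put a hypothetical number on the "low latency that sucks horribly" point, here's a quick sketch (the 10 MB payload size is my own illustrative choice; the link speeds and latencies are the ones being compared above):

```python
# Time to move a 10 MB payload over the two links: latency barely matters
# once the transfer is bandwidth-bound.

payload_bits = 10 * 8 * 1e6    # 10 MB

links = [
    ("dual T1, ~16 ms latency", 3e6, 0.016),
    ("100 Mbit, double the latency", 100e6, 0.032),
]
for name, bps, latency in links:
    total = latency + payload_bits / bps
    print(f"{name}: ~{total:.1f} s")

# dual T1, ~16 ms latency: ~26.7 s
# 100 Mbit, double the latency: ~0.8 s
```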

-Harry
 
That's exactly what I said. And I also noted that it's not just latency, but the variability of that latency.

Not particularly responding to this quote, but to the larger thread and this conversation.

I'm trying to figure out what you are arguing FOR @banned-66611 and I'm having trouble figuring that out. There are lots of Starlink utility and details that you're arguing against - I get that. But what are you arguing for?

Are you arguing for stopping SpaceX from wasting a tremendous amount of their capital on a service that can't work? It's SpaceX's money, and their investors - if they wish to waste it on a failed concept, isn't that their choice? It's the nature of capitalism and investment - some work, some don't, and this is how boundaries get tested and pushed out to new boundaries.

Or a service that will function, but be a distraction that will delay the real internet that we need, so we need to shut it down now?


What are you arguing for (rather than against)?
 
Double that in fact, Nuro Hikari 20GS has been offering 20Gbps symmetric lines in some cities for a few months now.

The standards for 20Gbit PON are still very early in development let alone deployment.

The Nuro Hikari 20GS service is running 20Gbit on the PON side (which is shared), but has a maximum handover of 10Gbit (wired) and 4.8Gbit (wireless, if you are in a perfect RF environment). NURO 光 20Gs

Google translate (which appeared to do a decent job of the critical information, I will fully admit I can not read the original Japanese): Google Translate

I don't know if I would call a 20Gbit service with 10Gbit handover "20 Gbit", but I guess that is a marketing decision.

You could be on 20Gbit service with a bad split ratio (1:128 is in the specs), and heavy users on it, and get far worse service vs a 10Gbit service with a better split ratio (1:32 or 1:16), and the same heavy users. I would expect 1:128 to be deployed in high density environments, as the number of splits reduces the overall length permitted.
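
A quick sketch of that split-ratio point, assuming the worst case where every subscriber on a strand is pulling data at once (the line rates and split ratios are the ones mentioned above):

```python
# Worst-case per-subscriber downstream when a PON strand is fully loaded:
# a fatter pipe with a bad split ratio can be no better than a slower one.

scenarios = [
    ("20Gbit PON, 1:128 split", 20_000, 128),
    ("10Gbit XGS-PON, 1:32 split", 10_000, 32),
    ("GPON 2.5Gbit, 1:16 split", 2_500, 16),
]
for name, downstream_mbps, split in scenarios:
    print(f"{name}: ~{downstream_mbps / split:.0f} Mbps per subscriber")

# 20Gbit PON, 1:128 split: ~156 Mbps per subscriber
# 10Gbit XGS-PON, 1:32 split: ~312 Mbps per subscriber
# GPON 2.5Gbit, 1:16 split: ~156 Mbps per subscriber
```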

I could see a justification to build out 20Gbit and do more splits, and fewer full runs of fiber back to the OLT, but in my mind the "long term" investment is the Fiber itself, and the "short term" investment is the equipment to light the fiber.

Based on what I have seen from bandwidth usage and overall Internet Engineering, neither 10Gbit nor 20Gbit service makes much sense right now. If I was deploying a brand new PON network today, I would use GPON, with 2.5G down, 1G up capabilities, and sell 1Gbit service on it. I would keep my split ratios as low as I could (1:8 or 1:16, possibly 1:32), and plan to add/move to XGS-PON in the future, as it can share the same fiber with GPON since it uses a different wavelength.



The cost of GPON equipment (2.5G/1G) is just soooo low right now that it makes sense to take the savings on the equipment, invest the $ in the fiber and deployment, and wait for the XGS-PON equipment to come down in cost.

If it required a "forklift" to upgrade, i.e. changing out all of the ONTs and OLTs, I could see "future proofing", but since GPON and XGS-PON can co-exist on the exact same fiber strand, you would put in an XGS-PON OLT, and only cross connect a fiber run when a subscriber on that run wants to upgrade. You get that subscriber a new ONT, an upgraded service level (and increased $ per month revenue), and as you get to a small number of GPON customers on that fiber strand, you offer the remaining customers a "free" upgraded ONT, with the same service level (and revenue) that they had, and remove the GPON OLT from the strand. Eventually you retire the GPON ONT or use it with other customers.

I have looked into PON network design, as I have been trying to convince HOA and Home Builders that the legacy network providers need competition, but I have yet to be able to get any of them to accept the financial viability.

As I said before, I have nothing against Fiber to the home, in fact I have a FTTH subscription (one location), as well as HFC coax (another location) personally, and would gladly pay more $ per month to replace the HFC with FTTH, but it is not available at that location (yet? ever?).

Fiber networks have zero to do with StarLink's capabilities, viability, or usage. They do not exist for most of these users, and will not exist for at least the next 20 years if not longer. SpaceX is offering the "better than nothing" beta because that is exactly what the target users have: NOTHING.

I work from home (day job is 100% remote due to Covid), I teach my college courses from home (yes, remote teaching over HFC coax is NOT a problem), and I have zero issues due to standard latency. Has Cox Cable had some issues due to the stress the HFC network is under right now? Yes. In fact they just had to do 4 days of disruptive upgrades in multiple parts of Tucson.

-Harry
 
Not particularly responding to this quote, but to the larger thread and this conversation.

I'm trying to figure out what you are arguing FOR @banned-66611 and I'm having trouble figuring that out. There are lots of Starlink utility and details that you're arguing against - I get that. But what are you arguing for?

Are you arguing for stopping SpaceX from wasting a tremendous amount of their capital on a service that can't work? It's SpaceX's money, and their investors - if they wish to waste it on a failed concept, isn't that their choice? It's the nature of capitalism and investment - some work, some don't, and this is how boundaries get tested and pushed out to new boundaries.

Or a service that will function, but be a distraction that will delay the real internet that we need, so we need to shut it down now?


What are you arguing for (rather than against)?


LOL.....I thought the same thing, and decided my last rebuttal with him would end my conversation with him. As the quote from George Bernard Shaw goes (though some attribute it to Mark Twain):
[attached image of the quote]
 
Okay, last try.

Let's first acknowledge that you haven't provided any meaningful response to:
1. What will average internet user load be in 10 years?
2. What will average internet user latency be in 10 years?
3. What does it cost to run fiber to sparsely populated users in rural areas? Cost any method you like.

My crystal ball is on the fritz today, but as I already told you twice now, it seems unlikely that the current trend of increasing bandwidth requirements and increasingly strict latency requirements will reverse.

As an example 8k broadcasts have been running since last year, with satellite broadcasts using 100Mbps and fibre using 200Mbps. That's per stream of course, and it is expected to rise over time as more broadcasts go to 120 fps. Both the XBOX Series X and PS5 are capable of 8k output too, along with Nvidia's latest graphics cards, which are all used by various streaming services. Those streaming services work best with sub 16ms latency, ideally lower as local play is pushing for sub 1ms input latency.
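
As a hypothetical worked example of where that heads for household load (the 200 Mbps per-stream figure is the fibre 8K number above; the concurrent-stream counts are my own assumptions):

```python
# Sustained downstream needed for N concurrent 8K streams at the per-stream
# rate quoted above, before any other household traffic.

per_stream_mbps = 200   # fibre-quality 8K broadcast, per the figures above
for streams in (1, 2, 4):
    print(f"{streams} stream(s): {streams * per_stream_mbps} Mbps sustained")

# 1 stream(s): 200 Mbps sustained
# 2 stream(s): 400 Mbps sustained
# 4 stream(s): 800 Mbps sustained
```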

In very round numbers, Starlink is going to cost $10B for the initial buildout. For replacements its easy to peg a gen-N constellation+launch at less than $1B (~4000 sats @$100k = $400M, which leaves $600M for launch, which is closer to F9 cost than Starship). Assuming 5 year constellation life and round mathing, in 30 years that's $15B in capex.

That's using SpaceX's somewhat dubious figures from 2018, but okay. So $500m/year just to maintain the constellation, plus ground station costs, plus regulatory costs (spectrum, licensing, compliance testing).

Note that most of the lifetime predictions for fiber are mostly in the 20-25 year range, which is why I'm using 30 years.

Not really, no. It's rated for 25 years typically but will generally last a lot longer. It's similar to solar PV and even the original copper wires used for telephone service. There is a huge difference between what the manufacturer rates them for (i.e. what the buyer can expect as a minimum) and what they actually do.

Currently there isn't much fibre over 30 years old so the real lifespan is something of an unknown, and accelerated testing gets less and less accurate the longer the timespan.

As for fiber data, let me do your homework for you. While we can certainly modify based on input from others with more insight, this random googs find says fiber hookup is $600/user and rollout is $20k/mile. If we normalize total cost to the $8B for 8M subscribers we spent on Starlink in the US, that math basically backs out to being able to run ~150k miles of fiber. Or...spread over 8M locations, an average of 100 feet per subscriber.

Do you know anything about how fibre works? You don't string an individual strand from the ISP to every house.

Most of the infrastructure is already there. In most places it's fibre right up to the "last mile" (not a literal mile) already. They don't need one strand per house either, one strand has so much bandwidth they just run it to the block and then split it out for a dozen or more homes.

Of course there comes a point where that remote cabin in the woods is better served by satellite or some other wireless system. But for anywhere with even a small number of buildings and existing infrastructure (i.e. poles or ducting) fibre is relatively inexpensive. That's also why it's mostly fibre up to the last mile, when the copper needs replacing they throw in fibre now. It's not even worth trying to find and fix the break in the cable, just rip it out and replace it. That also gives them an opportunity to sell bandwidth to things like cell towers.

Before I move on, since we're on the subject of adding new topics, let's also contemplate the timeline associated with rolling fiber out to those 8M people. An infrastructure buildout of that magnitude--rolling cable to every corner of the USA--would require significant effort from the public sector and, if we're honest, would probably take the better part of 5 years just to make it out of the legislative phase.

Political failures are what keep the US in the 3rd world for broadband, agreed.

Great, setting aside the goalpost moving, at least this is another new topic. Please quantify the environmental impact--via any metric you like--of Starlink servicing hard-to-access low density regions vs the environmental impact of rolling fiber to all of those low density regions.

FTTP is around 56kWh/year/subscriber for the necessary equipment. For the Starlink transceiver *alone* it's over 1300kWh/year.

https://www.europacable.eu/wp-content/uploads/2020/11/Prysmian-study-on-Energy-Consumption.pdf
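
For what it's worth, the transceiver side of that claim is easy to sanity check from the ~150W draw mentioned earlier in the thread (the 56 kWh figure is the FTTP number from the linked study):

```python
# Annual energy for a ~150W terminal running 24/7 vs the FTTP per-subscriber
# figure from the study linked above.

terminal_watts = 150
starlink_kwh_per_year = terminal_watts * 24 * 365 / 1000   # ~1314 kWh
fttp_kwh_per_year = 56

print(f"Starlink terminal: ~{starlink_kwh_per_year:.0f} kWh/year")
print(f"FTTP equipment:    ~{fttp_kwh_per_year} kWh/year "
      f"(~{starlink_kwh_per_year / fttp_kwh_per_year:.0f}x difference)")
```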
 
This was your statement earlier in the thread:

On fibre I'm seeing sub 1ms pings to sites like Google, which is a small fraction of the time it takes the signal to even get to the satellite. It should hopefully be obvious that if it has to go first up and then back down it's never ...

I took that as the goal post was "low latency", since StarLink has proven sub 30ms, and is aiming for 20ms once the constellation is more fully deployed.

Ah, I see why you were confused then, that was an incorrect assumption.

What I would say is that at this point we can't really know how bad latency, and more importantly variability, will be on Starlink. It needs a full constellation, more ground stations, and more subscribers. Even then it will depend heavily on where you are in the world.
 
I'm trying to figure out what you are arguing FOR @banned-66611 and I'm having trouble figuring that out. There are lots of Starlink utility and details that you're arguing against - I get that. But what are you arguing for?

Are you arguing for stopping SpaceX from wasting a tremendous amount of their capital on a service that can't work? It's SpaceX's money, and their investors - if they wish to waste it on a failed concept, isn't that their choice? It's the nature of capitalism and investment - some work, some don't, and this is how boundaries get tested and pushed out to new boundaries.

Or a service that will function, but be a distraction that will delay the real internet that we need, so we need to shut it down now?


What are you arguing for (rather than against)?

I support SpaceX here, this will be a valuable service for millions of people around the world. The environmental issues are a concern but bringing broadband to more parts of the globe is a noble cause.

There are two issues though. Firstly I'm not sure it's going to be as useful as some people think, e.g. due to the power consumption and size of the transceiver.

Secondly, and this is the big one, to avoid further widening the digital divide the goal must be to get almost everyone up to at least gigabit speeds over fibre. The speeds people in rural areas are now complaining about were quite good a decade or two ago. I remember getting 0.5Mbps and being impressed by the download progress bars zipping along. But that wouldn't cut it today, and 50-100Mbps won't cut it in 10-20 years either. In fact that's already too slow for many people, especially in households with multiple users.

Too often we see people get left behind like this. We actually have a great opportunity now to spread out, to go live in more rural areas and work from home. The technology is finally there, and the pandemic proved it can be done. The key to unlocking it is fibre.

This being a Tesla forum I'd liken it to the difference between having a big battery and a supercharger network, and a small battery. An old Nissan Leaf is a great car, suits many people, but you do worry about range. You won't go and live out in the sticks with one because you aren't sure it will always meet your needs.
 
Okay, last try.



My crystal ball is on the fritz today, but as I already told you twice now, it seems unlikely that the current trend of increasing bandwidth requirements and increasingly strict latency requirements will reverse.

As an example 8k broadcasts have been running since last year, with satellite broadcasts using 100Mbps and fibre using 200Mbps. That's per stream of course, and it is expected to rise over time as more broadcasts go to 120 fps. Both the XBOX Series X and PS5 are capable of 8k output too, along with Nvidia's latest graphics cards, which are all used by various streaming services. Those streaming services work best with sub 16ms latency, ideally lower as local play is pushing for sub 1ms input latency.



That's using SpaceX's somewhat dubious figures from 2018, but okay. So $500m/year just to maintain the constellation, plus ground station costs, plus regulatory costs (spectrum, licensing, compliance testing).



Not really, no. It's rated for 25 years typically but will generally last a lot longer. It's similar to solar PV and even the original copper wires used for telephone service. There is a huge difference between what the manufacturer rates them for (i.e. what the buyer can expect as a minimum) and what they actually do.

Currently there isn't much fibre over 30 years old so the real lifespan is something of an unknown, and accelerated testing gets less and less accurate the longer the timespan.



Do you know anything about how fibre works? You don't string an individual strand from the ISP to every house.

Most of the infrastructure is already there. In most places it's fibre right up to the "last mile" (not a literal mile) already. They don't need one strand per house either, one strand has so much bandwidth they just run it to the block and then split it out for a dozen or more homes.

Of course there comes a point where that remote cabin in the woods is better served by satellite or some other wireless system. But for anywhere with even a small number of buildings and existing infrastructure (i.e. poles or ducting) fibre is relatively inexpensive. That's also why it's mostly fibre up to the last mile, when the copper needs replacing they throw in fibre now. It's not even worth trying to find and fix the break in the cable, just rip it out and replace it. That also gives them an opportunity to sell bandwidth to things like cell towers.



Political failures are what keep the US in the 3rd world for broadband, agreed.



FTTP is around 56kWh/year/subscriber for the necessary equipment. For the Starlink transceiver *alone* it's over 1300kWh/year.

https://www.europacable.eu/wp-content/uploads/2020/11/Prysmian-study-on-Energy-Consumption.pdf

I think we actually agree more on this than anyone might think when reading this thread. Is FTTH the best possible broadband solution? Yes. Is it going to happen quickly and ubiquitously? No. Should it happen more than it is? Yes.

We differ a bit on thoughts of how StarLink will evolve over time. It could gain enhancements like laser-based ISLs and improvements to spectrum utilization, or it could wither and die an early death. To me early death is 10 years (i.e. after the first set of replacement satellites) or sooner. Long life is 30+ years, including at least 5 generations of satellites. Will the user terminals get better? In my opinion, yes. The power usage will go down, the installation (already easy) will get easier, etc.

Is large scale rural Fiber deployment going to be able to happen in 10 years, 20 years, 30 years? My opinion is no for the US and Canada. If we really really push hard we might get to 50% of the US population in 5-10 years ("50% of US homes still won't have fiber broadband by 2025, study says"). I suspect that 50% number is actually not realistic either.

You know the availability and facilities in Japan. Many of us know the availability and facilities in the US and Canada.

A lot of this is ineffective government, government regulations, utilities charging high prices for pole attachment (average was about $15.50/year/pole https://www.fcc.gov/sites/default/files/ad-hoc-commitee-survey-04242018.pdf), the cost of having electric utility approved contractors install the fiber, and an overall thought of "it will eventually happen if we give enough free $ to the telco and cable cos".

Google tried pushing several mid size city locations with FTTH networks, and wound up stopping due to the extremely high costs. Google accomplished what they wanted, scare the crap out of the telco and cable cos, to get them to start moving (slowly) with faster speeds.

It took a project over 5 years to move most of the schools in Tucson from a single T1 with mixed voice and data usage to a 10 gig fiber backbone with 1 gig handover at each school and VoIP backhaul for the telephone switches at the schools. I was part of the evaluation process. The 10 year network bids were anywhere from $15M USD to over $30M USD (before e-Rate discount), for just the fiber optic network services for ~108 locations. That did not include the school network upgrades or the telephone system upgrades to VoIP-enable the backhaul.

The schools were stuck on the single (not even bonded) T1 per site for well over a decade past when they should have been, because the local telco kept trying to push a very expensive network that the district could not afford. It would have been about $5M/year without the discounts, about $1M/year with the discounts. The eventual network as deployed was a bit over $1.5M/year without discounts, or ~$300K/year with the e-Rate discounts. The large telcos view the e-Rate program and the Universal Service Fees that fund it as a way of diverting the "tax" back into their pockets for overpriced services to schools and libraries.
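
For anyone unfamiliar with e-Rate, the discount arithmetic implied by those numbers looks roughly like this (the ~80% discount rate is my inference from the figures quoted, not something stated above):

```python
# e-Rate discount implied by the quoted before/after prices.

assumed_discount = 0.80   # inferred: $5M -> ~$1M and $1.5M -> ~$300K both fit ~80%

for name, list_price in (("telco's proposed network", 5.0e6),
                         ("network as deployed", 1.5e6)):
    net = list_price * (1 - assumed_discount)
    print(f"{name}: ${list_price / 1e6:.1f}M/yr list, ~${net / 1e6:.2f}M/yr after e-Rate")

# telco's proposed network: $5.0M/yr list, ~$1.00M/yr after e-Rate
# network as deployed: $1.5M/yr list, ~$0.30M/yr after e-Rate
```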

Do we stop pushing for Fiber where we can? No. Look at the FCC maps (Federal Communications Commission): that is the latest award for rural broadband in the US. Anything "gigabit low latency" is fiber. Anything "above baseline low latency" is almost entirely SpaceX/StarLink. There is a lot of coverage with both of these, but the number of actual sites/houses in each block can be tiny. Many around Tucson are "8 houses".

Basically, I think we both can agree that Fiber is "better", the question is how much better, and how doable, and that even Rural Japan is far easier to deal with vs the vast amounts of middle of nowhere in the US and Canada. Japan is also far more motivated to make this work, both as a people and the government. For $DAYJOB, I have sister teams in Japan and work very closely with them.

The US does not have either the personal push, or the government interest to make it work at least in the next 10 years at scale.

Longer term, SpaceX will either have to keep improving the technology, or they will just let the network die when they can no longer keep enough customers to support the costs of replacement satellites. The constellation is designed to self de-orbit in a reasonable amount of time, and with minimal impact from the de-orbit (i.e. nothing hitting the ground). StarLink is at "worst" a bridge for the next 10+ years, a bridge that will do far better vs any other bridge option during that time.

-Harry
 
Ah, I see why you were confused then, that was an incorrect assumption.

What I would say is that at this point we can't really know how bad latency, and more importantly variability, will be on Starlink. It needs a full constellation, more ground stations, and more subscribers. Even then it will depend heavily on where you are in the world.

Agreed, location will matter, the number of users will matter, the size of the constellation will matter.

That being said, really good ISPs do a LOT of public and private peering to exchange traffic as locally as possible. Heck there is even a non-profit Internet Exchange in Puerto Rico. PeeringDB

To get to a location to peer, most ISPs have to lease fiber or wavelengths to reach the facility. SpaceX owns their own network; they just have to lease roof space and power for a ground terminal, and they can peer at almost any data center, almost anywhere in the world.

If the SpaceX network engineering team does this right, they will eventually be the most interconnected network in the world, especially once they add ISLs, but even before that, the satellite overhead above Tucson can also see Las Vegas and Los Angeles, and can directly peer with other networks in both locations.

-Harry
 
Google tried pushing several mid size city locations with FTTH networks, and wound up stopping due to the extremely high costs. Google accomplished what they wanted, scare the crap out of the telco and cable cos, to get them to start moving (slowly) with faster speeds.

Excellent post. Just wanted to add to the above:
Google stopped their fiber project because they ran into a TON of "Right of Way" issues by local telcos, existing fiber providers, etc. They literally got bogged down in the legal morass more than anything, not because of any technical problems.

But you are right, their start to doing this did motivate several traditional Telcos to get off their collective asses and start offering FTTH.
 

16ms is 1 frame at 60 FPS, which is the target for fast paced games these days.

For local play many gaming peripherals are moving to 1,000Hz (1kHz) update rates, e.g. mice and keyboards. Consoles tend to lag behind a bit but still target sub 16ms for input.

If you are streaming a game you of course want it to be as close to the local experience as possible, so if you add an additional 16ms of lag to any inputs and the feedback from the server (i.e. sound and the compressed video stream) it is noticeable.
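
For reference, the frame-time arithmetic behind that 16ms figure, plus a couple of other common refresh rates:

```python
# Milliseconds per frame at common refresh rates; the streaming lag being
# discussed lands on top of whichever of these the game targets.

for fps in (60, 120, 144):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")

# 60 fps -> 16.7 ms per frame
# 120 fps -> 8.3 ms per frame
# 144 fps -> 6.9 ms per frame
```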
 
If the SpaceX network engineering team does this right, they will eventually be the most interconnected network in the world, especially once they add ISLs, but even before that, the satellite overhead above Tucson can also see Las Vegas and Los Angeles, and can directly peer with other networks in both locations.

They do have certain advantages in that area, although also disadvantages, like being subject to environmental effects on their connections. I think consistency will be their biggest issue: it will keep working but performance will vary considerably over time.
 
Is large scale rural Fiber deployment going to be able to happen in 10 years, 20 years, 30 years? My opinion is no for the US and Canada. If we really really push hard we might get to 50% of the US population in 5-10 years ("50% of US homes still won't have fiber broadband by 2025, study says"). I suspect that 50% number is actually not realistic either.

I agree you have huge problems in the US and Canada for political reasons. What I'm saying is that instead of throwing public money at Starlink you would be better off trying to fix those issues.

I don't know about Canada but certainly in the US there is so much corruption and fear of anything remotely socialist that doing what is necessary is probably very difficult, but then again you did manage to get Obamacare so maybe it's not beyond the realms of possibility.

Basically, I think we both can agree that Fiber is "better", the question is how much better, and how doable, and that even Rural Japan is far easier to deal with vs the vast amounts of middle of nowhere in the US and Canada. Japan is also far more motivated to make this work, both as a people and the government. For $DAYJOB, I have sister teams in Japan and work very closely with them.

I think you probably underestimate the reach of existing fibre in the US. Yes there are some communities where Starlink may be the only option, but a lot of very rural areas already have fibre links nearby to provide cell coverage.
 
Engage my brain? Says the guy that cannot defend his arguments.

MOST fiber is trenched, not up on poles. Why? Because of tornadoes, hurricanes, morons driving cars into poles, etc. you don't want the risk of a fiber break of crap strung on poles. MOST fiber in Japan is actually underground, NOT on poles, you nit wit.

I run a business that GUARANTEES 5-9s (99.999%) uptime to different datacenters on 3 continents. It takes HOURS to repair fiber (it's a bitch to splice), so no one in their right mind runs fiber on poles except to residences, where you don't have uptime guarantees.

Why don't YOU engage YOUR brain and do some research before you spout out your personal, uneducated opinions as FACT.

To back up @bkp_duke, I've recently been involved with several Fiber to the Home (FTTH) installs over the past few years. All have been buried, even with "available" poles. So it isn't just for business-class reliability reasons. So why bury instead of using poles?

First, poles aren't as cheap as they appear. You have to do an engineering analysis for EACH pole to figure out if the new load of the fiber and steel messenger cable won't send it over the edge. And no, you can't just get rid of the copper while you do this, since the copper is still being used! There are a lot of 90-year-old retirees that like their POTS service, thank you very much. Then you have to rent pole access. The electric company isn't going to give you access to their poles for free.

Second, municipalities and counties would like to eventually get rid of unsightly poles. They look much more favorably on underground projects versus poles.

Third, what bkp_duke said, reliability is important when building new. Cars hitting poles is a pretty common occurrence, not to mention wind storms, etc.

Finally, a point about fiber cost. It makes sense economically in dense urban and suburban neighborhoods. It costs a LOT more when you are talking semi suburban and rural. We just did a FTTH project in a semi-suburban 2,000 home HOA and it cost $10,000 per home just for the backbone. Then homeowners had to pay an average of about $1,500 to get the fiber into their houses. That is what is known as economically INFEASIBLE for an ISP to recoup their costs over a reasonable time period, which is why the HOA homeowners had to pony up the money themselves.

Contrast this with an urban build out literally next door (the territories almost touch), and there the fiber construction company was able to pay for construction of the entire 6,000 home city (yes, small one), and the ISP is able to provide gigabit fiber Internet for $90/month.
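
To put rough numbers on "economically infeasible": if the ISP could charge the same ~$90/month as the urban build next door (my assumption, purely for comparison), the HOA math looks something like this, ignoring operating costs entirely:

```python
# Naive payback period for the semi-suburban HOA build described above.

capex_per_home = 10_000 + 1_500   # backbone + in-home drop, from the post
monthly_revenue = 90              # assumed, borrowed from the urban example

months = capex_per_home / monthly_revenue
print(f"~{months:.0f} months (~{months / 12:.1f} years) just to recover construction cost")

# ~128 months (~10.6 years), before any operating or financing costs
```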