2) Low latency is not what you think it is. Here are some pings to 8.8.8.8, which uses BGP anycast to be as close as possible, network-wise, to as many end users as possible:
To add a few more data points:
Frontier (formerly Verizon) FiOS (Garland TX, nominally 80Mbit up/down):
ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=8.67 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=116 time=9.14 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=116 time=11.1 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=116 time=9.90 ms
Frontier (formerly Verizon) FiOS (Sachse TX, nominally... 300Mbit up/down? I don't remember):
ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=7.70 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=116 time=3.73 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=116 time=3.63 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=116 time=6.32 ms
Linode (this time from an instance in Dallas TX):
ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=121 time=0.988 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=121 time=1.05 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=121 time=1.07 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=121 time=1.05 ms
AT&T Dedicated Fiber (Plano TX, fiber to the office suite itself, split locally in our server room by an AT&T box into a 100Mbit-capped data service over a 1Gbit Ethernet interface to our router, plus a T1 to the PBX):
ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=2.67 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=116 time=2.55 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=116 time=2.54 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=116 time=2.53 ms
Also, I won't post the full traceroutes, but for reference: the two Frontier connections and the AT&T were each 9 hops, and the Linode was 7.
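Summarizing those four-sample runs side by side (the values are copied straight from the ping output above; four pings is far too small a sample to be rigorous, but the ordering is clear):

# RTT samples in milliseconds, copied from the runs quoted above
samples_ms = {
    "Frontier FiOS (Garland TX)":      [8.67, 9.14, 11.1, 9.90],
    "Frontier FiOS (Sachse TX)":       [7.70, 3.73, 3.63, 6.32],
    "Linode (Dallas TX)":              [0.988, 1.05, 1.07, 1.05],
    "AT&T Dedicated Fiber (Plano TX)": [2.67, 2.55, 2.54, 2.53],
}
for name, rtts in samples_ms.items():
    print(f"{name}: min {min(rtts):.2f} / avg {sum(rtts)/len(rtts):.2f} / max {max(rtts):.2f} ms")

That puts Linode around 1 ms, the AT&T fiber around 2.6 ms, and the two Frontier lines around 5.3 and 9.7 ms on average.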
I just downloaded 377MB on that bonded T1: Fetched 377 MB in 39min 24s (160 kB/s)
Cox has far higher latency than the bonded T1, but I would NEVER want to be stuck on that bonded T1, ever.
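The quoted rate checks out, and it puts the T1 numbers in perspective. A quick sanity check (assuming the tool's "MB" means decimal megabytes, as apt reports):

# 377 MB in 39 min 24 s, as quoted above
size_bytes = 377e6
duration_s = 39 * 60 + 24                                 # 2364 s
print(f"{size_bytes / duration_s / 1e3:.0f} kB/s")        # ~159 kB/s
print(f"{size_bytes * 8 / duration_s / 1e6:.2f} Mbit/s")  # ~1.28 Mbit/s

# For scale: one T1 is 1.544 Mbit/s (~193 kB/s raw), so even a clean
# two-T1 bond tops out near 3.1 Mbit/s (~386 kB/s).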
As a counterpoint anecdote showing that latency does matter to some extent (and a situation where even a single T1 would have been a blessing): many years ago, before the cable providers had moved far enough up the mountain to provide cable TV, let alone Internet, my uncle's place at Lake Tahoe had DirecPC/HughesNet. This was when most people had dialup, and not necessarily even '56k' dialup. It was marginally serviceable if you didn't attempt to click through pages too quickly, and only then with a precaching proxy (which would never work in today's SSL world) that requested linked pages and assets ahead of the browser to reduce perceived latency. It was completely unserviceable for any kind of remote work (i.e. SSH), and of course gaming was right out. When I was there, I would opt to wait until I could hog the phone line and dial into my ISP at a much 'slower' speed. Pages loaded faster on the satellite connection once they finally started, but the wait for anything to happen was maddening.
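For anyone who never experienced it, the physics alone explains most of that pain. A bent-pipe geostationary link crosses the GEO altitude four times per round trip (request up and down, reply up and down), which sets a hard floor before any terrestrial routing or processing:

# Minimum RTT through a geostationary bent-pipe link
C = 299_792_458            # speed of light in vacuum, m/s
GEO_ALT_M = 35_786_000     # geostationary altitude, m
print(f"GEO RTT floor: {4 * GEO_ALT_M / C * 1e3:.0f} ms")   # ~477 ms

Real slant ranges, queuing, and backhaul push typical GEO RTTs well past 600 ms, which is why interactive SSH was hopeless no matter what the throughput was.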
At various times in my life I've had access to most flavors of dialup, as well as a 56k leased line, dual-channel ISDN, and original ADSL, where 1.5Mbps/128kbps theoretical performance was considered good for the price... I'd take any of those over geosynchronous satellite delay. I'd be perfectly happy with the sort of response times Starlink seems to be delivering from LEO. These days I sometimes do remote work involving transatlantic SSH, and by the time my packets reach the middle of Europe and the response to my typing comes back, it can be quite irritating. Starlink's latency for day-to-day use will be fine, and if sat-to-sat links become a reality and they manage to route within Starlink as close to the destination as possible, it's very plausible my transatlantic scenario would improve considerably. That said, neither I nor the site in the EU would be likely candidates otherwise, since we both have local fiber service, so I'm unlikely to benefit unless Starlink starts doing backhaul for other ISPs.
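Rough numbers for the LEO and transatlantic points (using the ~5,570 km NYC-London great circle as a stand-in route; real fiber paths run longer, and the satellite path adds its own up/down hops and routing overhead):

C_VACUUM = 299_792_458       # m/s
C_FIBER = C_VACUUM * 2 / 3   # light in glass travels at roughly 2/3 c
LEO_ALT_M = 550_000          # Starlink's first shell, ~550 km
ROUTE_M = 5_570_000          # NYC-London great circle, stand-in route

# Bent-pipe LEO floor: four crossings of the shell altitude per round trip
print(f"LEO bent-pipe floor: {4 * LEO_ALT_M / C_VACUUM * 1e3:.1f} ms")       # ~7.3 ms
print(f"transatlantic fiber floor:  {2 * ROUTE_M / C_FIBER * 1e3:.0f} ms")   # ~56 ms
print(f"transatlantic vacuum floor: {2 * ROUTE_M / C_VACUUM * 1e3:.0f} ms")  # ~37 ms

So sat-to-sat laser links in vacuum could, in principle, undercut fiber on long paths by about a third, which is what makes the transatlantic improvement plausible.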
The mass being launched (60 satellites at a time on F9, far more on Starship) and the combined energy cost of those launches will eventually be less than the combined energy used to deliver the user terminals via FedEx/UPS/etc. Now think of all the energy that goes into a fiber deployment, underground or above ground: the truck rolls, the energy spent shipping the fiber, and so on.
Starlink averages this energy across a vast land mass, and thus achieves a very low energy cost per user.
-Harry
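To make the shape of that argument concrete, here is the amortization it implies, as a sketch you could plug real figures into. Every number below except the 60-per-launch count is a made-up placeholder, not a sourced value:

# Energy-per-user amortization: placeholder numbers for illustration only
LAUNCH_ENERGY_MJ = 6e6    # hypothetical: chemical energy of one F9 propellant load
SATS_PER_LAUNCH = 60      # from the comment above
USERS_PER_SAT = 20_000    # hypothetical subscriber share per satellite

per_user_mj = LAUNCH_ENERGY_MJ / (SATS_PER_LAUNCH * USERS_PER_SAT)
print(f"launch energy per user: {per_user_mj:.1f} MJ")   # 5 MJ with these placeholders
# For scale, diesel holds roughly 38 MJ per litre, which is the kind of
# number you'd weigh against truck rolls and terminal delivery on the ground.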
Not to mention, on the eco side: when Starship becomes regularly operational, they are quite likely to make their own fuel, either by building a giant solar/battery (and possibly wind) farm or by buying power generated that way, and then using the same methods to create methane and liquid oxygen as they would on Mars. This is partly because the scale of their needs makes self-production of propellant ideal (most commercial production is geared towards a lower grade with more impurities, so the price and supply of what they want favor producing it themselves), and partly to endurance-test a lot of the equipment (though it would be somewhat different for Mars applications, a lot can still be shared/learned). They can basically go "carbon neutral" on Starship (ignoring construction and such, which even then could in theory use green power) by capturing CO2 from the atmosphere and combining it with H2O to make methane and liquid oxygen. Technically it's not quite a wash, since they would be gathering CO2 near the ground in one area and then spreading it out over a larger, higher-altitude area, but even so, the ecological impact of methalox launches (fewer weird secondary reactions) is going to be much less than that of traditional kerolox launches (lots of weird secondary reactions).
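For reference, the propellant loop being described is the Sabatier process (CO2 + 4 H2 -> CH4 + 2 H2O) fed by water electrolysis (2 H2O -> 2 H2 + O2), which nets out to CO2 + 2 H2O -> CH4 + 2 O2. A quick mass balance shows why it lines up so neatly with a methalox engine:

# Net reaction: CO2 + 2 H2O -> CH4 + 2 O2 (Sabatier + electrolysis)
M = {"CO2": 44.01, "H2O": 18.02, "CH4": 16.04, "O2": 32.00}   # molar mass, g/mol
print(f"CO2 captured per kg CH4:   {M['CO2'] / M['CH4']:.2f} kg")      # ~2.74
print(f"water consumed per kg CH4: {2 * M['H2O'] / M['CH4']:.2f} kg")  # ~2.25
print(f"O2 produced per kg CH4:    {2 * M['O2'] / M['CH4']:.2f} kg")   # ~3.99

Raptor reportedly burns at an oxygen-to-fuel ratio of roughly 3.6:1 by mass, so the loop's ~4:1 oxygen yield slightly exceeds what the engine needs, and the CO2 captured going in is the CO2 the exhaust returns, hence the "carbon neutral" framing (energy input for capture and electrolysis aside).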