
SpaceX Internet Satellite Network: Starlink

To be clear, the signal strength will not increase with bigger satellites. The beam size and thus cell size will decrease with bigger satellites (specifically, bigger antennas), so the number of users in a beam will decrease and thus every user's share of the 17 or whatever Mbps will increase. The link will still suffer the same degree of losses from occlusions, Doppler compensation, etc.
Sure, link loss will not change, however...
A larger array only increases data rate at the phone if it allows more simultaneous satellite beams or a higher rate per beam. Otherwise, total bandwidth is unchanged even if spot size can shrink (which it might not, to maintain coverage).

A larger array improves received power at the satellite (which is likely the most marginal parameter). However, uplink data rate is usually less important than downlink (unless talking ACK datagrams).

A larger array reduces spot size on the ground which can help in handoff zones.

Do we know if the terrestrial power level is maxed out with the minis? If it isn't, a larger (or higher power) array can increase signal strength at the phone. This could boost data rate.
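
For a rough sense of the spot-size scaling, here's a back-of-the-envelope sketch (the frequency, altitude, and array sizes are just illustrative assumptions, not published figures):

```python
import math

# Back-of-the-envelope spot-size scaling -- all numbers are assumed, not official.
freq_hz = 1.9e9                     # roughly the PCS band used for direct-to-cell (assumed)
wavelength_m = 3e8 / freq_hz        # ~0.16 m
altitude_m = 360e3                  # low test orbit (assumed)

def spot_diameter_m(aperture_m):
    """Approximate nadir spot: beamwidth ~ lambda/D, spot ~ altitude * beamwidth."""
    return altitude_m * (wavelength_m / aperture_m)

for aperture in (2.5, 5.0, 10.0):   # hypothetical array sizes in meters
    d = spot_diameter_m(aperture)
    area_km2 = math.pi * (d / 2000) ** 2
    print(f"{aperture:4.1f} m array -> ~{d/1000:5.1f} km spot, ~{area_km2:6.0f} km^2 cell")

# Doubling the aperture halves the spot diameter and quarters the cell area,
# so roughly a quarter as many users share the same per-beam capacity.
```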
 
  • Like
Reactions: Grendal
17Mbit effective rate from sat to phone. 15% datagram loss rate.
Interesting... the screenshot of that test targets a bandwidth rate of 20Mbps (the -b parameter), and a 15% packet loss nets the ~17Mbps being reported:

[Screenshot: iperf3 UDP test output]


It would be interesting to see that test run with no caps, and what the loss may be. If it goes up dramatically, I wonder if this implies they are targeting an acceptable UDP loss rate of 15% (understanding the individual user bandwidth would be lower in a real scenario).

15% packet loss is quite a bit for many services that use UDP for media, like many teleconferencing apps and some video streaming...
 
  • Informative
Reactions: Grendal and JB47394
Interesting... the screenshot of that test targets a bandwidth rate of 20Mbps (the -b parameter), and a 15% packet loss nets the ~17Mbps being reported:

[Screenshot: iperf3 UDP test output]

It would be interesting to see that test run with no caps, and what the loss may be. If it goes up dramatically, I wonder if this implies they are targeting an acceptable UDP loss rate of 15% (understanding the individual user bandwidth would be lower in a real scenario).

15% packet loss is quite a bit for many services that use UDP for media, like many teleconferencing apps and some video streaming...
Yeah. The iperf3 command set the target bandwidth to 20Mbit in UDP mode so 17Mbit may not be the max possible...
20*(1-15%) = 17Mbit
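
The same arithmetic as a quick sketch (the ~1450-byte datagram size is an assumption; recent iperf3 derives it from the path MTU):

```python
# Quick check of offered vs. delivered UDP rate; the ~1450-byte payload size
# is an assumption (recent iperf3 derives the datagram size from the path MTU).
offered_mbps = 20.0
loss = 0.15
delivered_mbps = offered_mbps * (1 - loss)
print(f"delivered ~= {delivered_mbps:.1f} Mbps")          # ~17 Mbps

payload_bytes = 1450                                       # assumed
pps = offered_mbps * 1e6 / (payload_bytes * 8)
print(f"~{pps:.0f} datagrams/s offered, ~{pps * loss:.0f}/s dropped")
```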
 
  • Like
  • Helpful
Reactions: Grendal and JB47394
I consider it an honor to be agreed with by @mongo :)
😅 😊
To the topic: the test may indicate the max bandwidth really is 17Mbit and the drop rate wasn't random...
The sending side was spitting out packets at the target rate, and a bottleneck would only pass them at the lower rate and drop new ones once the FIFOs were full. That fits with the first second having 0 dropped packets.
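
A toy version of that queue model (all numbers illustrative, including the buffer size):

```python
# Toy FIFO-bottleneck model (all numbers illustrative): 20 Mbit/s offered into
# a 17 Mbit/s link with a finite buffer. Nothing drops until the buffer fills,
# then drops settle at ~(20-17)/20 = 15% of the offered traffic.
offered, link, buffer_mbit = 20.0, 17.0, 3.0   # Mbit/s, Mbit/s, Mbit (buffer assumed)
backlog = 0.0
for second in range(1, 6):
    backlog += offered - link                  # excess queued this second
    dropped = max(0.0, backlog - buffer_mbit)  # overflow once the FIFO is full
    backlog = min(backlog, buffer_mbit)
    print(f"t={second}s  backlog={backlog:.1f} Mbit  "
          f"dropped={dropped:.1f} Mbit ({100 * dropped / offered:.0f}%)")
```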
 
  • Like
Reactions: scaesare
Sure, link loss will not change, however...
A larger array only increases data rate at the phone if it allows more simultaneous satellites beams or higher rate per beam. Otherwise, total bandwidth is unchanged even if spot size can shrink (which it might not to maintain coverage).

Not wrong, but to try and clear up some conflation:
  • The maximum data rate at the device is a function of the maximum per-beam data rate being shared by the number of users in the beam. (Obviously, it's not just straight division)
  • The actual data rate realized by the device is the above, degraded by things like environment or body occlusions
  • The maximum data rate of a beam is exclusively a function of regulated power levels (regardless of the size of the beam)
  • The maximum data rate of a satellite is the above times the number of simultaneous beams. Power/energy budgets could also be a limiting factor, but SX is too smart to cut those too close so it's reasonable to assume they're a non-issue.
A larger array improves received power at the satellite (which is likely the most marginal parameter). However, uplink data rate is usually less important than downlink (unless talking ACK datagrams).

True and true. More elements on the Rx side of the satellite increase G/T, and the return link (UT to satellite) is as asymmetric as any other two-way exchange and thus not nearly as impactful to the user experience as the forward link (Webex-type streaming aside, which again is a non-starter anyway when it comes to production service). Limitations are mostly on the device side itself; cranking the PAs up to 11 burns through the battery and can potentially create thermal problems as well.

A larger array reduces spot size on the ground which can help in handoff zones.

Maybe not. It's very likely that "the system" will need to actively track and assign a user to a beam. More beams adds more complexity. (Not unreasonable complexity, but still more)

Do we know if the terrestrial power level is maxed out with the minis? If it isn't, a larger (or higher power) array can increase signal strength at the phone. This could boost data rate.

In fact, SX is exceeding regulated power levels in this test. The previous 7 Mbps is the mathematical capacity of where they plan(ned) to operate. It's likely the low orbit for these test sats is helping some by reducing ranging loss, but either way, it's fair to assume production service will be lower capacity. (Hopefully with less packet loss...) That said, SX is no doubt looking to increase their approved PFD, so it's anyone's guess where they'll actually land WRT the simplistic metric of Mbps/beam.
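
For what "mathematical capacity" roughly means here, a heavily simplified Shannon-limit sketch (the 5 MHz channel width and the SNR values are assumptions, not filed numbers):

```python
import math

# Heavily simplified Shannon-limit sketch. The 5 MHz channel width (a PCS
# G-block-sized slice) and the SNR values are assumptions, not filed numbers.
bandwidth_hz = 5e6

def capacity_mbps(snr_db):
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr) / 1e6

for snr_db in (0, 2, 5, 10):
    print(f"SNR {snr_db:>2} dB -> ~{capacity_mbps(snr_db):4.1f} Mbps per beam")

# ~2 dB gives an ideal limit of roughly 7 Mbps; ~10 dB lands near the 17 Mbps
# seen in this test, though that's only suggestive given all the assumptions.
```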
 
Not wrong, but to try and clear up some conflation:
  • The maximum data rate at the device is a function of the maximum per-beam data rate being shared by the number of users in the beam. (Obviously, it's not just straight division)
  • The actual data rate realized by the device is the above, degraded by things like environment or body occlusions
  • The maximum data rate of a beam is exclusively a function of regulated power levels (regardless of the size of the beam)
  • The maximum data rate of a satellite is the above times the number of simultaneous beams. Power/energy budgets could also be a limiting factor, but SX is too smart to cut those too close so it's reasonable to assume they're a non-issue.


True and true. More elements on the Rx side of the satellite increase G/T, and the return link (UT to satellite) is as asymmetric as any other two-way exchange and thus not nearly as impactful to the user experience as the forward link (Webex-type streaming aside, which again is a non-starter anyway when it comes to production service). Limitations are mostly on the device side itself; cranking the PAs up to 11 burns through the battery and can potentially create thermal problems as well.

What does "G/T" stand for?


Maybe not. It's very likely that "the system" will need to actively track and assign a user to a beam. More beams adds more complexity. (Not unreasonable complexity, but still more)



In fact, SX is exceeding regulated power levels in this test. The previous 7 Mbps is the mathematical capacity of where they plan(ned) to operate. It's likely the low orbit for these test sats is helping some by reducing ranging loss, but either way, it's fair to assume production service will be lower capacity. (Hopefully with less packet loss...) That said, SX is no doubt looking to increase their approved PFD, so it's anyone's guess where they'll actually land WRT the simplistic metric of Mbps/beam.

Assuming they are allowed such by the test license... any ideas by how much they are exceeding the power levels?
 
  • Like
Reactions: Grendal
Maybe not. It's very likely that "the system" will need to actively track and assign a user to a beam. More beams adds more complexity. (Not unreasonable complexity, but still more)
Are you envisioning TDM with the satellite targeting individual users?

In fact, SX is exceeding regulated power levels in this test. The previous 7mpbs is the mathematical capacity of where they plan(ned) to operate. Its likely the low orbit for these tests sats is helping some by reducing ranging loss, but either way, it's fair to assume production service will be lower capacity. (Hopefully with less packet loss...). That said, SX is no doubt looking to increase their approved PFD so it's anyone's guess where they'll actually land WRT the simplistic metric of mbps/beam.
Is 7Mbps the physics based number? Elon had mentioned that value previously.

From the log, it looks like the packet loss was due to attempting to send 20Mbit through a 17Mbit pipe. Note how the first second has 0 dropped datagrams. There are loss bursts at the 4 and 13 second marks.
 
17Mbit effective rate from sat to phone. 15% datagram loss rate.

A few thoughts on this...

I wonder what and where the server side of that test was, and what version of iperf3 was running on each end. (Just thinking about a bug we fixed a long time ago where we picked a sub-optimal UDP datagram size by default, leading to fragmentation...anything released in the last 5 years or so should be sending roughly MTU-sized packets, and hopefully that's what they're doing.)

17Mbps is the throughput of the UDP payloads, which is indeed probably what most people care about. It doesn't count UDP, IP, or Ethernet frame overheads.
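
Roughly, assuming ~1450-byte payloads and standard header sizes (assumptions, not measured from this test), the framing overhead works out to something like:

```python
# Rough framing-overhead check, assuming ~1450-byte UDP payloads and standard
# UDP/IP/Ethernet header sizes (all assumed, not taken from the test).
payload = 1450
udp, ip, eth_hdr, fcs, preamble_ifg = 8, 20, 14, 4, 20
wire_bytes = payload + udp + ip + eth_hdr + fcs + preamble_ifg
overhead = wire_bytes / payload - 1
print(f"~{100 * overhead:.1f}% overhead; 17.0 Mbps of payload is "
      f"~{17.0 * (1 + overhead):.1f} Mbps on the wire")
```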

Sometimes UDP tests run a little better (less loss) if one increases the socket buffer sizes on each end, so there's a small chance they might be able to do better by just tweaking some iperf3 parameters.

Also I'd be curious to know if SpaceX folks tried iperf3 tests over TCP (well I'm sure they did, but what numbers did they get and how'd they tune it).

Bruce.
 

"Bruce is also the lead maintainer for iperf3, an open-source network measurement program that is used as a part of the perfSONAR system, as well as a stand-alone tool."

Thank you, sir.
 
A few thoughts on this...

I wonder what and where the server side of that test was, and what version of iperf3 was running on each end. (Just thinking about a bug we fixed a long time ago where we picked a sub-optimal UDP datagram size by default, leading to fragmentation...anything released in the last 5 years or so should be sending roughly MTU-sized packets, and hopefully that's what they're doing.)

17Mbps is the throughput of the UDP payloads, which is indeed probably what most people care about. It doesn't count UDP, IP, or Ethernet frame overheads.

Sometimes UDP tests run a little better (less loss) if one increases the socket buffer sizes on each end, so there's a small chance they might be able to do better by just tweaking some iperf3 parameters.

Also I'd be curious to know if SpaceX folks tried iperf3 tests over TCP (well I'm sure they did, but what numbers did they get and how'd they tune it).

Bruce.
Novice here:

Does the first second's data (no dropped packets) indicate no issues with MTU? Or is that insufficient to make a determination?

Buffer size would be interesting, but wouldn't that be more important for the links, not endpoints?

TCP couldn't have a better data rate since it's a layer higher; are you curious about packet loss distribution effects?
 
  • Like
Reactions: Grendal
Novice here:

Does the first second's data (no dropped packets) indicate no issues with MTU? Or is that insufficient to make a determination?

Sorry, I was in a workshop all day. :) Starting with MTU stuff...

I'm assuming that the whole path (including the cool Starlink bits) at least supports a full-sized Ethernet frame. Old versions of iperf3 used to send 8K UDP packets by default, which would get chopped into Ethernet-sized fragments on the sending host. This was bad because if the network lost one of those fragments, the network might send all the other ones all the way to the end only to see them get dropped on the receiver side because it never got all the fragments. This tends to magnify the effects of packet loss. Nevertheless, that's what some UDP applications do (like NFS).
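
To put a number on how much fragmentation magnifies loss (the per-fragment loss rate is just an assumed example):

```python
# Why fragmentation magnifies loss (illustrative numbers): an 8 KB datagram
# splits into ~6 fragments on a 1500-byte-MTU path, and losing any single
# fragment loses the whole datagram.
frag_loss = 0.05                 # assumed per-fragment loss rate
fragments = 6                    # ~8 KB / ~1480-byte fragments
datagram_loss = 1 - (1 - frag_loss) ** fragments
print(f"{100 * frag_loss:.0f}% fragment loss -> ~{100 * datagram_loss:.0f}% datagram loss")
```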

So somewhere around iperf-3.2 (2017-ish?) we changed the default UDP packet size to be derived from the path MTU, which we determined from the TCP control connection. Of course you can always change the size of UDP datagrams (-l flag), but SpaceX used the default.

I was worried about this at first because it looked like a Windows machine was involved (and for some reason there's a plethora of old Windows binaries floating around the Inter-tubes) but after some reflection I think the Windows terminal window was just for remote access to the phone maybe? So maybe this isn't an issue.

Buffer size would be interesting, but wouldn't that be more important for the links, not endpoints?

Endpoints can get overrun, although it doesn't usually happen at these bitrates. I've never run iperf3 on a phone though. Anyways, if I'm doing UDP tests and I see loss, one thing I'll often try is just bumping up the socket buffers (-w flag) just out of reflex.

Buffering in the routers (on each end of the links) is good for some applications and not others. For us, we're mostly interested in very large bulk data transfers, and there, good performance means not dropping anything. That can be bad for interactive applications because large buffers can cause delays without some kind of quality-of-service in place. (Sorry I'm dragging in a lot of other baggage.) :)

TCP couldn't have a better data rate since it's a layer higher; are you curious about packet loss distribution effects?

Sorta...I wonder if doing TCP would give a more accurate characterization of the path throughput, since it has to figure out what the bottleneck rate is.

Fun fact: We actually have a project that uses Starlink, though I don't work on it. Some of our staff have been grappling with the issue of how to support science that isn't next to a National Laboratory and our backbone network. This often involves various wireless technologies, including Starlink:


Bruce.

PS. Speaking for myself as an individual, views not necessarily representative of my employer.
 
Sorry, I was in a workshop all day. :) Starting with MTU stuff...

I'm assuming that the whole path (including the cool Starlink bits) at least supports a full-sized Ethernet frame. Old versions of iperf3 used to send 8K UDP packets by default, which would get chopped into Ethernet-sized fragments on the sending host. This was bad because if the network lost one of those fragments, the network might send all the other ones all the way to the end only to see them get dropped on the receiver side because it never got all the fragments. This tends to magnify the effects of packet loss. Nevertheless, that's what some UDP applications do (like NFS).

So somewhere around iperf-3.2 (2017-ish?) we changed the default UDP packet size to be derived from the path MTU, which we determined from the TCP control connection. Of course you can always change the size of UDP datagrams (-l flag), but SpaceX used the default.

I was worried about this at first because it looked like a Windows machine was involved (and for some reason there's a plethora of old Windows binaries floating around the Inter-tubes) but after some reflection I think the Windows terminal window was just for remote access to the phone maybe? So maybe this isn't an issue.



Endpoints can get overrun, although it doesn't usually happen at these bitrates. I've never run iperf3 on a phone though. Anyways, if I'm doing UDP tests and I see loss, one thing I'll often try is just bumping up the socket buffers (-w flag) just out of reflex.

Buffering in the routers (on each end of the links) is good for some applications and not others. For us, we're mostly interested in very large bulk data transfers, and there, good performance means not dropping anything. That can be bad for interactive applications because large buffers can cause delays without some kind of quality-of-service in place. (Sorry I'm dragging in a lot of other baggage.) :)



Sorta...I wonder if doing TCP would give a more accurate characterization of the path throughput, since it has to figure out what the bottleneck rate is.

Fun fact: We actually have a project that uses Starlink, though I don't work on it. Some of our staff have been grappling with the issue of how to support science that isn't next to a National Laboratory and our backbone network. This often involves various wireless technologies, including Starlink:


Bruce.

PS. Speaking for myself as an individual, views not necessarily representative of my employer.
Thanks!
Yeah, they were using a Windows box to run commands on the phone via adb shell.
 
  • Like
  • Helpful
Reactions: scaesare and bmah
Stupidly I just realized that of course they're likely to see more errors on a wireless radio link than a wireline network (in the latter case we basically assume it's zero). So...would there still be 15% packet loss at an offered load of 10Mbps? Or 40Mbps? Put another way how much of the packet loss is due to congestion and how much is due to errors on some link in the path (presumably the link to and from the satellite)? How much other network was there between the iperf3 client and server?
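
One way to tease that apart would be to sweep the offered rate, something like this sketch (the server address is a placeholder; -c/-u/-b/-t/-J are standard iperf3 options):

```python
# Sketch of an offered-load sweep to separate congestion drops from radio-link
# loss. The server address is a placeholder; -c/-u/-b/-t/-J are standard
# iperf3 options, and the JSON summary is parsed for the loss percentage.
import json, subprocess

SERVER = "iperf-server.example"   # placeholder
for rate in ("5M", "10M", "20M", "40M"):
    result = subprocess.run(
        ["iperf3", "-c", SERVER, "-u", "-b", rate, "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    summary = json.loads(result.stdout)["end"]["sum"]
    print(f"offered {rate}: {summary['lost_percent']:.1f}% loss")

# If loss stays ~15% even at 5-10 Mbps it points at the radio link; if it only
# shows up above ~17 Mbps it points at a bottleneck dropping the excess.
```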

Bruce.
 
Stupidly I just realized that of course they're likely to see more errors on a wireless radio link than a wireline network (in the latter case we basically assume it's zero). So...would there still be 15% packet loss at an offered load of 10Mbps? Or 40Mbps? Put another way how much of the packet loss is due to congestion and how much is due to errors on some link in the path (presumably the link to and from the satellite)? How much other network was there between the iperf3 client and server?

Bruce.
The first second of data had 0 dropped packets which makes me think the link is limited to 17Mbit and the 15% drop rate is due to the 20Mbit test rate.
However, you might know differently with your background...
 
  • Like
Reactions: scaesare
Firstly, @bmah, thanks for your participation here and for the work on iperf3, which the community certainly benefits from. Much appreciated.


Stupidly I just realized that of course they're likely to see more errors on a wireless radio link than a wireline network (in the latter case we basically assume it's zero). So...would there still be 15% packet loss at an offered load of 10Mbps? Or 40Mbps? Put another way how much of the packet loss is due to congestion and how much is due to errors on some link in the path (presumably the link to and from the satellite)? How much other network was there between the iperf3 client and server?

Bruce.

I agree, I thought the same thing when I said upthread, "It would be interesting to see that test run with no caps, and what the loss may be. If it goes up dramatically, I wonder if this implies they are targeting an acceptable UDP loss rate of 15% (understanding the individual user bandwidth would be lower in a real scenario)."

I also think looking at per-hop latency would be interesting...
 
  • Like
  • Love
Reactions: bmah and JB47394
"It would be interesting to see that test run with no caps, and what the loss may be. If it goes up dramatically, I wonder if this implies that are targeting an acceptable UDP loss rate of 15% (understanding the individual user bandwidth would be lower in a real scenario)."
If it goes up dramatically, wouldn't that indicate the link limit is 17Mbps? Otherwise, they would need to be hiding retries or adjusting the link parameters based on traffic as packets 'should' be independent.

Unless you are talking about boosting both the link rate and the iperf3 rate?