INTCP: Information-centric TCP for Satellite Network
Abstract
Satellite networks are booming as a way to provide high-speed, low-latency Internet access, but the transport layer has become one of the main obstacles. Legacy end-to-end TCP was designed for terrestrial networks and is not suitable for error-prone, delay-varying, and intermittent satellite links. A clean-slate design for the satellite transport layer is therefore necessary. This paper introduces INTCP, a novel information-centric hop-by-hop transport layer design. It carries out hop-by-hop packet retransmission and hop-by-hop congestion control with the help of caching and a request-response model. Hop-by-hop retransmission recovers lost packets on each hop and reduces retransmission delay. INTCP also controls traffic and congestion per hop: each hop maximizes its own bandwidth utilization, which improves end-to-end throughput. Caching further enables asynchronous multicast at the transport layer, saving precious spectrum resources in the satellite network. The performance of INTCP is evaluated with a simulated Starlink constellation over long-distance communication of more than 1000 km. The results demonstrate that, in the unicast scenario, INTCP reduces one-way delay and delay jitter and improves throughput compared with legacy TCP. In the multicast scenario, INTCP achieves substantially higher per-user throughput.
Index Terms:
satellite network, ICN, TCP, congestion control
I Introduction
In recent years, satellite networks have been booming again thanks to new technologies such as satellite miniaturization and rocket reusability [3]. Amazon's Project Kuiper plans to launch satellites into orbits at 590/610/630 km [5], while Starlink will launch satellites into three orbital shells at 340/550/1150 km [3]. By 2030, there will be more than 25,000 satellites in space [5]. Most of them are low-Earth-orbit (LEO) satellites that aim to provide "High speed, Low latency broadband connectivity across the globe" (Starlink's slogan), especially for rural areas, developing countries, aviation, and maritime communication.
It is still an open question whether satellite networks can provide bandwidth and latency comparable to terrestrial networks. The end-to-end transport layer plays a critical role: it is responsible for congestion avoidance and reliable end-to-end transmission. But the widely deployed TCP transport layer [4] was designed for terrestrial networks. It adjusts its congestion window on the premise that packet loss results from network congestion. In satellite networks with relatively high channel error rates and dynamic topology, packet loss is caused not only by congestion but also by physical-layer errors, which confuses loss-based congestion control. Secondly, the large bandwidth-delay product of satellite networks also deteriorates TCP performance [11]. In the Starlink constellation working in bent-pipe mode, users' equipment already measures a considerable Round-Trip-Time (RTT) [3]; when two locations farther apart are connected, the RTT increases proportionally. A larger RTT slows down the growth of TCP windows [4]. Thirdly, relative movement between satellites and ground stations breaks paths frequently. An LEO satellite is overhead for only a few minutes [6], so Ground-Satellite-Link (GSL) handover occurs frequently. Besides, Inter-Satellite-Link (ISL) setup time may range from a few seconds to tens of seconds [9], depending on the relative velocity and power of the satellites. Intermittent paths make it hard for current TCP to maintain a reliable end-to-end connection.
To this end, several new transport layer designs, such as TCP Hybla, TCP Spoofing, and TCP splitting, have been proposed to improve throughput on satellite links [12, 7, 8, 13, 14, 15]. TCP Hybla [12] is a modified version of TCP Reno designed specifically for the large RTT of satellite networks. But it still uses loss-based congestion control, which cannot distinguish packet losses caused by channel errors from congestion losses. Besides, splitting-TCP architectures such as TCP Spoofing [13], TCP splitting [14], and Performance Enhancing Proxies (PEP) [15] have been proposed to use a satellite-specific TCP version on satellite links without changing the protocol stack of end users. Splitting-TCP breaks the end-to-end TCP connection and establishes separate transport protocols for the terrestrial part and the satellite part. The satellite part optimizes its own TCP to deliver data efficiently over satellite links, while the terrestrial part uses legacy TCP, which is transparent to end users. Unfortunately, this approach still falls short of fully optimizing transmission on satellite links.
Hop-by-hop transport layer designs are considered more suitable for LEO networks because of their fine-grained congestion control. RCP [18] calculates a proper sending rate on each router and feeds back the minimum rate along the path to the sender, which then limits its actual sending rate. But the control loop of RCP is still end-to-end, so it is not agile enough for transient congestion and bandwidth fluctuation. Another kind of hop-by-hop design [19] uses a back-pressure algorithm to decide on each router whether the sending rate should be limited. Such designs rely on the buffer occupancy of the bottleneck or the available bandwidth of links, but acquiring these parameters is not inherently supported by the network, especially a wireless satellite network. In this paper, we propose a universal hop-by-hop congestion control scheme for LEO networks.

In this work, we are inspired by the idea of splitting-TCP and show that the Information-centric TCP (INTCP) model can improve end-to-end transport performance over error-prone satellite links. The architecture of INTCP is illustrated in Fig. 1 with three modules: Cache, Congestion Control, and Retransmission. It splits the end-to-end path into segments and provides hop-by-hop reliable transmission and congestion control. INTCP recovers packet loss on each hop instead of relying on end-to-end retransmission as in legacy TCP. This significantly reduces packet recovery delay, especially for satellite links with large RTT. Hop-by-hop Congestion Control maximizes link utilization on each hop, which exploits the capacity of satellite links while avoiding congestion at the bottleneck link. Furthermore, the network-embedded Cache copes with fan-in and fan-out traffic and stores packets for multicast and hop-by-hop retransmission.
INTCP keeps three fundamental designs of Information-Centric Networking (ICN) [1, 2] in the transport layer and ignores the others. 1) Named data: each packet is identified by a name, which is used for request and response, not for routing. 2) Request-Response model: a Requester first issues an INTEREST message, and the Responder replies with a DATA message. 3) Cache: all incoming packets are cached in local storage. Unlike previous ICN and Named Data Networking (NDN) proposals, INTCP is not designed as a narrow waist at the network layer but as a transport layer in place of TCP. It is compatible with any routing scheme, such as IP, identity-based, or geographic routing.
With these designs, INTCP is able to tackle the challenges faced by satellite networks. Firstly, with Hop-by-hop Congestion Control, the Responder on each hop can react quickly to changes in link conditions and improve bandwidth utilization through hop-level rather than end-to-end control. Furthermore, with the help of the Cache, INTCP improves end-to-end throughput by link multiplexing: each hop works independently and sends as much data as it can to the next hop. Because the capacity of satellite hops varies over time, extra incoming data is stored in the Cache and delivered to the next hop when link conditions improve. Secondly, INTCP enables Hop-by-hop Loss Recovery. Once a packet is lost on a hop, the Requester simply re-issues an INTEREST message and the Responder retransmits the lost packet; the recovery delay is one hop rather than the end-to-end RTT of the TCP model. Lastly, INTCP provides an intrinsic multicast capability. Thanks to the Cache and the Request-Response model, any node can request packets stored in the cache, which enables asynchronous multicast and saves the satellite network's precious spectrum resources.
To evaluate the performance of INTCP, we carried out extensive simulated experiments with a trace of the Starlink constellation, which consists of 1,600 satellites on a 1150 km low-Earth orbit. Two terminals at arbitrary locations on Earth are connected through the moving satellites. On average, INTCP improves end-to-end throughput and reduces one-way delay compared with Reno and Hybla. In multicast scenarios, INTCP further saves bandwidth consumption. The evaluation results show that INTCP significantly improves the performance of the satellite network and is a feasible new paradigm for the satellite transport layer.
The rest of the paper is organized as follows. The architecture of INTCP is elaborated in Sec. II. The hop-by-hop congestion control and the retransmission mechanism are introduced in Sec. III and Sec. IV, respectively. Sec. V demonstrates the performance gains through extensive experiments. Finally, we summarize the paper in Sec. VI.
II Information-centric Transport Layer
The INTCP stack, an information-centric transport layer, is deployed on ground stations and INTCP routers. It performs congestion control and retransmits lost packets on each hop. INTCP mainly consists of three modules: Cache, Congestion Control, and Retransmission, as shown in Fig. 2. The remaining layers, such as the physical, link, network, and application layers, are kept the same as in the TCP/IP architecture. This keeps INTCP compatible with current Internet devices and applications.

The transmission paradigm of INTCP is Request-Response-Cache. When an INTEREST message arrives, the Responder first tries to retrieve the data from its Cache. If found, the data is returned to the Requester; if not, the INTEREST message is forwarded to the next hop. When a DATA message arrives, the data is stored in the Cache for later retransmission and multicast, and at the same time it is returned to the Requester.
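To make this paradigm concrete, the sketch below outlines the cache-check-then-forward logic in Python. It is a minimal illustration, not the INTCP implementation: the `IntcpNode` class, the `Cache` interface, and the `respond`/`forward` stubs are hypothetical names introduced only for this example.

```python
# Minimal sketch of the Request-Response-Cache logic described above.
# IntcpNode, Cache, respond() and forward() are illustrative stand-ins,
# not the real INTCP API.

class IntcpNode:
    def __init__(self, cache, next_hop):
        self.cache = cache        # local packet store
        self.next_hop = next_hop  # upstream INTCP node for cache misses

    def on_interest(self, interest):
        data = self.cache.get(interest.name, interest.range_start, interest.range_end)
        if data is not None:
            self.respond(interest.requester, data)   # cache hit: answer locally
        else:
            self.forward(self.next_hop, interest)    # cache miss: pull from upstream

    def on_data(self, data_msg):
        # every incoming DATA is cached for hop retransmission and later multicast
        self.cache.put(data_msg.name, data_msg.range_start,
                       data_msg.range_end, data_msg.data)
        self.respond(data_msg.requester, data_msg)   # and passed on to the Requester

    def respond(self, requester, data_msg):
        raise NotImplementedError  # send DATA toward the Requester's address

    def forward(self, next_hop, interest):
        raise NotImplementedError  # send the INTEREST toward the Responder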
On each hop, the Requester runs the congestion control algorithm based on the network state of that hop, such as packet loss, RTT, and available bandwidth, and periodically sends the resulting data sending rate to the Responder of the hop. The Responder adjusts its sending rate to avoid congestion on the hop. It also retransmits lost packets indicated by interests from the Requester; since the lost packets are stored in the Cache, they can be retransmitted immediately, avoiding the long latency of end-to-end loss recovery in legacy TCP.
INTEREST message: An INTEREST message has the format (Requester, Responder, Name, RangeStart, RangeEnd, SendRate). Name identifies the content or service. The data of the content or service is segmented into packets, each labeled by (RangeStart, RangeEnd), the byte-level range of the data within the whole content. In an INTEREST message, these fields represent the range of data being requested.
SendRate, the expected sending rate for the Responder of the hop, is piggybacked in the INTEREST message. Upon receiving a packet, the Requester records the packet's delay, arrival rate, and losses, computes a sending rate from this information, and feeds it back to the Responder for congestion control.
Since the Request-Response model is stateless, the Responder does not know who requests the data. The Requester field identifies the owner of the INTEREST message; when the network layer uses IP, it is the Requester's IP address, and the Responder answers the interest by sending data to this address. The Responder field records the address of the Responder, and satellite routers forward the interest toward the Responder using this address.
DATA message: A DATA message is sent from the Responder back to the Requester and has the format (Requester, Responder, Name, RangeStart, RangeEnd, Timestamp, Data). The Name, RangeStart, RangeEnd, Requester, and Responder fields are as discussed above. The Timestamp field indicates the emission time of the packet at the Responder; the Requester uses it to calculate the one-way delay of data transmission on this hop. The Data field carries the payload.
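For illustration, the two message formats could be represented as the following Python dataclasses. The field names follow the paper; the concrete wire encoding is not specified there, so this is only a convenient in-memory view.

```python
# Illustrative dataclasses for the two INTCP message formats described above;
# field names follow the paper, the wire encoding is an open implementation choice.
from dataclasses import dataclass

@dataclass
class Interest:
    requester: str      # address of the node issuing the interest
    responder: str      # address of the node expected to answer
    name: str           # content/service name (used for request/response, not routing)
    range_start: int    # first byte of the requested range
    range_end: int      # last byte of the requested range
    send_rate: float    # expected sending rate fed back to the Responder

@dataclass
class Data:
    requester: str
    responder: str
    name: str
    range_start: int
    range_end: int
    timestamp: float    # emission time at the Responder, used for one-way delay
    data: bytes         # payload
```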
III Hop-by-Hop Congestion Control
In INTCP, congestion control is segmented by hop, as shown in Fig. 3, and each hop performs congestion control independently. At one end of the hop, the Requester observes the network state and runs the congestion control algorithm to obtain a proper data sending rate, which it periodically feeds back to the Responder. At the other end, the Responder controls DATA message sending to avoid congestion and achieve bandwidth fairness on the hop.

The network state of a hop includes the one-way delay (OWD), packet loss, and available bandwidth. To obtain the OWD, the Responder writes the Timestamp into the data packet header when sending it, and the Requester computes the OWD as the difference between the time it receives the packet and that Timestamp. Packet loss and historical throughput can be computed straightforwardly on the Requester side. This information implicitly reflects the hop's network condition and congestion status. In addition, the Requester takes the available buffer size into consideration. Since multiple flows may exist on one hop simultaneously, the congestion control module of the satellite router gathers information on all flows and computes the available bandwidth for them jointly, so that a fair bandwidth allocation can easily be reached.
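A small sketch of how a Requester might accumulate this per-hop state is shown below. It assumes the two ends of a hop have (approximately) synchronized clocks, as implied by the Timestamp-based OWD; the class and method names are illustrative, not part of INTCP.

```python
# Sketch of per-hop state estimation on the Requester side; thresholds, window
# sizes and names are illustrative assumptions, not the paper's implementation.
import time

class HopStateEstimator:
    def __init__(self):
        self.owd_samples = []   # recent one-way-delay samples (seconds)
        self.received = 0
        self.lost = 0

    def on_data(self, data_msg):
        # OWD = local receive time minus the Responder's emission Timestamp
        owd = time.time() - data_msg.timestamp
        self.owd_samples.append(owd)
        self.received += 1

    def on_loss(self, n_packets=1):
        self.lost += n_packets

    def loss_rate(self):
        total = self.received + self.lost
        return self.lost / total if total else 0.0

    def min_owd(self):
        # the minimum OWD approximates propagation delay; the excess is queuing delay
        return min(self.owd_samples) if self.owd_samples else None
```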
Once the Responder parses the data sending rate piggybacked in an interest, it uses this value to control the data sending process, regardless of whether the data comes from the previous hop or from its cache. We use the token bucket algorithm to realize this rate control.
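The following is a minimal token-bucket pacer of the kind mentioned above. The rate and burst values are placeholders, and the actual INTCP implementation may differ in detail; it only illustrates how the Responder can enforce the SendRate fed back by the Requester.

```python
# Minimal token-bucket rate control; rate/burst values are placeholders.
import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0     # token refill rate in bytes per second
        self.capacity = burst_bytes    # maximum burst size in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def set_rate(self, rate_bps):
        # called when a new SendRate arrives piggybacked on an INTEREST
        self._refill()
        self.rate = rate_bps / 8.0

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_send(self, packet_len):
        # returns True if the packet may be sent now under the current rate
        self._refill()
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False
```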
Specifically, the goal of our congestion control algorithm is to achieve fairness, high bandwidth utilization, and low queuing delay on each hop. It also avoids buffer overflow on the Requester. When the buffer on the Requester is nearly full and the predicted output bandwidth is low, the Requester generates a smaller sending rate, and the Responder lowers its sending rate accordingly. If the Responder is forwarding data from the previous hop, its buffer also fills up, since its input rate exceeds its output rate; acting as a Requester on the previous hop, it will in turn generate a smaller sending rate. In this way, when the bottleneck of the network is about to become congested, the upstream hops lower their sending rates one by one, preventing too much data from being pushed toward the bottleneck, which would otherwise increase queuing delay and cause packet loss.

Whereas legacy end-to-end loss-based or RTT-based congestion control algorithms are not suitable for time-varying satellite links, our hop-by-hop algorithm is robust to such variation: the condition of a single segment can be measured more precisely than that of the end-to-end path, and the feedback delay is shorter than in legacy end-to-end TCP, so INTCP is more agile in a satellite network. Moreover, different congestion control algorithms can coexist on different hops at the same time. A hop over a terrestrial link can use regular methods such as Cubic or Reno, while hops over satellite links can use our rate-based congestion control. The buffer on the satellite router makes it possible to match the different rates of adjacent hops, so the overall bandwidth utilization is improved.
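The buffer-driven back-pressure behavior can be sketched as follows. The scaling formula below is an assumption chosen purely for illustration (it is not the algorithm used in the paper): the Requester scales the rate it feeds back upstream by the free space in its own buffer, so pressure propagates hop by hop toward the source.

```python
# Illustrative back-pressure rate calculation; the 0.5/1.5 scaling is an
# assumption for this sketch, not the formula used by INTCP.
def feedback_rate(predicted_output_bps, buffer_size_bytes, buffer_used_bytes):
    """Sending rate (bit/s) the Requester feeds back to the upstream Responder."""
    free_frac = max(buffer_size_bytes - buffer_used_bytes, 0) / buffer_size_bytes
    # nearly full buffer -> ask for less than we can drain, so the backlog shrinks;
    # nearly empty buffer -> allow some overshoot, so the buffer can absorb bursts
    return predicted_output_bps * (0.5 + free_frac)

# Example: a nearly full buffer roughly halves the requested rate relative to the
# predicted output bandwidth, propagating pressure toward the source.
print(feedback_rate(predicted_output_bps=20e6,
                    buffer_size_bytes=1_000_000,
                    buffer_used_bytes=990_000))   # ~10.2 Mbit/s
```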
IV Retransmission Mechanism
INTCP uses a selective retransmission mechanism to guarantee reliable data transport in satellite networks. When the receiver detects packet losses, it issues a retransmission Interest message to request the lost packets. INTCP has two retransmission mechanisms: SeqHole Retransmission and Timeout Retransmission, as shown in Fig. 4.
SeqHole Retransmission is a hop-by-hop retransmission: it detects and recovers lost packets on each hop. When the Requester of a hop receives a DATA message, it checks whether the data is sequential according to the RangeStart and RangeEnd fields. Once a range of data has been skipped by three subsequent packets, the missing range is assumed to be lost, and the Requester issues a SeqHole Retransmission Interest message with the format:
INTEREST(Name, LostFrom, LostEnd, TTL)    (1)
The LostFrom and LostEnd fields indicate the range of lost packets to retransmit. The Responder sends back DATA messages covering this range.
The TTL field indicates how many hops this message may be forwarded. Usually the TTL of a SeqHole Retransmission message is 1, meaning the retransmission Interest can only be forwarded within one hop. This avoids repeated data requests, since every downstream node would otherwise notice the packet loss and emit its own Retransmission Interest; a TTL of one hop ensures that retransmission requests are not duplicated.
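The "skipped by three subsequent packets" rule can be sketched as a small gap detector, shown below. It is simplified to contiguous, in-order byte ranges and its names are illustrative; a real implementation must also handle partially overlapping ranges.

```python
# Illustrative SeqHole detector: a byte range is declared lost once three later
# packets have arrived beyond it (simplified; not the INTCP implementation).
class SeqHoleDetector:
    def __init__(self, reorder_threshold=3):
        self.expected = 0   # next in-order byte expected on this hop
        self.holes = {}     # (start, end) -> number of later packets seen past it
        self.threshold = reorder_threshold

    def on_data(self, range_start, range_end):
        lost = []
        # retransmitted data that covers an outstanding hole removes it
        for hole in [h for h in self.holes
                     if range_start <= h[0] and range_end >= h[1]]:
            del self.holes[hole]
        # data arriving beyond the expected byte opens a new hole
        if range_start > self.expected:
            self.holes.setdefault((self.expected, range_start - 1), 0)
        self.expected = max(self.expected, range_end + 1)
        # every packet arriving past a hole counts toward its loss declaration
        for hole in list(self.holes):
            if range_start > hole[1]:
                self.holes[hole] += 1
                if self.holes[hole] >= self.threshold:
                    lost.append(hole)   # -> issue SeqHole Retransmission Interest
                    del self.holes[hole]
        return lost                     # list of (LostFrom, LostEnd) ranges
```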

Timeout Retransmission is an end-to-end retransmission mechanism. When the Consumer issues an INTEREST message, it records the name, range, and time in its local Retransmission Timeout (RTO) table. When a packet is received, the Consumer removes the corresponding record from the RTO table. If the RTO expires and the data has still not been received, the Consumer issues a Timeout Retransmission Interest message.
Timeout Retransmission complements hop-by-hop retransmission. Hop-by-hop retransmission enables on-path packet recovery, which significantly reduces recovery delay, but it cannot guarantee end-to-end reliability on its own because it recovers lost packets only within one hop. When the satellite orbit changes or a link breaks, packets that cannot be recovered on a hop are retransmitted from the server through the Timeout Retransmission mechanism.
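A minimal sketch of the Consumer-side RTO table described above is given below; the timeout value and data structures are illustrative assumptions.

```python
# Sketch of the Consumer's Retransmission Timeout (RTO) table; the fixed 1 s
# timeout and the dictionary layout are illustrative, not the paper's values.
import time

class RtoTable:
    def __init__(self, rto_seconds=1.0):
        self.rto = rto_seconds
        self.pending = {}   # (name, range_start, range_end) -> time interest was sent

    def on_interest_sent(self, name, range_start, range_end):
        self.pending[(name, range_start, range_end)] = time.monotonic()

    def on_data_received(self, name, range_start, range_end):
        self.pending.pop((name, range_start, range_end), None)

    def expired(self):
        # ranges whose RTO ran out: reissue a Timeout Retransmission Interest for these
        now = time.monotonic()
        return [key for key, sent in self.pending.items() if now - sent > self.rto]
```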
V Performance Evaluations
In this section, we evaluate INTCP in both unicast and multicast transmission scenarios through trace-based simulation.
V-A Methodology
Dataset: Our experiments need LEO link information such as bandwidth, but no public dataset is currently available, so we simulate the links by building a satellite link model and set its parameters using test results published by Starlink [3]. The core constellation of Starlink consists of 1,600 satellites evenly distributed across 32 orbital planes at an altitude of 1150 km with an inclination of 53 degrees. Based on this information, we compute the satellites' physical coordinates at any time and use models such as the Shannon formula to calculate the maximum physical bandwidth, packet loss rate, and delay of each link. Finally, we use the classic Dijkstra algorithm to obtain the routing table, which completes the testbed dataset for the transport layer experiments.
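The sketch below illustrates this dataset-generation step in Python: per-link capacity from the Shannon formula and shortest-path routes computed with Dijkstra (via the networkx library). The bandwidth and SNR values are placeholders, not actual Starlink parameters.

```python
# Illustrative link-capacity and routing computation; numeric values are placeholders.
import math
import networkx as nx

SPEED_OF_LIGHT = 3e8  # m/s

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def link_delay_s(distance_m):
    return distance_m / SPEED_OF_LIGHT

# Toy three-node topology: two ground stations bridged by one satellite.
g = nx.Graph()
g.add_edge("gs_beijing", "sat_0",
           capacity=shannon_capacity_bps(250e6, 10 ** (10 / 10)),  # 250 MHz, 10 dB SNR
           delay=link_delay_s(1_150_000))
g.add_edge("sat_0", "gs_newyork",
           capacity=shannon_capacity_bps(250e6, 10 ** (8 / 10)),
           delay=link_delay_s(1_150_000))

# Dijkstra routing table (here weighted by propagation delay).
route = nx.shortest_path(g, "gs_beijing", "gs_newyork", weight="delay")
print(route)
```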
Testbed: A network with up to hundreds of nodes, dynamic topology, and varying link properties is necessary to simulate an LEO constellation. Legacy network simulators such as ns-2 and ns-3 [16] support such a highly customized network poorly and also raise validity concerns. Using one real terminal or virtual machine per satellite-network node would be either unaffordable or prohibitively slow. We therefore build the testbed on Mininet [17], which creates virtual networks using process-based virtualization and the network namespaces available in recent Linux kernels. In this way, our satellite network testbed can run on a PC with ordinary hardware. Another advantage over a network simulator is that it supports standard Linux network software, so our implementation of the INTCP protocol can be migrated to real Linux devices at zero cost. We use a PC with an Intel i7-8700 CPU and 64 GB RAM as the platform to run the simulation experiments.
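For reference, a minimal Mininet script of the kind the testbed builds on is shown below: two hosts joined through an intermediate switch, with per-link bandwidth, delay, and loss shaped via TCLink. The values are illustrative and do not come from the Starlink trace.

```python
# Minimal Mininet topology sketch; link parameters are illustrative only.
from mininet.net import Mininet
from mininet.link import TCLink
from mininet.node import OVSBridge

def two_hop_net():
    net = Mininet(link=TCLink, switch=OVSBridge, controller=None)
    h1 = net.addHost('h1')    # ground endpoint
    h2 = net.addHost('h2')    # ground endpoint
    s1 = net.addSwitch('s1')  # stands in for the INTCP satellite router
    net.addLink(h1, s1, bw=20, delay='25ms')            # ground link
    net.addLink(s1, h2, bw=35, delay='150ms', loss=5)   # ground-satellite link
    net.start()
    return net

if __name__ == '__main__':
    net = two_hop_net()
    net.pingAll()
    net.stop()
```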
Experiment setting: First, a set of experiments in a two-hop network is presented to evaluate the potential improvement of INTCP. In this environment, two endpoints and one satellite router form two hops: one ground link and one ground-satellite link (GSL). The GSL parameters are set manually to observe the influence of various factors on network performance. Then we chose transcontinental communication as the simulation scenario, because long-distance (more than 1000 km) communication is a typical application of satellite networks [6] and faces more serious problems such as high latency and dynamic topology. Specifically, we chose Beijing and New York as the transmission endpoints. Both the unicast (single user) scenario and the multicast (multiple users requesting the same content) scenario are evaluated below.
Performance metrics: In the first two experiments, we use three metrics to evaluate network performance: one-way delay, jitter, and throughput. One-way delay is the time a packet takes from the responder to the requester. Jitter is the difference in delay between two consecutive seconds. Throughput is the total number of bits received by the requester per unit of time. In the multicast scenario, we focus on throughput.
V-B Two-hop Experiment
In Fig. 5(a), we assume that the GSL bandwidth fluctuates periodically between 5 Mbps and 35 Mbps in a square wave while the ground link stays at 20 Mbps. Without a cache, the throughput of end-to-end TCP at every moment is limited to the smaller of the two link bandwidths. In the hop-by-hop case, the cache on the router stores data from the ground link when the GSL bandwidth is low and sends it out when the GSL bandwidth is high. Even when the GSL bandwidth exceeds the ground link bandwidth, the GSL is still fully used, making the average throughput of hop-by-hop INTCP much higher than that of end-to-end TCP.
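A back-of-envelope check of this gain, assuming the GSL spends equal time at 5 Mbps and 35 Mbps:

```python
# Average throughput in the square-wave example, ignoring protocol overheads.
ground = 20.0                 # Mbps, constant ground link
gsl_low, gsl_high = 5.0, 35.0  # Mbps, square-wave GSL

# End-to-end: throughput tracks min(ground, GSL) at every instant.
e2e_avg = (min(ground, gsl_low) + min(ground, gsl_high)) / 2     # = 12.5 Mbps

# Hop-by-hop with cache: the router buffers ground traffic during the low phase
# (backlog grows at 20 - 5 = 15 Mbps) and drains it during the high phase
# (35 - 20 = 15 Mbps of spare capacity), so on average only the smaller of the
# ground rate and the average GSL capacity limits throughput.
hop_avg = min(ground, (gsl_low + gsl_high) / 2)                   # = 20 Mbps

print(e2e_avg, hop_avg)
```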
Fig. 5(b) compares how well TCP and INTCP alleviate the impact of intermittent connectivity. We assume a GSL handover event happens every 20 s. As the interruption time grows, end-to-end TCP suffers clear performance deterioration, while INTCP reduces the negative impact to a negligible level.
Fig. 5(c) shows the gain of on-hop loss recovery in terms of one-way delay. In this experiment, the RTT of the ground link is 50 ms, the RTT of the GSL ranges from 20 ms to 300 ms, and the loss rate of the GSL is 5%. The curve in the center of each band indicates the mean one-way delay, and the lower and upper bounds of each band indicate the 5% and 95% cumulative distribution points, respectively. INTCP reduces the average one-way delay and its variance, which is important for applications such as video conferencing.
V-C Unicast Scenario
In the unicast evaluation, we transfer data from Beijing to New York and run the experiments for 24 hours in the simulation environment. The baseline is legacy TCP with the Reno congestion control algorithm, referred to as TCP below.
Fig. 6(a) shows the cumulative distribution function (CDF) of one-way delay. TCP Reno is prone to large delays (e.g., 300 ms), while 80% of INTCP's delays are below 180 ms. This is because TCP's performance drops sharply in an environment with a high packet loss rate, whereas INTCP's hop-by-hop retransmission mechanism effectively alleviates this problem.
Fig. 6(b) shows that the jitter of INTCP is almost always below 100 ms, while the jitter of TCP is much larger, because end-to-end retransmission causes TCP to experience huge jitter.
Fig. 6(c) shows that the throughput of INTCP is significantly better than that of TCP, because Reno's loss-based congestion control wastes a large part of the physical bandwidth in a high-delay, high-loss network, and its throughput is constrained by the capacity of the bottleneck link. INTCP, in contrast, maximizes each link's bandwidth utilization, and the Cache multiplexes these links to further improve throughput.
V-D Multicast transmission evaluation
In the multicast evaluation, we assume that many users are evenly distributed along the route from Beijing to New York and request the same data, with New York as the data source. Their request times are tightly clustered, so cache misses due to cache replacement can be ignored.
To analyze the contribution of the cache, we designed a variant of INTCP, called INTCP-uni, as the baseline. INTCP-uni differs only in that it has no Cache: it retrieves data directly from the data source, like legacy TCP.
Fig. 7 shows that as the number of users increases, the average throughput per INTCP-uni user drops sharply, while INTCP maintains relatively high throughput. For scenarios such as live video streaming, this gap translates into a large difference in viewing experience.
VI Conclusion
To provide high-speed, low-latency Internet access over error-prone satellite links, the satellite network's transport layer needs substantial innovation. This paper proposes INTCP, an information-centric hop-by-hop transport layer that guarantees reliable end-to-end transmission through hop-by-hop loss recovery and congestion control. Hop-by-hop retransmission reduces the delay of reliable end-to-end transport, while hop-by-hop congestion control maximizes link bandwidth utilization. The Cache and the Request-Response model are the basis of this hop-by-hop transport layer; INTCP leverages them to recover lost packets and control each flow's traffic per hop. The simulation experiments validate that INTCP improves average throughput and reduces delay and jitter. As future work, learning-based congestion control for satellite networks will be investigated, and the cost of running INTCP will be analyzed and optimized by refining the caching and forwarding strategies.
References
- [1] V. Jacobson et al., "Networking Named Content," Proc. 5th International Conference on Emerging Networking Experiments and Technologies (ACM CoNEXT), pp. 1-12, Rome, Italy, 2009.
- [2] B. Ahlgren, C. Dannewitz, C. Imbrenda, D. Kutscher, and B. Ohlman, ”A Survey of Information-Centric Networking,” IEEE Communications Magazine, Vol. 50, Issue 7, pp. 26-36, July 2012.
- [3] Space Exploration Technologies Corp., "Starlink April 2020 Modification," SAT-MOD-20200417-00037, April 2020.
- [4] T. R. Henderson and R. H. Katz, ”Transport protocols for Internet-compatible satellite networks,” IEEE Journal on Selected Areas in Communications, vol. 17, no. 2, pp. 326-344, Feb. 1999.
- [5] A. U. Chaudhry and H. Yanikomeroglu, "Free Space Optics for Next-Generation Satellite Networks," IEEE Consumer Electronics Magazine (Early Access).
- [6] Mark Handley, ”Delay is Not an Option: Low Latency Routing in Space.” 17th ACM Workshop on Hot Topics in Networks (HotNets ’18). New York, NY, USA, 85–91, 2018.
- [7] R. C. Durst, G. J. Miller, and E. J. Travis, "TCP extensions for space communications," Wireless Networks, vol. 3, pp. 389-403, 1997.
- [8] I. Psaras, G. Papastergiou, V. Tsaoussidis and N. Peccia, ”DS-TP: Deep-Space Transport Protocol,” 2008 IEEE Aerospace Conference, Big Sky, MT, 2008, pp. 1-13,
- [9] Debopam Bhattacherjee and Ankit Singla, ”Network topology design at 27,000 km/hour.” In Proceedings of the 15th International Conference on Emerging Networking Experiments And Technologies (CoNEXT ’19), New York, NY, USA, 341–354. 2019.
- [10] Giacomo Giuliari, Tobias Klenze, Markus Legner, David Basin, Adrian Perrig, and Ankit Singla. ”Internet backbones in space.” SIGCOMM Computer Communication. Rev. 50, pp. 25–37, Jan. 2020.
- [11] T. V. Lakshman and U. Madhow, ”The performance of TCP/IP for networks with high bandwidth-delay products and random loss,” IEEE/ACM Trans. Network, vol. 5, no. 3, pp. 336–350, Jun. 1997.
- [12] C. Caini and R. Firrincieli, "TCP Hybla: A TCP enhancement for heterogeneous networks," International Journal of Satellite Communications and Networking, vol. 22, no. 5, pp. 547-566, 2004.
- [13] M. Bacco, T. de Cola, and A. Gotta, ”TCP new Reno over DVB-RCS2 random access links: Performance analysis and throughput estimation,” in Proc. IEEE GLOBECOM, San Diego, CA, USA, pp. 1–6. Dec. 2015.
- [14] M. Muhammad, F. Kasmis, and T. De Cola, ”Advanced transport satellite protocol,” in Proc. IEEE GLOBECOM, Anaheim, CA, USA, pp. 3299–3304, Dec. 2012.
- [15] J. Border, M. Kojo, J. Griner, G. Montenegro, and Z. Shelby, ”Performance enhancing proxies intended to mitigate link-related degradations,” IETF RFC 3135, Fremont, CA, USA, Jun. 2001.
- [16] ns-3 — a discrete-event network simulator for internet systems. https://www.nsnam.org/
- [17] B. Lantz, B. Heller, and N. McKeown, "A network in a laptop: rapid prototyping for software-defined networks," Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks, pp. 1-6, 2010.
- [18] Dukkipati, Nandita. ”Rate Control Protocol (RCP): Congestion control to make flows complete quickly.” Stanford University, 2008.
- [19] Partho P. Mishra and Hemant Kanakia. ”A hop by hop rate-based congestion control scheme.” in SIGCOMM Comput. Commun. Rev. 22, 4 (Oct. 1992), 112–123.