improve mycelium argument
@@ -37,6 +37,83 @@ the 80\% success rate sets a baseline expectation, while the 55-second
timeout informs analysis of each implementation's keep-alive behavior
during source code review.

\subsection{The Babel routing protocol}
\label{sec:babel}

Babel~\cite{chroboczek_babel_2021} is a distance-vector routing
protocol designed for both wired and wireless mesh networks. Each
node periodically sends \emph{Hello} messages to discover neighbours
and \emph{Update} messages to advertise reachable prefixes along with
a numeric cost metric. A node selects the route with the lowest
cumulative metric for each destination, subject to a
\emph{feasibility condition} that prevents routing loops. Because
Babel is distance-vector rather than link-state, nodes only know the
cost of their own best path, not the full topology.

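To make the loop-avoidance logic concrete, the following is a minimal
sketch of how a distance-vector node might apply the feasibility
condition before adopting a route. It is illustrative Python with
hypothetical names, heavily simplified from the protocol in
\cite{chroboczek_babel_2021}: router-ids, sequence numbers, and the
maintenance of the feasible distance itself are all omitted.

\begin{verbatim}
# A node tracks, per prefix, the smallest metric it has ever
# advertised (its "feasible distance") and accepts an update only
# if the advertised metric is strictly smaller.
def accept_update(routes, feasible_distance, prefix, neighbour,
                  advertised_metric, link_cost):
    """routes maps prefix -> (next_hop, metric)."""
    # Feasibility condition: an update that does not strictly improve
    # on the feasible distance could close a routing loop; reject it.
    if advertised_metric >= feasible_distance.get(prefix, float("inf")):
        return routes
    # Distance-vector cost: the neighbour's metric plus the link cost.
    metric = advertised_metric + link_cost
    current = routes.get(prefix)
    if current is None or metric < current[1]:
        routes[prefix] = (neighbour, metric)
    return routes
\end{verbatim}
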
Two properties of Babel matter for the benchmarks in
Chapter~\ref{Results}. First, route advertisements are periodic: a
node will not learn about a new path until the next Update interval,
which can be on the order of minutes depending on the implementation's
timer settings. Second, Babel intentionally resists frequent route
changes to avoid flapping; a node may continue using a suboptimal path
until a significantly better alternative is advertised. Both
properties can cause the selected route for a given destination to
differ across consecutive benchmark runs, even when the physical
topology has not changed.

\subsection{TCP flow control and congestion control}
\label{sec:tcp_windows}

TCP uses two window mechanisms to regulate how much unacknowledged data
a sender may have in flight. The \emph{receive window}
(\texttt{rwnd}), also called the \emph{send window} in
\texttt{iperf3} output, is advertised by the receiver and reflects how
much buffer space it has available. The \emph{congestion window}
(\texttt{cwnd}) is maintained locally by the sender and tracks the
network's estimated capacity. At any point, the sender may transmit
up to $\min(\texttt{rwnd}, \texttt{cwnd})$ bytes beyond the last
acknowledged byte~\cite{rfc5681}.

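The sender-side constraint can be written as a one-line sketch
(illustrative Python, not code from any real TCP stack):

\begin{verbatim}
# The sender may inject new data only up to the smaller of the two
# windows, minus the data that is already in flight.
def sendable_bytes(rwnd, cwnd, bytes_in_flight):
    effective_window = min(rwnd, cwnd)
    return max(0, effective_window - bytes_in_flight)
\end{verbatim}
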
The congestion window starts small (typically a few segments) and
grows during the \emph{slow-start} phase, doubling each round trip
until it reaches a threshold or triggers a loss event. After that,
\emph{congestion avoidance} takes over and the window grows linearly.
When the sender detects a loss (through duplicate ACKs or a
retransmission timeout), it treats the loss as a signal of congestion:
the window is reduced, often halved, and the sender enters a recovery
phase before resuming growth. Each retransmission therefore has a
direct mechanical cost: it shrinks the congestion window and reduces
the instantaneous sending rate.

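These dynamics can be condensed into a toy model with one decision per
round trip; the MSS, the initial values, and the single loss at round
trip 12 are assumptions chosen for illustration, and real stacks are
considerably more involved:

\begin{verbatim}
MSS = 1460  # assumed segment size in bytes

def next_cwnd(cwnd, ssthresh, loss):
    if loss:
        ssthresh = max(cwnd // 2, 2 * MSS)        # halve on loss
        return ssthresh, ssthresh                 # resume from new threshold
    if cwnd < ssthresh:
        return min(2 * cwnd, ssthresh), ssthresh  # slow start: double per RTT
    return cwnd + MSS, ssthresh                   # congestion avoidance: +1 MSS

cwnd, ssthresh = 10 * MSS, 64 * 1024
for rtt in range(20):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss=(rtt == 12))
\end{verbatim}
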
The \emph{bandwidth-delay product} (BDP) determines how large the
window must be to fully utilize a link. It is the product of the
link's bandwidth and the round-trip time:
\begin{equation}
  \text{BDP} = \text{bandwidth} \times \text{RTT}
  \label{eq:bdp}
\end{equation}
A 1\,Gbps link with a 1\,ms RTT has a BDP of 125\,KB: the sender
must keep at least 125\,KB of unacknowledged data in flight to
saturate the link. If the congestion window is smaller than the BDP,
the sender will finish transmitting its window and then wait idle for
acknowledgements, leaving bandwidth unused. High-latency paths make
this problem worse because the BDP grows linearly with RTT. A
34\,ms RTT on the same 1\,Gbps link raises the BDP to 4.25\,MB, well
beyond the default congestion window of most TCP stacks. One common
workaround is to run multiple TCP flows in parallel: each flow
maintains its own congestion window, and their aggregate in-flight
data can approach the BDP even when no single flow could.

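Spelling out the arithmetic behind the two figures above (dividing by
eight to convert bits to bytes):
\begin{align*}
  \text{BDP}_{1\,\text{ms}}  &= 10^{9}\,\text{bit/s} \times 0.001\,\text{s}
    = 10^{6}\,\text{bit} = 125\,\text{KB},\\
  \text{BDP}_{34\,\text{ms}} &= 10^{9}\,\text{bit/s} \times 0.034\,\text{s}
    = 3.4 \times 10^{7}\,\text{bit} = 4.25\,\text{MB}.
\end{align*}
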
In VPN benchmarks these two windows appear as distinct bottlenecks. A
small receive window means the receiver (or the tunnel endpoint in
front of it) cannot absorb data fast enough. A small congestion
window means the path between sender and receiver is experiencing
loss, forcing TCP into repeated recovery cycles. Comparing congestion
windows across VPNs with different maximum segment sizes requires
care, because the window is measured in bytes: a VPN with jumbo
segments will report a larger byte-valued window for the same number
of in-flight segments.

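One way to make such comparisons is to normalize the byte-valued
window to segments before comparing; a sketch with made-up numbers:

\begin{verbatim}
def cwnd_segments(cwnd_bytes, mss):
    return cwnd_bytes // mss

cwnd_segments(425_000, 1400)  # -> 303 segments (1400-byte MSS)
cwnd_segments(425_000, 8900)  # -> 47 segments (jumbo-MSS tunnel)
\end{verbatim}
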
\subsection{An Overview of Packet Reordering in TCP}
TODO \cite{leung_overview_2007}