\chapter{Background} % Main chapter title
\label{Background}
\subsection{Nix: A Safe and Policy-Free System for Software Deployment}
Nix addresses significant issues in software deployment by using
cryptographic hashes to give every component instance a unique path in
the Nix store \cite{dolstra_nix_2004}. Features such as concurrent
installation of multiple versions, atomic upgrades, and safe garbage
collection make Nix a flexible deployment system. This work uses Nix
to ensure that all VPN builds and system configurations are
deterministic.
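
The sketch below illustrates the idea behind hash-based store paths
(illustrative only: real Nix hashes a serialized build recipe together
with its transitive dependencies, and encodes the digest differently):

\begin{verbatim}
import hashlib

def store_path(name, version, inputs):
    """Derive an input-addressed path for a component.

    Sketch: real Nix hashes the full derivation (builder,
    sources, dependencies) and uses a different encoding.
    """
    h = hashlib.sha256()
    h.update(f"{name}-{version}".encode())
    for dep in sorted(inputs):  # order-independent digest
        h.update(dep.encode())
    return f"/nix/store/{h.hexdigest()[:32]}-{name}-{version}"

# Builds against different dependencies get distinct paths,
# so both can be installed concurrently:
print(store_path("openssl", "3.0.7", ["glibc-2.36"]))
print(store_path("openssl", "3.0.7", ["glibc-2.37"]))
\end{verbatim}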
\subsection{NixOS: A Purely Functional Linux Distribution}
NixOS extends Nix principles to Linux system configuration
\cite{dolstra_nixos_2008}. System configurations are reproducible and
isolated from the stateful interactions typical of imperative package
management. This property is essential for ensuring identical test
environments across benchmark runs.
\subsection{UDP NAT and Firewall Puncturing in the Wild}
Halkes and Pouwelse~\cite{halkes_udp_2011} measure UDP hole punching
efficacy on a live P2P network using the Tribler BitTorrent client.
Their study finds that 79\% of peers are unreachable due to NAT or
firewall restrictions, yet 64\% reside behind configurations amenable
to hole punching. Among compatible peers, over 80\% of puncturing
attempts succeed, establishing hole punching as a practical NAT
traversal technique. Their timeout measurements further indicate that
keep-alive messages must be sent at least every 55 seconds to maintain
open NAT mappings.
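
As a concrete illustration, the sketch below (with a hypothetical peer
address; a real implementation would learn the peer's public endpoint
from a rendezvous server) punches a hole by sending UDP datagrams
outward and then refreshes the resulting NAT mapping well inside the
55-second bound:

\begin{verbatim}
import socket
import time

# Hypothetical public endpoint of the peer, learned out of band
# (e.g. via a rendezvous server).
PEER = ("203.0.113.7", 51820)
KEEPALIVE_INTERVAL = 25  # seconds; well under the 55 s bound

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 51820))

# The first outbound datagram creates the NAT mapping; once both
# peers have sent, inbound traffic is accepted too. Subsequent
# datagrams refresh the mapping before it expires.
while True:
    sock.sendto(b"keepalive", PEER)
    time.sleep(KEEPALIVE_INTERVAL)
\end{verbatim}
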
These findings directly inform the evaluation criteria for this thesis.
All mesh VPNs tested rely on UDP hole punching for NAT traversal;
the 80\% success rate sets a baseline expectation, while the 55-second
timeout informs analysis of each implementation's keep-alive behavior
during source code review.
\subsection{The Babel routing protocol}
\label{sec:babel}
Babel~\cite{chroboczek_babel_2021} is a distance-vector routing
protocol designed for both wired and wireless mesh networks. Each
node periodically sends \emph{Hello} messages to discover neighbours
and \emph{Update} messages to advertise reachable prefixes along with
a numeric cost metric. A node selects the route with the lowest
cumulative metric for each destination, subject to a
\emph{feasibility condition} that prevents routing loops. Because
Babel is distance-vector rather than link-state, nodes only know the
cost of their own best path, not the full topology.
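
A much-simplified sketch of this selection logic (ignoring sequence
numbers, timers, and metric computation, all of which the protocol
specifies in detail) shows how the feasibility condition discards
advertisements that could loop back through the selecting node:

\begin{verbatim}
def select_route(updates, feasibility_distance):
    """Pick the best feasible route to one destination.

    updates: list of (neighbour, advertised_metric, link_cost)
    feasibility_distance: the smallest metric this node has
        ever advertised for the destination (simplified; real
        Babel also compares sequence numbers).
    """
    best = None
    for neighbour, advertised, link_cost in updates:
        # Feasibility condition: only routes advertised with a
        # metric strictly below our feasibility distance are
        # safe; anything else might pass back through us.
        if advertised >= feasibility_distance:
            continue
        metric = advertised + link_cost
        if best is None or metric < best[1]:
            best = (neighbour, metric)
    return best

# Neighbour B's route has the lower total metric (266 vs. 300)
# but is rejected because its advertised metric 256 is not
# strictly below the feasibility distance 256:
print(select_route([("A", 200, 100), ("B", 256, 10)], 256))
# -> ('A', 300)
\end{verbatim}
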
Two properties of Babel matter for the benchmarks in
Chapter~\ref{Results}. First, route advertisements are periodic: a
node will not learn about a new path until the next Update interval,
which can be on the order of minutes depending on the implementation's
timer settings. Second, Babel intentionally resists frequent route
changes to avoid flapping; a node may continue using a suboptimal path
until a significantly better alternative is advertised. Both
properties can cause the selected route for a given destination to
differ across consecutive benchmark runs, even when the physical
topology has not changed.
\subsection{TCP flow control and congestion control}
\label{sec:tcp_windows}
TCP uses two window mechanisms to regulate how much unacknowledged data
a sender may have in flight. The \emph{receive window}
(\texttt{rwnd}), also called the \emph{send window} in
\texttt{iperf3} output, is advertised by the receiver and reflects how
much buffer space it has available. The \emph{congestion window}
(\texttt{cwnd}) is maintained locally by the sender and tracks the
network's estimated capacity. At any point, the sender may transmit
up to $\min(\texttt{rwnd}, \texttt{cwnd})$ bytes beyond the last
acknowledged byte \cite{rfc5681}.

The congestion window starts small (typically a few segments) and
grows during the \emph{slow-start} phase, doubling each round trip
until it reaches a threshold or triggers a loss event. After that,
\emph{congestion avoidance} takes over and the window grows linearly.
When the sender detects a loss (through duplicate ACKs or a
retransmission timeout), it treats the loss as a signal of congestion:
the window is reduced, often halved, and the sender enters a recovery
phase before resuming growth. Each retransmission therefore has a
direct mechanical cost: it shrinks the congestion window and reduces
the instantaneous sending rate.
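
The idealized per-round-trip sketch below traces these mechanics
(simplified: real stacks count bytes rather than segments and
implement fast recovery in more detail), with the send limit taken as
$\min(\texttt{rwnd}, \texttt{cwnd})$:

\begin{verbatim}
def simulate(rounds, ssthresh, rwnd, loss_rounds):
    """Idealized congestion window trace, in segments."""
    cwnd = 2  # initial window: a few segments
    for rtt in range(rounds):
        limit = min(rwnd, cwnd)  # what the sender may have in flight
        print(f"RTT {rtt:2d}: cwnd={cwnd:3d} limit={limit:3d}")
        if rtt in loss_rounds:
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh      # multiplicative decrease on loss
        elif cwnd < ssthresh:
            cwnd *= 2            # slow start: double per RTT
        else:
            cwnd += 1            # congestion avoidance: +1 per RTT

# Exponential growth up to ssthresh=32, linear growth after,
# then halving when a loss is detected in round 8:
simulate(rounds=12, ssthresh=32, rwnd=64, loss_rounds={8})
\end{verbatim}
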
The \emph{bandwidth-delay product} (BDP) determines how large the
window must be to fully utilize a link. It is the product of the
link's bandwidth and the round-trip time:
\begin{equation}
\text{BDP} = \text{bandwidth} \times \text{RTT}
\label{eq:bdp}
\end{equation}
A 1\,Gbps link with a 1\,ms RTT has a BDP of 125\,KB: the sender
must keep at least 125\,KB of unacknowledged data in flight to
saturate the link. If the congestion window is smaller than the BDP,
the sender will finish transmitting its window and then wait idle for
acknowledgements, leaving bandwidth unused. High-latency paths make
this problem worse because the BDP grows linearly with RTT. A
34\,ms RTT on the same 1\,Gbps link raises the BDP to 4.25\,MB, well
beyond the default congestion window of most TCP stacks. One common
workaround is to run multiple TCP flows in parallel: each flow
maintains its own congestion window, and their aggregate in-flight
data can approach the BDP even when no single flow could.
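
The sketch below reproduces this arithmetic (the per-flow window
ceiling in the last step is a hypothetical figure chosen for
illustration):

\begin{verbatim}
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product in bytes (bandwidth x RTT)."""
    return bandwidth_bps / 8 * rtt_seconds

GBPS = 1e9
print(bdp_bytes(GBPS, 0.001))  # 1 ms RTT  -> 125000.0   (125 KB)
print(bdp_bytes(GBPS, 0.034))  # 34 ms RTT -> 4250000.0  (4.25 MB)

# Parallel flows: each keeps its own congestion window, so their
# aggregate in-flight data can reach the BDP even when no single
# flow can. With a hypothetical 1.0625 MB per-flow ceiling:
per_flow_ceiling = 1_062_500
print(bdp_bytes(GBPS, 0.034) / per_flow_ceiling)  # -> 4.0 flows
\end{verbatim}
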
In VPN benchmarks these two windows appear as distinct bottlenecks. A
small receive window means the receiver (or the tunnel endpoint in
front of it) cannot absorb data fast enough. A small congestion
window means the path between sender and receiver is experiencing
loss, forcing TCP into repeated recovery cycles. Comparing congestion
windows across VPNs with different maximum segment sizes requires
care, because the window is measured in bytes: a VPN with jumbo
segments will report a larger byte-valued window for the same number
of in-flight segments.
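
One way to make such comparisons fair is to normalize the byte-valued
window by each path's maximum segment size, as in the sketch below
(the MSS values are hypothetical):

\begin{verbatim}
def cwnd_in_segments(cwnd_bytes, mss_bytes):
    """Normalize a byte-valued congestion window to segments."""
    return cwnd_bytes // mss_bytes

# The same 100 in-flight segments look very different in bytes:
for mss in (1380, 8900):  # hypothetical small vs. jumbo MSS
    cwnd_bytes = 100 * mss
    print(mss, cwnd_bytes, cwnd_in_segments(cwnd_bytes, mss))
\end{verbatim}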
\subsection{An Overview of Packet Reordering in TCP}
TODO \cite{leung_overview_2007}
\subsection{Performance Evaluation of TCP over QUIC Tunnels}
TODO \cite{guo_implementation_2025}
|