Improved writing: made it less verbose
@@ -10,8 +10,9 @@ follows the impairment profiles from ideal to degraded:
Section~\ref{sec:baseline} establishes overhead under ideal
conditions, then subsequent sections examine how each VPN responds to
increasing network impairment. The chapter concludes with findings
from the source code analysis. A recurring theme is that no single
metric captures VPN performance; the rankings shift
depending on whether one measures throughput, latency, retransmit
behavior, or real-world application performance.

@@ -184,7 +185,7 @@ opposite extreme: brute-force retransmission can still yield high
throughput (814\,Mbps with 1\,163 retransmits), at the cost of wasted
bandwidth and unstable flow behavior.

VpnCloud stands out: its sender reports 538.8\,Mbps
but the receiver measures only 413.4\,Mbps, leaving a 23\,\% gap (the largest
in the dataset). This suggests significant in-tunnel packet loss or
buffering at the VpnCloud layer that the retransmit count (857)
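The 23\,\% figure follows directly from the two reported rates (a quick derivation, not part of the original measurements):
\[
\frac{538.8 - 413.4}{538.8} \approx 0.233,
\]
i.e.\ roughly 23\,\% of the sender-side throughput never registers at the receiver's measurement point.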
@@ -256,10 +257,10 @@ times, which cluster into three distinct ranges.
\end{table}

Six VPNs stay below 1.3\,ms, comfortably close to the bare-metal
0.60\,ms. VpnCloud posts the lowest latency of any VPN (1.13\,ms), below
WireGuard (1.20\,ms), yet its throughput tops out at only 539\,Mbps.
Low per-packet latency does not guarantee high bulk throughput. A
second group (Headscale,
Hyprspace, Yggdrasil) lands in the 1.5--2.2\,ms range, representing
moderate overhead. Then there is Mycelium at 34.9\,ms, so far
removed from the rest that Section~\ref{sec:mycelium_routing} gives
@@ -289,8 +290,8 @@ the CPU, not the network, is the bottleneck.
Figure~\ref{fig:latency_throughput} makes this disconnect easy to
spot.

The qperf measurements also reveal a wide spread in CPU usage.
Hyprspace (55.1\,\%) and Yggdrasil
(52.8\,\%) consume 5--6$\times$ as much CPU as Internal's
9.7\,\%. WireGuard sits at 30.8\,\%, surprisingly high for a
kernel-level implementation, though much of that goes to
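The 5--6$\times$ figure is simply the ratio of the reported utilizations (a restatement of the numbers above, not a new measurement):
\[
\frac{55.1}{9.7} \approx 5.7, \qquad \frac{52.8}{9.7} \approx 5.4.
\]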
@@ -318,20 +319,21 @@ The single-stream benchmark tests one link direction at a time. The
parallel benchmark changes this setup: all three link directions
(lom$\rightarrow$yuki, yuki$\rightarrow$luna,
luna$\rightarrow$lom) run simultaneously in a circular pattern for
60~seconds, each carrying one bidirectional TCP stream (six
unidirectional flows in total). Because three independent
link pairs now compete for shared tunnel resources at once, the
aggregate throughput is naturally higher than any single direction
alone, which is why even Internal reaches 1.50$\times$ its
single-stream figure. The scaling factor (parallel throughput
divided by single-stream throughput) captures two effects:
the benefit of using multiple link pairs in parallel, and how
well the VPN handles the resulting contention.
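Written out, the scaling factor is the ratio of the two measured aggregate throughputs (the example $T$ values here are illustrative, not from the dataset):
\[
S = \frac{T_{\text{parallel}}}{T_{\text{single}}}, \qquad
\text{e.g.}\quad S = \frac{900\,\text{Mbps}}{600\,\text{Mbps}} = 1.50\times.
\]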
Table~\ref{tab:parallel_scaling} lists the results.

\begin{table}[H]
\centering
\caption{Parallel TCP scaling at baseline. Scaling factor is the
ratio of parallel to single-stream throughput. Internal's
1.50$\times$ represents the expected scaling on this hardware.}
\label{tab:parallel_scaling}
\begin{tabular}{lrrr}
@@ -357,7 +359,7 @@ Table~\ref{tab:parallel_scaling} lists the results.
The VPNs that gain the most are those most constrained in
single-stream mode. Mycelium's 34.9\,ms RTT means a lone TCP stream
can never fill the pipe: the bandwidth-delay product demands a window
larger than any single flow maintains, so multiple concurrent flows
compensate for that constraint and push throughput to 2.20$\times$
the single-stream figure. Hyprspace scales almost as well
(2.18$\times$) but for a
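The window-size argument can be made concrete with the bandwidth-delay product (the 500\,Mbps target rate is illustrative, chosen only to show the order of magnitude):
\[
\text{BDP} = \text{rate} \times \text{RTT}
= 500\,\text{Mbps} \times 34.9\,\text{ms} \approx 2.2\,\text{MB},
\]
so a lone flow would need roughly 2.2\,MB in flight to saturate the path at that rate, while $n$ parallel flows each need only $\text{BDP}/n$.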
@@ -379,8 +381,8 @@ streams: throughput drops from 706\,Mbps to 648\,Mbps
streams are clearly fighting each other for resources inside the
tunnel.

More streams also amplify existing retransmit problems. Hyprspace
climbs from 4\,965 to 17\,426~retransmits;
VpnCloud from 857 to 6\,023. VPNs that were clean in single-stream
mode stay clean under load, while the stressed ones only get worse.

@@ -702,8 +704,8 @@ propagate.
\label{sec:pathological}

Three VPNs exhibit behaviors that the aggregate numbers alone cannot
explain. The following subsections piece together observations from
earlier benchmarks into per-VPN diagnoses.

\paragraph{Hyprspace: Buffer Bloat.}
\label{sec:hyprspace_bloat}
