@@ -37,6 +37,83 @@ the 80\% success rate sets a baseline expectation, while the 55-second
timeout informs analysis of each implementation's keep-alive behavior
during source code review.

\subsection{The Babel routing protocol}
\label{sec:babel}

Babel~\cite{chroboczek_babel_2021} is a distance-vector routing
protocol designed for both wired and wireless mesh networks. Each
node periodically sends \emph{Hello} messages to discover neighbours
and \emph{Update} messages to advertise reachable prefixes along with
a numeric cost metric. A node selects the route with the lowest
cumulative metric for each destination, subject to a
\emph{feasibility condition} that prevents routing loops. Because
Babel is distance-vector rather than link-state, nodes only know the
cost of their own best path, not the full topology.

Two properties of Babel matter for the benchmarks in
Chapter~\ref{Results}. First, route advertisements are periodic: a
node will not learn about a new path until the next Update interval,
which can be on the order of minutes depending on the implementation's
timer settings. Second, Babel intentionally resists frequent route
changes to avoid flapping; a node may continue using a suboptimal path
until a significantly better alternative is advertised. Both
properties can cause the selected route for a given destination to
differ across consecutive benchmark runs, even when the physical
topology has not changed.

\subsection{TCP flow control and congestion control}
\label{sec:tcp_windows}

TCP uses two window mechanisms to regulate how much unacknowledged data
a sender may have in flight. The \emph{receive window}
(\texttt{rwnd}), also called the \emph{send window} in
\texttt{iperf3} output, is advertised by the receiver and reflects how
much buffer space it has available. The \emph{congestion window}
(\texttt{cwnd}) is maintained locally by the sender and tracks the
network's estimated capacity. At any point, the sender may transmit
up to $\min(\texttt{rwnd}, \texttt{cwnd})$ bytes beyond the last
acknowledged byte~\cite{rfc5681}.

The congestion window starts small (typically a few segments) and
grows during the \emph{slow-start} phase, doubling each round trip
until it reaches a threshold or triggers a loss event. After that,
\emph{congestion avoidance} takes over and the window grows linearly.
When the sender detects a loss (through duplicate ACKs or a
retransmission timeout), it treats the loss as a signal of congestion:
the window is reduced, often halved, and the sender enters a recovery
phase before resuming growth. Each retransmission therefore has a
direct mechanical cost: it shrinks the congestion window and reduces
the instantaneous sending rate.

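The slow-start and congestion-avoidance cycle described above can be sketched as a toy simulation. This is a hypothetical illustration in units of whole segments, not code from any benchmarked TCP stack; the function name and the per-round-trip granularity are simplifications:

```rust
/// Toy model of congestion-window evolution, one entry per round trip.
/// `rounds[i]` is true when a loss was detected in that round trip.
/// Returns the window size (in segments) after the last round.
fn simulate_cwnd(rounds: &[bool], ssthresh_init: u32) -> u32 {
    let mut cwnd: u32 = 1;            // slow start begins with a small window
    let mut ssthresh = ssthresh_init; // slow-start threshold
    for &loss in rounds {
        if loss {
            // Loss signals congestion: halve the window and restart growth
            // from the reduced value (congestion avoidance from here on).
            ssthresh = (cwnd / 2).max(1);
            cwnd = ssthresh;
        } else if cwnd < ssthresh {
            cwnd *= 2; // slow start: exponential growth per round trip
        } else {
            cwnd += 1; // congestion avoidance: linear growth
        }
    }
    cwnd
}
```

Three loss-free round trips from a window of one segment yield a window of eight; a loss in the third round instead halves the four-segment window back to two, which is the "direct mechanical cost" of a retransmission in miniature.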
The \emph{bandwidth-delay product} (BDP) determines how large the
window must be to fully utilize a link. It is the product of the
link's bandwidth and the round-trip time:
\begin{equation}
  \text{BDP} = \text{bandwidth} \times \text{RTT}
  \label{eq:bdp}
\end{equation}
A 1\,Gbps link with a 1\,ms RTT has a BDP of 125\,KB: the sender
must keep at least 125\,KB of unacknowledged data in flight to
saturate the link. If the congestion window is smaller than the BDP,
the sender will finish transmitting its window and then wait idle for
acknowledgements, leaving bandwidth unused. High-latency paths make
this problem worse because the BDP grows linearly with RTT. A
34\,ms RTT on the same 1\,Gbps link raises the BDP to 4.25\,MB, well
beyond the default congestion window of most TCP stacks. One common
workaround is to run multiple TCP flows in parallel: each flow
maintains its own congestion window, and their aggregate in-flight
data can approach the BDP even when no single flow could.

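As a sanity check on the figures above, the BDP arithmetic can be expressed directly. This is a throwaway helper, not part of the benchmark tooling; the unit conventions (bits per second in, bytes out) are the only assumption:

```rust
/// Bandwidth-delay product in bytes: bandwidth (bits/s) times RTT (s),
/// divided by 8 to convert bits to bytes.
fn bdp_bytes(bandwidth_bps: f64, rtt_seconds: f64) -> f64 {
    bandwidth_bps * rtt_seconds / 8.0
}
```

For the two cases in the text: `bdp_bytes(1e9, 0.001)` gives 125\,000 bytes (125\,KB), and `bdp_bytes(1e9, 0.034)` gives 4\,250\,000 bytes (4.25\,MB).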
In VPN benchmarks these two windows appear as distinct bottlenecks. A
small receive window means the receiver (or the tunnel endpoint in
front of it) cannot absorb data fast enough. A small congestion
window means the path between sender and receiver is experiencing
loss, forcing TCP into repeated recovery cycles. Comparing congestion
windows across VPNs with different maximum segment sizes requires
care, because the window is measured in bytes: a VPN with jumbo
segments will report a larger byte-valued window for the same number
of in-flight segments.

\subsection{An Overview of Packet Reordering in TCP}
TODO \cite{leung_overview_2007}

@@ -0,0 +1,29 @@
fn find_best_route<'a>(&self, routes: &'a RouteList)
    -> Option<&'a RouteEntry>
{
    let source_table = self.source_table.read().unwrap();
    let current = routes.selected();
    let best = routes
        .iter()
        .filter(|re| !re.metric().is_infinite()
            && source_table.route_feasible(re))
        .min_by_key(|re|
            re.metric() + Metric::from(re.neighbour().link_cost()));

    if let (Some(best), Some(current)) = (best, current) {
        // Only switch if the metric is significantly better
        // OR if the route is directly connected (metric 0).
        if (best.source() != current.source()
            || best.neighbour() != current.neighbour())
            && !(best.metric()
                + Metric::from(best.neighbour().link_cost())
                < current.metric()
                    + Metric::from(current.neighbour().link_cost())
                    - SIGNIFICANT_METRIC_IMPROVEMENT
                || best.metric().is_direct())
        {
            return Some(current); // keep existing route
        }
    }
    best
}

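Stripped of the route-entry plumbing, the switching condition above reduces to a single comparison. This is a simplified sketch with plain integer metrics; the names are illustrative and the direct-route and source-change cases are omitted:

```rust
/// Hysteresis threshold: a candidate route must beat the current one
/// by more than this margin before we switch (value mirrors the
/// SIGNIFICANT_METRIC_IMPROVEMENT constant shown in the listing above).
const IMPROVEMENT_THRESHOLD: u32 = 10;

/// Switch only when the candidate's total metric (route metric plus
/// link cost to the advertising neighbour) is significantly lower
/// than the current route's total metric.
fn should_switch(best_total_metric: u32, current_total_metric: u32) -> bool {
    best_total_metric + IMPROVEMENT_THRESHOLD < current_total_metric
}
```

This is the flap-damping behaviour described in Section~\ref{sec:babel}: a route that is only marginally better (say, metric 95 against 100) is ignored, while a clearly better one (85 against 100) triggers a switch.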
@@ -0,0 +1,9 @@
/// Time between HELLO messages, in seconds
const HELLO_INTERVAL: u64 = 20;
/// Max time used in UPDATE packets.
const UPDATE_INTERVAL: Duration =
    Duration::from_secs(HELLO_INTERVAL * 3 * 5); // 300 s

/// The amount a metric of a route needs to improve
/// before we will consider switching to it.
const SIGNIFICANT_METRIC_IMPROVEMENT: Metric = Metric::new(10);

@@ -0,0 +1,39 @@
type FirewallConntrack struct {
	sync.Mutex

	Conns      map[firewall.Packet]*conn
	TimerWheel *TimerWheel[firewall.Packet]
}

func (f *Firewall) inConns(
	fp firewall.Packet, h *HostInfo,
	caPool *cert.CAPool,
	localCache firewall.ConntrackCache,
) bool {
	if localCache != nil {
		if _, ok := localCache[fp]; ok {
			return true
		}
	}
	conntrack := f.Conntrack
	conntrack.Lock()

	// Purge every time we test
	ep, has := conntrack.TimerWheel.Purge()
	if has {
		f.evict(ep)
	}

	c, ok := conntrack.Conns[fp]
	if !ok {
		conntrack.Unlock()
		return false
	}
	// ... update expiry ...
	conntrack.Unlock()

	if localCache != nil {
		localCache[fp] = struct{}{}
	}
	return true
}

@@ -98,6 +98,16 @@
	morestring=[b]",
	sensitive=true,
}
\lstdefinelanguage{Rust}{
	morekeywords={as,break,const,continue,crate,else,enum,extern,false,fn,for,
		if,impl,in,let,loop,match,mod,move,mut,pub,ref,return,self,Self,static,
		struct,super,trait,true,type,unsafe,use,where,while,async,await,dyn,
		Some,None,Option,Result,Ok,Err,Duration},
	morecomment=[l]{//},
	morecomment=[s]{/*}{*/},
	morestring=[b]",
	sensitive=true,
}
\lstdefinelanguage{Go}{
	morekeywords={break,case,chan,const,continue,default,defer,else,fallthrough,
		for,func,go,goto,if,import,interface,map,package,range,return,select,

@@ -617,3 +617,25 @@
	PDF:/home/lhebendanz/Zotero/storage/KM9D625Y/Whitner et al. - 2008
	- Improved Packet Reordering Metrics.pdf:application/pdf},
}

@misc{rfc5681,
  title        = {TCP Congestion Control},
  author       = {Allman, Mark and Paxson, Vern and Blanton, Ethan},
  year         = {2009},
  month        = sep,
  howpublished = {RFC 5681},
  doi          = {10.17487/RFC5681},
  url          = {https://www.rfc-editor.org/rfc/rfc5681},
  note         = {Obsoletes RFC 2581},
}

@misc{chroboczek_babel_2021,
  title        = {The {Babel} Routing Protocol},
  author       = {Chroboczek, Juliusz and Schinazi, David},
  year         = {2021},
  month        = jan,
  howpublished = {RFC 8966},
  doi          = {10.17487/RFC8966},
  url          = {https://www.rfc-editor.org/rfc/rfc8966},
  note         = {Obsoletes RFC 6126},
}