Add Results.tex for baseline profile
@@ -8,9 +8,9 @@ This chapter describes the methodology used to benchmark and analyze
 peer-to-peer mesh VPN implementations. The evaluation combines
 performance benchmarking under controlled network conditions with a
 structured source code analysis of each implementation. The
-benchmarking framework prioritizes reproducibility at every layer;
+benchmarking framework prioritizes reproducibility at every layer,
 from pinned dependencies and declarative system configuration to
-automated test orchestration; enabling independent verification of
+automated test orchestration, enabling independent verification of
 results and facilitating future comparative studies.
 
 \section{Experimental Setup}
@@ -30,7 +30,7 @@ identical specifications:
 \end{itemize}
 
 The presence of hardware cryptographic acceleration is relevant because
-many VPN implementations leverage AES-NI for encryption, and the results
+many VPN implementations use AES-NI for encryption, and the results
 may differ on systems without these features.
 
 \subsection{Network Topology}
@@ -114,10 +114,10 @@ Table~\ref{tab:benchmark_suite} summarises each benchmark.
 \end{tabular}
 \end{table}
 
-The first four benchmarks use well-known network testing tools.
-The remaining three target workloads that are closer to real-world
-usage. The subsections below describe the configuration details
-that the table does not capture.
+The first four benchmarks use well-known network testing tools;
+the remaining three target workloads closer to real-world usage.
+The subsections below describe configuration details that the table
+does not capture.
 
 \subsection{Ping}
 
@@ -320,7 +320,7 @@ Each metric is summarized as a statistics dictionary containing:
 \begin{itemize}
 \bitem{min / max:} Extreme values observed
 \bitem{average:} Arithmetic mean across samples
-\bitem{p25 / p50 / p75:} Quartiles via pythons
+\bitem{p25 / p50 / p75:} Quartiles via Python's
 \texttt{statistics.quantiles()} method
 \end{itemize}
 
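The statistics dictionary this hunk documents can be sketched in Python. The function and key names below are assumptions chosen to mirror the bullet list; the source only confirms that quartiles come from `statistics.quantiles()`, and it does not specify which interpolation method is used (the library default is shown here):

```python
import statistics

def summarize(samples):
    """Collapse a list of metric samples into a statistics dictionary
    with the fields described above: min/max, average, and quartiles."""
    # With n=4, statistics.quantiles() returns the three cut points
    # [p25, p50, p75]. The default method ("exclusive") is assumed,
    # since the text does not name one.
    p25, p50, p75 = statistics.quantiles(samples, n=4)
    return {
        "min": min(samples),
        "max": max(samples),
        "average": statistics.mean(samples),
        "p25": p25,
        "p50": p50,
        "p75": p75,
    }

# Hypothetical latency samples in milliseconds
print(summarize([1.2, 1.4, 1.3, 2.1, 1.8, 1.5, 1.9, 1.6]))
```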
@@ -351,7 +351,7 @@ hyperfine's built-in statistical output.
 \section{Source Code Analysis}
 
 To complement the performance benchmarks with architectural
-understanding, a structured source code analysis was conducted for
+understanding, we conducted a structured source code analysis of
 all ten VPN implementations. The analysis followed three phases.
 
 \subsection{Repository Collection and LLM-Assisted Overview}
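The hunk header above references hyperfine's built-in statistical output, which the tool can also emit as JSON via its `--export-json` flag. A small sketch of consuming such an export; the command string and all numeric values are invented for illustration, and the field names follow hyperfine's JSON export format:

```python
import json

# Illustrative stand-in for a file produced by
# `hyperfine --export-json results.json '<command>'`.
# Values are made up; keys mirror hyperfine's export schema.
raw = """
{
  "results": [
    {
      "command": "curl -s http://10.0.0.2/file",
      "mean": 0.912,
      "min": 0.874,
      "max": 0.991,
      "times": [0.874, 0.905, 0.991, 0.878]
    }
  ]
}
"""

result = json.loads(raw)["results"][0]
print(f"{result['command']}: mean {result['mean']:.3f}s "
      f"(range {result['min']:.3f}-{result['max']:.3f}s, "
      f"{len(result['times'])} runs)")
```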
@@ -377,9 +377,8 @@ aspects:
 \item Resilience / Central Point of Failure
 \end{itemize}
 
-Every claim in the generated overview was required to reference the
-specific file and line range in the repository that supports it,
-enabling direct verification.
+Each agent was required to reference the specific file and line
+range supporting every claim, enabling direct verification.
 
 \subsection{Manual Verification}
 
@@ -392,19 +391,19 @@ automated summaries remained superficial.
 \subsection{Feature Matrix and Maintainer Review}
 
 The findings from both the automated and manual analysis were
-consolidated into a comprehensive feature matrix cataloguing 131
-features across all ten VPN implementations. The matrix covers
+consolidated into a feature matrix cataloguing 131 features across
+all ten VPN implementations. The matrix covers
 protocol characteristics, cryptographic primitives, NAT traversal
 strategies, routing behavior, and security properties.
 
 The completed feature matrix was published and sent to the respective
-VPN maintainers for review. Maintainer feedback was incorporated as
-corrections and clarifications, improving the accuracy of the final
-classification.
+VPN maintainers for review. We incorporated their feedback as
+corrections and clarifications to the final classification.
 
 \section{Reproducibility}
 
-Reproducibility is ensured at every layer of the experimental stack.
+The experimental stack pins or declares every variable that could
+affect results.
 
 \subsection{Dependency Pinning}
 
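The feature matrix described in this hunk is, structurally, a mapping from feature to per-implementation values. A minimal sketch of that shape; every feature name, implementation name, and value below is a placeholder, not a finding from the thesis:

```python
# Placeholder feature matrix: rows are features, columns are VPN
# implementations, entries are booleans or short labels (e.g. which
# transport cipher an implementation uses). None of these values are
# taken from the actual 131-feature matrix.
feature_matrix = {
    "UDP hole punching": {"VpnA": True, "VpnB": False},
    "Transport cipher": {"VpnA": "ChaCha20-Poly1305", "VpnB": "AES-256-GCM"},
}

def compare(feature):
    """Return each implementation's value for one feature."""
    return feature_matrix[feature]

print(compare("UDP hole punching"))
```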
@@ -524,7 +523,7 @@ VPNs were selected based on:
 \bitem{Decentralization:} Preference for solutions without mandatory
 central servers, though coordinated-mesh VPNs were included for comparison.
 \bitem{Active development:} Only VPNs with recent commits and
-maintained releases were considered (with the exception of VPN Cloud).
+maintained releases were considered (with the exception of VpnCloud).
 \bitem{Linux support:} All VPNs must run on Linux.
 \end{itemize}