Add Results.tex for baseline profile

.gitignore
@@ -24,7 +24,11 @@ result
# SyncTeX files
*.synctex.gz
*.synctex(busy)
openspec
.claude
**/node_modules
**/dist
log.txt
# PDF files
*.pdf

@@ -1,603 +0,0 @@
% Chapter 1

\chapter{Chapter Title Here} % Main chapter title

\label{Chapter1} % For referencing the chapter elsewhere, use \ref{Chapter1}

%----------------------------------------------------------------------------------------

% Define some commands to keep the formatting separated from the content
\newcommand{\keyword}[1]{\textbf{#1}}
\newcommand{\tabhead}[1]{\textbf{#1}}
\newcommand{\code}[1]{\texttt{#1}}
\newcommand{\file}[1]{\texttt{\bfseries#1}}
\newcommand{\option}[1]{\texttt{\itshape#1}}

%----------------------------------------------------------------------------------------

\section{Welcome and Thank You}
Welcome to this \LaTeX{} Thesis Template, a beautiful and easy-to-use
template for writing a thesis using the \LaTeX{} typesetting system.

If you are writing a thesis (or will be in the future) and its
subject is technical or mathematical (though it doesn't have to be),
then creating it in \LaTeX{} is highly recommended as a way to make
sure you can just get down to the essential writing without having to
worry over formatting or wasting time arguing with your word processor.

\LaTeX{} is easily able to professionally typeset documents that run
to hundreds or thousands of pages long. With simple mark-up commands,
it automatically sets out the table of contents, margins, page
headers and footers and keeps the formatting consistent and
beautiful. One of its main strengths is the way it can easily typeset
mathematics, even \emph{heavy} mathematics. Even if those equations
are the most horribly twisted and most difficult mathematical
problems that can only be solved on a super-computer, you can at
least count on \LaTeX{} to make them look stunning.

%----------------------------------------------------------------------------------------

\section{Learning \LaTeX{}}

\LaTeX{} is not a \textsc{wysiwyg} (What You See is What You Get)
program, unlike word processors such as Microsoft Word or Apple's
Pages. Instead, a document written for \LaTeX{} is actually a simple,
plain text file that contains \emph{no formatting}. You tell \LaTeX{}
how you want the formatting in the finished document by writing
simple commands amongst the text. For example, if I want to use
\emph{italic text for emphasis}, I write the \verb|\emph{text}|
command and put the text I want in italics in between the curly
braces. This means that \LaTeX{} is a \enquote{mark-up} language,
very much like HTML.
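
To make the mark-up idea concrete, a line of source such as this (an
illustrative line, not taken from the template itself):
\begin{verbatim}
This template makes writing \emph{much} easier.
\end{verbatim}
comes out with the word \emph{much} set in italics in the finished PDF.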

\subsection{A (not so short) Introduction to \LaTeX{}}

If you are new to \LaTeX{}, there is a very good eBook -- freely
available online as a PDF file -- called, \enquote{The Not So Short
Introduction to \LaTeX{}}. The book's title is typically shortened to
just \emph{lshort}. You can download the latest version (as it is
occasionally updated) from here:
\url{http://www.ctan.org/tex-archive/info/lshort/english/lshort.pdf}

It is also available in several other languages. Find yours from the
list on this page: \url{http://www.ctan.org/tex-archive/info/lshort/}

It is recommended to take a little time out to learn how to use
\LaTeX{} by creating several small `test' documents, or having a
close look at several templates on:\\
\url{http://www.LaTeXTemplates.com}\\
Making the effort now means you're not stuck learning the system when
what you \emph{really} need to be doing is writing your thesis.

\subsection{A Short Math Guide for \LaTeX{}}

If you are writing a technical or mathematical thesis, then you may
want to read the document by the AMS (American Mathematical Society)
called, \enquote{A Short Math Guide for \LaTeX{}}. It can be found online here:
\url{http://www.ams.org/tex/amslatex.html}
under the \enquote{Additional Documentation} section towards the
bottom of the page.

\subsection{Common \LaTeX{} Math Symbols}
There are a multitude of mathematical symbols available for \LaTeX{}
and it would take a great effort to learn the commands for them all.
The most common ones you are likely to use are shown on this page:
\url{http://www.sunilpatel.co.uk/latex-type/latex-math-symbols/}

You can use this page as a reference or crib sheet; the symbols are
rendered as large, high-quality images so you can quickly find the
\LaTeX{} command for the symbol you need.

\subsection{\LaTeX{} on a Mac}

The \LaTeX{} distribution is available for many systems including
Windows, Linux and Mac OS X. The package for OS X is called MacTeX
and it contains all the applications you need -- bundled together and
pre-customized -- for a fully working \LaTeX{} environment and work flow.

MacTeX includes a custom dedicated \LaTeX{} editor called TeXShop for
writing your `\file{.tex}' files and BibDesk: a program to manage
your references and create your bibliography section just as easily
as managing songs and creating playlists in iTunes.

%----------------------------------------------------------------------------------------

\section{Getting Started with this Template}

If you are familiar with \LaTeX{}, then you should explore the
directory structure of the template and then proceed to place your
own information into the \emph{THESIS INFORMATION} block of the
\file{main.tex} file. You can then modify the rest of this file to
your unique specifications based on your degree/university. Section
\ref{FillingFile} on page \pageref{FillingFile} will help you do
this. Make sure you also read section \ref{ThesisConventions} about
thesis conventions to get the most out of this template.

If you are new to \LaTeX{} it is recommended that you carry on
reading through the rest of the information in this document.

Before you begin using this template you should ensure that its style
complies with the thesis style guidelines imposed by your
institution. In most cases this template style and layout will be
suitable. If it is not, it may only require a small change to bring
the template in line with your institution's recommendations. These
modifications will need to be done in the \file{MastersDoctoralThesis.cls} file.

\subsection{About this Template}

This \LaTeX{} Thesis Template is based on a \LaTeX{} style file
created by Steve R.\ Gunn from the University
of Southampton (UK), department of Electronics and Computer Science.
You can find his original thesis style file at his site, here:
\url{http://www.ecs.soton.ac.uk/~srg/softwaretools/document/templates/}

Steve's \file{ecsthesis.cls} was then taken by Sunil Patel who
modified it by creating a skeleton framework and folder structure to
place the thesis files in. The resulting template can be found on
Sunil's site here:
\url{http://www.sunilpatel.co.uk/thesis-template}

Sunil's template was made available through
\url{http://www.LaTeXTemplates.com} where it was modified many times
based on user requests and questions. Version 2.0 and onwards of this
template represents a major modification to Sunil's template and is,
in fact, hardly recognisable. The work to make version 2.0 possible
was carried out by \href{mailto:vel@latextemplates.com}{Vel} and
Johannes Böttcher.

%----------------------------------------------------------------------------------------

\section{What this Template Includes}

\subsection{Folders}

This template comes as a single zip file that expands out to several
files and folders. The folder names are mostly self-explanatory:

\keyword{Appendices} -- this is the folder where you put the
appendices. Each appendix should go into its own separate \file{.tex}
file. An example and template are included in the directory.

\keyword{Chapters} -- this is the folder where you put the thesis
chapters. A thesis usually has about six chapters, though there is no
hard rule on this. Each chapter should go in its own separate
\file{.tex} file and they can be split as:
\begin{itemize}
\item Chapter 1: Introduction to the thesis topic
\item Chapter 2: Background information and theory
\item Chapter 3: (Laboratory) experimental setup
\item Chapter 4: Details of experiment 1
\item Chapter 5: Details of experiment 2
\item Chapter 6: Discussion of the experimental results
\item Chapter 7: Conclusion and future directions
\end{itemize}
This chapter layout is specialised for the experimental sciences;
your discipline may be different.

\keyword{Figures} -- this folder contains all figures for the thesis.
These are the final images that will go into the thesis document.

\subsection{Files}

Also included are several files, most of which are plain text, so you
can see their contents in a text editor. After initial compilation,
you will see that more auxiliary files are created by \LaTeX{} or
BibTeX, which you don't need to delete or worry about:

\keyword{example.bib} -- this is an important file that contains all
the bibliographic information and references that you will be citing
in the thesis for use with BibTeX. You can write it manually, but
there are reference manager programs available that will create and
manage it for you. Bibliographies in \LaTeX{} are a large subject and
you may need to read about BibTeX before starting with this. Many
modern reference managers will allow you to export your references in
BibTeX format, which greatly reduces the amount of work you have to do.

\keyword{MastersDoctoralThesis.cls} -- this is an important file. It
is the class file that tells \LaTeX{} how to format the thesis.

\keyword{main.pdf} -- this is your beautifully typeset thesis (in the
PDF file format) created by \LaTeX{}. It is supplied in the PDF with
the template and after you compile the template you should get an
identical version.

\keyword{main.tex} -- this is an important file. This is the file
that you tell \LaTeX{} to compile to produce your thesis as a PDF
file. It contains the framework and constructs that tell \LaTeX{} how
to lay out the thesis. It is heavily commented so you can read exactly
what each line of code does and why it is there. After you put your
own information into the \emph{THESIS INFORMATION} block -- you have
now started your thesis!

Files that are \emph{not} included, but are created by \LaTeX{} as
auxiliary files, include:

\keyword{main.aux} -- this is an auxiliary file generated by
\LaTeX{}; if it is deleted, \LaTeX{} simply regenerates it when you
run the main \file{.tex} file.

\keyword{main.bbl} -- this is an auxiliary file generated by BibTeX;
if it is deleted, BibTeX simply regenerates it when you run the
\file{main.aux} file. Whereas the \file{.bib} file contains all the
references you have, this \file{.bbl} file contains the references
you have actually cited in the thesis and is used to build the
bibliography section of the thesis.

\keyword{main.blg} -- this is an auxiliary file generated by BibTeX;
if it is deleted, BibTeX simply regenerates it when you run the main
\file{.aux} file.

\keyword{main.lof} -- this is an auxiliary file generated by
\LaTeX{}; if it is deleted, \LaTeX{} simply regenerates it when you
run the main \file{.tex} file. It tells \LaTeX{} how to build the
\emph{List of Figures} section.

\keyword{main.log} -- this is an auxiliary file generated by
\LaTeX{}; if it is deleted, \LaTeX{} simply regenerates it when you
run the main \file{.tex} file. It contains messages from \LaTeX{}; if
you receive errors and warnings from \LaTeX{}, they will be in this
\file{.log} file.

\keyword{main.lot} -- this is an auxiliary file generated by
\LaTeX{}; if it is deleted, \LaTeX{} simply regenerates it when you
run the main \file{.tex} file. It tells \LaTeX{} how to build the
\emph{List of Tables} section.

\keyword{main.out} -- this is an auxiliary file generated by
\LaTeX{}; if it is deleted, \LaTeX{} simply regenerates it when you
run the main \file{.tex} file.

From this long list, the files with the \file{.bib},
\file{.cls} and \file{.tex} extensions are the most important ones.
The other auxiliary files can be ignored or deleted as \LaTeX{} and
BibTeX will regenerate them.

%----------------------------------------------------------------------------------------

\section{Filling in Your Information in the \file{main.tex}
File}\label{FillingFile}

You will need to personalise the thesis template and make it your own
by filling in your own information. This is done by editing the
\file{main.tex} file in a text editor or your favourite \LaTeX{} environment.

Open the file and scroll down to the third large block titled
\emph{THESIS INFORMATION} where you can see the entries for
\emph{University Name}, \emph{Department Name}, etc \ldots

Fill out the information about yourself, your group and institution.
You can also insert web links; if you do, make sure you use the full
URL, including the \code{http://}. If you don't want these
to be linked, simply remove the \verb|\href{url}{name}| and only leave the name.
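
As a sketch of what such an entry looks like (the exact command names
are defined in the class file and may differ from this example):
\begin{verbatim}
\university{\href{http://www.university.com}{University Name}}
\end{verbatim}
Removing the \verb|\href{...}{...}| wrapper and leaving just the name
gives the same entry without a web link.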

When you have done this, save the file and recompile \code{main.tex}.
All the information you filled in should now be in the PDF, complete
with web links. You can now begin your thesis proper!

%----------------------------------------------------------------------------------------

\section{The \code{main.tex} File Explained}

The \file{main.tex} file contains the structure of the thesis. There
are plenty of written comments that explain what pages, sections and
formatting the \LaTeX{} code is creating. Each major document element
is divided into commented blocks with titles in all capitals to make
it obvious what the following bit of code is doing. Initially there
seems to be a lot of \LaTeX{} code, but this is all formatting, and
it has all been taken care of so you don't have to do it.

Begin by checking that your information on the title page is correct.
For the thesis declaration, your institution may insist on something
different from the text given. If this is the case, just replace what
you see with what is required in the \emph{DECLARATION PAGE} block.

Then comes a page which contains a funny quote. You can put your own,
or quote your favourite scientist, author, person, and so on. Make
sure to put the name of the person who you took the quote from.

Following this is the abstract page which summarises your work in a
condensed way and can almost be used as a standalone document to
describe what you have done. The text you write will cause the
heading to move up so don't worry about running out of space.

Next come the acknowledgements. On this page, write about all the
people who you wish to thank (not forgetting parents, partners and
your advisor/supervisor).

The contents pages, list of figures and tables are all taken care of
for you and do not need to be manually created or edited. The next
set of pages is optional and can be deleted since
they are for a more technical thesis: insert a list of abbreviations
you have used in the thesis, then a list of the physical constants
and numbers you refer to and finally, a list of mathematical symbols
used in any formulae. Making the effort to fill these tables means
the reader has a one-stop place to refer to instead of searching the
internet and references to try and find out what you meant by certain
abbreviations or symbols.

The list of symbols is split into the Roman and Greek alphabets.
Whereas the abbreviations and symbols ought to be listed in
alphabetical order (and this is \emph{not} done automatically for
you), the list of physical constants should be grouped into similar themes.

The next page contains a one-line dedication. Who will you dedicate
your thesis to?

Finally, there is the block where the chapters are included.
Uncomment the lines (delete the \code{\%} character) as you write the
chapters. Each chapter should be written in its own file and put into
the \emph{Chapters} folder and named \file{Chapter1},
\file{Chapter2}, etc\ldots Similarly for the appendices: uncomment
the lines as you need them. Each appendix should go into its own file
and be placed in the \emph{Appendices} folder.

After the preamble, chapters and appendices finally comes the
bibliography. The bibliography style used (called \option{authoryear})
is a fully featured style that will
even include links to where the referenced paper can be found online.
Do not underestimate how grateful your reader will be to find that a
reference to a paper is just a click away. Of course, this relies on
you putting the URL information into the BibTeX file in the first place.

%----------------------------------------------------------------------------------------

\section{Thesis Features and Conventions}\label{ThesisConventions}

To get the best out of this template, there are a few conventions
that you may want to follow.

One of the most important (and most difficult) things to keep track
of in such a long document as a thesis is consistency. Using certain
conventions and ways of doing things (such as using a to-do list)
makes the job easier. Of course, all of these are optional and you
can adopt your own method.

\subsection{Printing Format}

This thesis template is designed for double-sided printing (i.e.
content on the front and back of pages) as most theses are printed
and bound this way. Switching to one-sided printing is as simple as
uncommenting the \option{oneside} option of the \code{documentclass}
command at the top of the \file{main.tex} file. You may then wish to
adjust the margins to suit specifications from your institution.

The headers for the pages contain the page number on the outer side
(so it is easy to flick through to the page you want) and the chapter
name on the inner side.

The text is set to 11 point by default with single line spacing. You
can tune the text size and spacing should you want or need to, using
the options at the very start of \file{main.tex}. The
spacing can be changed by replacing
\option{singlespacing} with \option{onehalfspacing} or \option{doublespacing}.
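
In practice these options all live in the document-class line at the
top of \file{main.tex}; a sketch of what that line looks like (the
exact option list in your copy of the template may differ):
\begin{verbatim}
\documentclass[11pt, oneside, singlespacing]{MastersDoctoralThesis}
\end{verbatim}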

\subsection{Using US Letter Paper}

The paper size used in the template is A4, which is the standard size
in Europe. If you are using this thesis template elsewhere and
particularly in the United States, then you may have to change the A4
paper size to the US Letter size. This can be done in the margins
settings section in \file{main.tex}.

Due to the differences in the paper size, the resulting margins may
be different from what you like or require (as it is common for
institutions to dictate certain margin sizes). If this is the case,
then the margin sizes can be tweaked by modifying the values in the
same block as where you set the paper size. Your document should then
be set up for US Letter paper size with suitable margins.

\subsection{References}

The \code{biblatex} package is used to format the bibliography and
inserts references such as this one \parencite{Reference1}. The
options used in the \file{main.tex} file mean that the in-text
citations of references are formatted with the author(s) listed with
the date of the publication. Multiple references are separated by
semicolons (e.g. \parencite{Reference2, Reference1}) and references
with more than three authors only show the first author with \emph{et
al.} indicating there are more authors (e.g. \parencite{Reference3}).
This is done automatically for you. To see how you use references,
have a look at the \file{Chapter1.tex} source file. Many reference
managers allow you to simply drag the reference into the document as you type.
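
In source form, a citation is just the entry key from
\file{example.bib} passed to a citation command; for example, using
the template's own example keys (the surrounding sentence is
illustrative filler):
\begin{verbatim}
An early result \parencite{Reference1} was later
replicated \parencite{Reference2, Reference1}.
\end{verbatim}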

Scientific references should come \emph{before} the punctuation mark
if there is one (such as a comma or period). The same goes for
footnotes\footnote{Such as this footnote, here down at the bottom of
the page.}. You can change this, but the most important thing is to
keep the convention consistent throughout the thesis. Footnotes
themselves should be full, descriptive sentences (beginning with a
capital letter and ending with a full stop). The APA6 style guide states:
\enquote{Footnote numbers should be superscripted, [...], following
any punctuation mark except a dash.} The Chicago Manual of Style
states: \enquote{A note number should be placed at the end of a
sentence or clause. The number follows any punctuation mark except
the dash, which it precedes. It follows a closing parenthesis.}

The bibliography is typeset with references listed in alphabetical
order by the first author's last name. This is similar to the APA
referencing style. To see how \LaTeX{} typesets the bibliography,
have a look at the very end of this document (or just click on the
reference number links in in-text citations).

\subsubsection{A Note on BibTeX}

The BibTeX backend used in the template by default does not correctly
handle unicode character encoding (i.e. "international" characters).
You may see a warning about this in the compilation log and, if your
references contain unicode characters, they may not show up correctly
or at all. The solution to this is to use the biber backend instead
of the outdated BibTeX backend. This is done by finding
\option{backend=bibtex} in \file{main.tex} and changing it to
\option{backend=biber}. You will then need to delete all auxiliary
BibTeX files and navigate to the template directory in your terminal
(command prompt). Once there, simply type \code{biber main} and biber
will compile your bibliography. You can then compile \file{main.tex}
as normal and your bibliography will be updated. An alternative is to
set up your \LaTeX{} editor to compile with biber instead of BibTeX; see
\href{http://tex.stackexchange.com/questions/154751/biblatex-with-biber-configuring-my-editor-to-avoid-undefined-citations/}{here}
for how to do this for various editors.
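
Concretely, the change is a single word in the \code{biblatex} package
options. Assuming \file{main.tex} loads the package with a line along
these lines (the other options in your copy may differ):
\begin{verbatim}
\usepackage[backend=bibtex,style=authoryear]{biblatex}
% becomes
\usepackage[backend=biber,style=authoryear]{biblatex}
\end{verbatim}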

\subsection{Tables}

Tables are an important way of displaying your results. Below is an
example table which was generated with this code:

{\small
\begin{verbatim}
\begin{table}
\caption{The effects of treatments X and Y on the four groups studied.}
\label{tab:treatments}
\centering
\begin{tabular}{l l l}
\toprule
\tabhead{Groups} & \tabhead{Treatment X} & \tabhead{Treatment Y} \\
\midrule
1 & 0.2 & 0.8\\
2 & 0.17 & 0.7\\
3 & 0.24 & 0.75\\
4 & 0.68 & 0.3\\
\bottomrule
\end{tabular}
\end{table}
\end{verbatim}
}

\begin{table}
\caption{The effects of treatments X and Y on the four groups studied.}
\label{tab:treatments}
\centering
\begin{tabular}{l l l}
\toprule
\tabhead{Groups} & \tabhead{Treatment X} & \tabhead{Treatment Y} \\
\midrule
1 & 0.2 & 0.8\\
2 & 0.17 & 0.7\\
3 & 0.24 & 0.75\\
4 & 0.68 & 0.3\\
\bottomrule
\end{tabular}
\end{table}

You can reference tables with \verb|\ref{<label>}| where the label is
defined within the table environment. See \file{Chapter1.tex} for an
example of the label and citation (e.g. Table~\ref{tab:treatments}).

\subsection{Figures}

There will hopefully be many figures in your thesis (which should be
placed in the \emph{Figures} folder). The way to insert figures into
your thesis is to use a code template like this:
\begin{verbatim}
\begin{figure}
\centering
\includegraphics{Figures/Electron}
\decoRule
\caption[An Electron]{An electron (artist's impression).}
\label{fig:Electron}
\end{figure}
\end{verbatim}
Also look in the source file. Putting this code into the source file
produces the picture of the electron that you can see in the figure below.

\begin{figure}[th]
\centering
\includegraphics{Figures/Electron}
\decoRule
\caption[An Electron]{An electron (artist's impression).}
\label{fig:Electron}
\end{figure}

Figures don't always appear where you write them in the
source. The placement depends on how much space there is on the page
for the figure. Sometimes there is not enough room to fit a figure
directly where it should go (in relation to the text) and so \LaTeX{}
puts it at the top of the next page. Positioning figures is the job
of \LaTeX{} and so you should only worry about making them look good!

Figures usually should have captions in case you need to refer
to them (such as in Figure~\ref{fig:Electron}). The \verb|\caption|
command contains two parts: the first part, inside the square
brackets, is the title that will appear in the \emph{List of Figures},
and so should be short. The second part, in the curly brackets, should
contain the longer and more descriptive caption text.

The \verb|\decoRule| command is optional and simply puts an aesthetic
horizontal line below the image. If you do this for one image, do it
for all of them.

\LaTeX{} is capable of using images in PDF, JPG and PNG format.
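
If an image comes out too large or too small on the page, the standard
\code{graphicx} options can scale it; for example:
\begin{verbatim}
\includegraphics[width=0.7\textwidth]{Figures/Electron}
\end{verbatim}
sets the image to 70\% of the text width while keeping its aspect ratio.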

\subsection{Typesetting Mathematics}

If your thesis is going to contain heavy mathematical content, rest
assured that \LaTeX{} will make it look beautiful, even though it won't
be able to solve the equations for you.

The \enquote{Not So Short Introduction to \LaTeX} (available on
\href{http://www.ctan.org/tex-archive/info/lshort/english/lshort.pdf}{CTAN})
should tell you everything you need to know for most cases of
typesetting mathematics. If you need more information, a much more
thorough mathematical guide is available from the AMS called
\enquote{A Short Math Guide to \LaTeX}, which can be downloaded from:
\url{ftp://ftp.ams.org/pub/tex/doc/amsmath/short-math-guide.pdf}

There are many different \LaTeX{} symbols to remember; luckily you
can find the most common symbols in
\href{http://ctan.org/pkg/comprehensive}{The Comprehensive \LaTeX~Symbol List}.

You can write an equation, which is automatically given an equation
number by \LaTeX{}, like this:
\begin{verbatim}
\begin{equation}
E = mc^{2}
\label{eqn:Einstein}
\end{equation}
\end{verbatim}

This will produce Einstein's famous energy-matter equivalence equation:
\begin{equation}
E = mc^{2}
\label{eqn:Einstein}
\end{equation}

All equations you write (which are not in the middle of paragraph
text) are automatically given equation numbers by \LaTeX{}. If you
don't want a particular equation numbered, use the unnumbered form:
\begin{verbatim}
\[ a^{2}=4 \]
\end{verbatim}
|
|
||||||
|
|
||||||
%----------------------------------------------------------------------------------------

\section{Sectioning and Subsectioning}

You should break your thesis up into nice, bite-sized sections and
subsections. \LaTeX{} automatically builds a Table of Contents by
looking at all the \verb|\chapter{}|, \verb|\section{}| and
\verb|\subsection{}| commands you write in the source.

The Table of Contents should only list the sections to three (3)
levels. A \verb|\chapter{}| is level zero (0), a \verb|\section{}| is
level one (1), and a \verb|\subsection{}| is level two (2). In your
thesis it is likely that you will also use a \verb|\subsubsection{}|,
which is level three (3). The depth to which the Table of Contents is
formatted is set within \file{MastersDoctoralThesis.cls}. If you need
this changed, you can do it in \file{main.tex}.

%----------------------------------------------------------------------------------------

\section{In Closing}

You have reached the end of this mini-guide. You can now rename or
overwrite this pdf file and begin writing your own
\file{Chapter1.tex} and the rest of your thesis. The easy work of
setting up the structure and framework has been taken care of for
you. It's now your job to fill it out!

Good luck and have lots of fun!

\begin{flushright}
Guide written by ---\\
Sunil Patel: \href{http://www.sunilpatel.co.uk}{www.sunilpatel.co.uk}\\
Vel: \href{http://www.LaTeXTemplates.com}{LaTeXTemplates.com}
\end{flushright}
@@ -23,9 +23,9 @@ reproducible.
Peer-to-peer architectures promise censorship-resistant, fault-tolerant
infrastructure by eliminating single points of failure
\cite{shukla_towards_2021}.
These architectures underpin a growing range of systems, from IoT
edge computing and content delivery networks to blockchain platforms
like Ethereum.
Yet realizing these benefits requires distributing nodes across
genuinely diverse hosting entities.

@@ -69,16 +69,15 @@ mesh VPNs enable direct peer-to-peer connectivity without requiring
static IP addresses or manual firewall configuration.
Each node receives a stable virtual address within the overlay network,
regardless of its underlying network topology.
In practice, this means a device behind consumer-grade NAT can
participate as a first-class peer in a distributed system,
removing the primary technical advantage that cloud providers hold.

The Clan deployment framework builds on this foundation.
Clan uses Nix and NixOS to eliminate configuration drift and
dependency conflicts, reducing operational overhead enough for a
single administrator to reliably self-host complex distributed
services.
Overlay VPNs are central to Clan's architecture,
providing the secure peer connectivity that enables nodes
to form cohesive networks regardless of their physical location or
@@ -92,10 +91,8 @@ During the development of Clan, a recurring challenge became apparent:
practitioners held divergent preferences for mesh VPN solutions,
each citing different edge cases where their chosen VPN
proved unreliable or lacked essential features.
These discussions were grounded in anecdotal evidence rather than
systematic evaluation, motivating the present work.

\subsection{Related Work}

@@ -122,9 +119,9 @@ Beyond filling this research gap, a further goal was to create a fully
automated benchmarking framework capable of generating a public
leaderboard, similar in spirit to the js-framework-benchmark
(see Figure~\ref{fig:js-framework-benchmark}). By providing an
accessible web interface with regularly updated
results, the framework gives VPN developers a concrete, public
baseline to measure against.

\section{Research Contribution}

@@ -132,8 +129,8 @@ This thesis makes the following contributions:

\begin{enumerate}
\item A comprehensive benchmark of ten peer-to-peer VPN
implementations across seven workloads (including real-world
video streaming and package downloads) and four network
impairment profiles, producing over 300 unique measurements.
\item A source code analysis of all ten VPN implementations,
combining manual code review with LLM-assisted analysis,
@@ -146,9 +143,9 @@ This thesis makes the following contributions:
independent replication of all results.
\item A performance analysis demonstrating that Tailscale
outperforms the Linux kernel's default networking stack under
degraded conditions, and that kernel parameter tuning (Reno
congestion control in place of CUBIC, with RACK
disabled) yields measurable throughput improvements.
\item The discovery of several security vulnerabilities across
the evaluated VPN implementations.
\item An automated benchmarking framework designed for public
@@ -225,7 +222,7 @@ This thesis makes the following contributions:
\caption{Stage 8}
\end{subfigure}

\caption{Planned web interface for setting up a Clan family network}
\label{fig:vision-stages}
\end{figure}

@@ -8,9 +8,9 @@ This chapter describes the methodology used to benchmark and analyze
peer-to-peer mesh VPN implementations. The evaluation combines
performance benchmarking under controlled network conditions with a
structured source code analysis of each implementation. The
benchmarking framework prioritizes reproducibility at every layer,
from pinned dependencies and declarative system configuration to
automated test orchestration, enabling independent verification of
results and facilitating future comparative studies.

\section{Experimental Setup}
@@ -30,7 +30,7 @@ identical specifications:
\end{itemize}

The presence of hardware cryptographic acceleration is relevant because
many VPN implementations use AES-NI for encryption, and the results
may differ on systems without these features.

\subsection{Network Topology}
@@ -114,10 +114,10 @@ Table~\ref{tab:benchmark_suite} summarises each benchmark.
\end{tabular}
\end{table}

The first four benchmarks use well-known network testing tools;
the remaining three target workloads closer to real-world usage.
The subsections below describe configuration details that the table
does not capture.

\subsection{Ping}

@@ -320,7 +320,7 @@ Each metric is summarized as a statistics dictionary containing:
\begin{itemize}
\bitem{min / max:} Extreme values observed
\bitem{average:} Arithmetic mean across samples
\bitem{p25 / p50 / p75:} Quartiles via Python's
\texttt{statistics.quantiles()} function
\end{itemize}

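As a concrete illustration, such a summary takes only a few lines of Python; the dictionary keys below mirror the fields listed above, though the framework's exact field names may differ.

```python
import statistics

def summarize(samples):
    # Build the statistics dictionary described above: extremes,
    # arithmetic mean, and quartiles via statistics.quantiles()
    # (default "exclusive" method).
    p25, p50, p75 = statistics.quantiles(samples, n=4)
    return {
        "min": min(samples),
        "max": max(samples),
        "average": statistics.mean(samples),
        "p25": p25, "p50": p50, "p75": p75,
    }

summarize([1, 2, 3, 4, 5, 6, 7, 8])
```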
@@ -351,7 +351,7 @@ hyperfine's built-in statistical output.
\section{Source Code Analysis}

To complement the performance benchmarks with architectural
understanding, we conducted a structured source code analysis of
all ten VPN implementations. The analysis followed three phases.

\subsection{Repository Collection and LLM-Assisted Overview}
@@ -377,9 +377,8 @@ aspects:
\item Resilience / Central Point of Failure
\end{itemize}

Each agent was required to reference the specific file and line
range supporting every claim, enabling direct verification.

\subsection{Manual Verification}

@@ -392,19 +391,19 @@ automated summaries remained superficial.
\subsection{Feature Matrix and Maintainer Review}

The findings from both the automated and manual analysis were
consolidated into a feature matrix cataloguing 131 features across
all ten VPN implementations. The matrix covers
protocol characteristics, cryptographic primitives, NAT traversal
strategies, routing behavior, and security properties.

The completed feature matrix was published and sent to the respective
VPN maintainers for review. We incorporated their feedback as
corrections and clarifications to the final classification.

\section{Reproducibility}

The experimental stack pins or declares every variable that could
affect results.

\subsection{Dependency Pinning}

@@ -524,7 +523,7 @@ VPNs were selected based on:
\bitem{Decentralization:} Preference for solutions without mandatory
central servers, though coordinated-mesh VPNs were included for comparison.
\bitem{Active development:} Only VPNs with recent commits and
maintained releases were considered (with the exception of VpnCloud).
\bitem{Linux support:} All VPNs must run on Linux.
\end{itemize}

@@ -5,27 +5,812 @@
\label{Results}

This chapter presents the results of the benchmark suite across all
ten VPN implementations and the internal baseline. The structure
follows the impairment profiles from ideal to degraded:
Section~\ref{sec:baseline} establishes overhead under ideal
conditions, then subsequent sections examine how each VPN responds to
increasing network impairment. The chapter concludes with findings
from the source code analysis. A recurring theme throughout is that
no single metric captures VPN performance; the rankings shift
depending on whether one measures throughput, latency, retransmit
behavior, or real-world application performance.

\section{Baseline Performance}
\label{sec:baseline}

The baseline impairment profile introduces no artificial loss or
reordering, so any performance gap between VPNs can be attributed to
the VPN itself. Throughout the plots in this section, the
\emph{internal} bar marks a direct host-to-host connection with no VPN
in the path; it represents the best the hardware can do. On its own,
this link delivers 934\,Mbps on a single TCP stream and a round-trip
latency of just 0.60\,ms. WireGuard comes remarkably close to these
numbers, reaching 92.5\,\% of bare-metal throughput with only a single
retransmit across an entire 30-second test. Mycelium sits at the other
extreme, adding 34.9\,ms of latency, roughly 58$\times$ the bare-metal
figure.

\subsection{Test Execution Overview}

Running the full baseline suite across all ten VPNs and the internal
reference took just over four hours. The bulk of that time, about
2.6~hours (63\,\%), was spent on actual benchmark execution; VPN
installation and deployment accounted for another 45~minutes (19\,\%),
and roughly 21~minutes (9\,\%) went to waiting for VPN tunnels to come
up after restarts. The remaining time was consumed by VPN service
restarts and traffic-control (tc) stabilization.
Figure~\ref{fig:test_duration} breaks this down per VPN.

Most VPNs completed every benchmark without issues, but four failed
one test each: Nebula and Headscale timed out on the qperf QUIC
performance benchmark after six retries, while Hyprspace and Mycelium
failed the UDP iPerf3 test with a 120-second timeout. Each of these
four thus passes six of the seven benchmarks, a per-VPN success rate
of 85.7\,\%, while all other VPNs pass the full suite
(Figure~\ref{fig:success_rate}).

\begin{figure}[H]
\centering
\begin{subfigure}[t]{1.0\textwidth}
\centering
\includegraphics[width=\textwidth]{{Figures/baseline/Average Test Duration per Machine}.png}
\caption{Average test duration per VPN, including installation
time and benchmark execution}
\label{fig:test_duration}
\end{subfigure}

\vspace{1em}

\begin{subfigure}[t]{1.0\textwidth}
\centering
\includegraphics[width=\textwidth]{{Figures/baseline/Benchmark Success Rate}.png}
\caption{Benchmark success rate across all seven tests}
\label{fig:success_rate}
\end{subfigure}
\caption{Test execution overview. Hyprspace has the longest average
duration due to UDP timeouts and long VPN connectivity
waits. WireGuard completes fastest. Nebula, Headscale,
Hyprspace, and Mycelium each fail one benchmark.}
\label{fig:test_overview}
\end{figure}

\subsection{TCP Throughput}

Each VPN ran a single-stream iPerf3 session for 30~seconds on every
link direction (lom$\rightarrow$yuki, yuki$\rightarrow$luna,
luna$\rightarrow$lom); Table~\ref{tab:tcp_baseline} shows the
averages. Three distinct performance tiers emerge, separated by
natural gaps in the data.

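The throughput and retransmit figures below come from iPerf3's JSON output; a minimal sketch of the extraction step (the field paths follow iPerf3's documented \texttt{-J} schema, and the invocation shown in the comment is illustrative of one link direction):

```python
import json

# Illustrative invocation for one link direction:
#   iperf3 -c yuki -t 30 -J
def parse_iperf3(raw):
    # iperf3 -J reports the sender-side TCP summary under
    # end.sum_sent, including total retransmits for the whole run.
    sent = json.loads(raw)["end"]["sum_sent"]
    return {"mbps": sent["bits_per_second"] / 1e6,
            "retransmits": sent["retransmits"]}

sample = '{"end": {"sum_sent": {"bits_per_second": 864e6, "retransmits": 1}}}'
parse_iperf3(sample)
```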
\begin{table}[H]
\centering
\caption{Single-stream TCP throughput at baseline, sorted by
throughput. Retransmits are averaged per 30-second test across
all three link directions. The horizontal rules separate the
three performance tiers.}
\label{tab:tcp_baseline}
\begin{tabular}{lrrr}
\hline
\textbf{VPN} & \textbf{Throughput (Mbps)} &
\textbf{Baseline (\%)} & \textbf{Retransmits} \\
\hline
Internal & 934 & 100.0 & 1.7 \\
WireGuard & 864 & 92.5 & 1 \\
ZeroTier & 814 & 87.2 & 1163 \\
Headscale & 800 & 85.6 & 102 \\
Yggdrasil & 795 & 85.1 & 75 \\
\hline
Nebula & 706 & 75.6 & 955 \\
EasyTier & 636 & 68.1 & 537 \\
VpnCloud & 539 & 57.7 & 857 \\
\hline
Hyprspace & 368 & 39.4 & 4965 \\
Tinc & 336 & 36.0 & 240 \\
Mycelium & 259 & 27.7 & 710 \\
\hline
\end{tabular}
\end{table}

The top tier ($>$80\,\% of baseline) groups WireGuard, ZeroTier,
Headscale, and Yggdrasil, all within 15\,\% of the bare-metal link.
A middle tier (55--80\,\%) follows with Nebula, EasyTier, and
VpnCloud, while Hyprspace, Tinc, and Mycelium occupy the bottom tier
at under 40\,\% of baseline.
Figure~\ref{fig:tcp_throughput} visualizes this hierarchy.

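The \enquote{Baseline (\%)} column and the tier assignment are straightforward to recompute from the raw throughput figures; a quick sketch using three representative rows of the table (the tier boundaries are this chapter's, not a standard classification):

```python
INTERNAL_MBPS = 934  # bare-metal single-stream TCP reference

def baseline_pct(mbps):
    # Throughput relative to the internal (no-VPN) baseline.
    return round(100 * mbps / INTERNAL_MBPS, 1)

def tier(pct):
    # Tier boundaries as described in the text: >80 top, 55-80 middle.
    if pct > 80:
        return "top"
    if pct > 55:
        return "middle"
    return "bottom"

for name, mbps in [("WireGuard", 864), ("VpnCloud", 539), ("Mycelium", 259)]:
    print(name, baseline_pct(mbps), tier(baseline_pct(mbps)))
```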
Raw throughput alone is incomplete, however. The retransmit column
reveals that not all high-throughput VPNs get there cleanly.
ZeroTier, for instance, reaches 814\,Mbps but accumulates
1\,163~retransmits per test, over 1\,000$\times$ what WireGuard
needs. ZeroTier compensates for tunnel-internal packet loss by
repeatedly triggering TCP congestion-control recovery, whereas
WireGuard sends data once and it arrives. Across all VPNs,
retransmit behaviour falls into three groups: \emph{clean} ($<$110:
WireGuard, Internal, Yggdrasil, Headscale), \emph{stressed}
(200--900: Tinc, EasyTier, Mycelium, VpnCloud), and
\emph{pathological} ($>$950: Nebula, ZeroTier, Hyprspace).

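The grouping amounts to a simple threshold classifier; the thresholds below sit in the natural gaps between the observed clusters, and the labels are the naming scheme introduced above rather than standard terminology:

```python
def retransmit_group(retransmits):
    # Boundaries chosen inside the gaps between observed clusters:
    # clean (<110), stressed (200-900), pathological (>950).
    if retransmits < 110:
        return "clean"
    if retransmits <= 900:
        return "stressed"
    return "pathological"

[retransmit_group(r) for r in (1, 857, 4965)]
```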
% TODO: Is this naming scheme any good?

% TODO: Fix TCP Throughput plot

\begin{figure}[H]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{{Figures/baseline/tcp/TCP Throughput}.png}
\caption{Average single-stream TCP throughput}
\label{fig:tcp_throughput}
\end{subfigure}

\vspace{1em}

\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{{Figures/baseline/tcp/TCP Retransmit Rate}.png}
\caption{Average TCP retransmits per 30-second test (log scale)}
\label{fig:tcp_retransmits}
\end{subfigure}
\caption{TCP throughput and retransmit rate at baseline. WireGuard
leads at 864\,Mbps with 1 retransmit. Hyprspace has nearly 5000
retransmits per test. The retransmit count does not always track
inversely with throughput: ZeroTier achieves high throughput
\emph{despite} high retransmits.}
\label{fig:tcp_results}
\end{figure}

Retransmits have a direct mechanical relationship with TCP congestion
control. Each retransmit triggers a reduction in the congestion window
(\texttt{cwnd}), throttling the sender. This relationship is visible
in Figure~\ref{fig:retransmit_correlations}: Hyprspace, with 4965
retransmits, maintains the smallest average congestion window in the
dataset (205\,KB), while Yggdrasil's 75 retransmits allow a 4.3\,MB
window, the largest of any VPN. At first glance this suggests a
clean inverse correlation between retransmits and congestion window
size, but the picture is misleading. Yggdrasil's outsized window is
largely an artifact of its jumbo overlay MTU (32\,731 bytes): each
segment carries far more data, so the window in bytes is inflated
relative to VPNs using a standard ${\sim}$1\,400-byte MTU. Comparing
congestion windows across different MTU sizes is not meaningful
without normalizing for segment size. What \emph{is} clear is that
high retransmit rates force TCP to spend more time in congestion
recovery than in steady-state transmission, capping throughput
regardless of available bandwidth. ZeroTier illustrates the
opposite extreme: brute-force retransmission can still yield high
throughput (814\,Mbps with 1\,163 retransmits), at the cost of wasted
bandwidth and unstable flow behavior.

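A back-of-envelope normalization makes the MTU artifact concrete. Dividing each window by its segment size converts bytes to segments in flight; Hyprspace's MTU is assumed here to be the standard ${\sim}$1\,400 bytes, and the byte figures are the approximate values quoted above:

```python
# Congestion window in segments = window in bytes / segment size (MTU).
yggdrasil_segments = 4.3e6 / 32731   # jumbo 32,731-byte overlay MTU
hyprspace_segments = 205e3 / 1400    # assumed ~1,400-byte MTU

# Measured in segments rather than bytes, the two windows are of the
# same order of magnitude: Yggdrasil's byte-count lead largely vanishes.
print(round(yggdrasil_segments), round(hyprspace_segments))
```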
VpnCloud warrants specific attention: its sender reports 538.8\,Mbps
but the receiver measures only 413.4\,Mbps, leaving a 23\,\% gap (the
largest in the dataset). This suggests significant in-tunnel packet
loss or buffering at the VpnCloud layer that the retransmit count
(857) alone does not fully explain.

Run-to-run variability also differs substantially. WireGuard ranges
from 824 to 884\,Mbps (a 60\,Mbps window), while Mycelium ranges
from 122 to 379\,Mbps, a 3:1 ratio between worst and best runs. A
VPN with wide variance is harder to capacity-plan around than one
with consistent performance, even if the average is lower.

\begin{figure}[H]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/baseline/retransmits-vs-throughput.png}
\caption{Retransmits vs.\ throughput}
\label{fig:retransmit_throughput}
\end{subfigure}

\vspace{1em}

\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/baseline/retransmits-vs-max-congestion-window.png}
\caption{Retransmits vs.\ max congestion window}
\label{fig:retransmit_cwnd}
\end{subfigure}
\caption{Retransmit correlations (log scale on x-axis). High
retransmits do not always mean low throughput (ZeroTier: 1\,163
retransmits, 814\,Mbps), but extreme retransmits do (Hyprspace:
4\,965 retransmits, 368\,Mbps). The apparent inverse correlation
between retransmits and congestion window size is dominated by
Yggdrasil's outlier (4.3\,MB \texttt{cwnd}), which is inflated
by its 32\,KB jumbo overlay MTU rather than by low retransmits
alone.}
\label{fig:retransmit_correlations}
\end{figure}

\subsection{Latency}

Sorting by latency rearranges the rankings considerably.
Table~\ref{tab:latency_baseline} lists the average ping round-trip
times, which cluster into three distinct ranges.

\begin{table}[H]
\centering
\caption{Average ping RTT at baseline, sorted by latency}
\label{tab:latency_baseline}
\begin{tabular}{lr}
\hline
\textbf{VPN} & \textbf{Avg RTT (ms)} \\
\hline
Internal & 0.60 \\
VpnCloud & 1.13 \\
Tinc & 1.19 \\
WireGuard & 1.20 \\
Nebula & 1.25 \\
ZeroTier & 1.28 \\
EasyTier & 1.33 \\
\hline
Headscale & 1.64 \\
Hyprspace & 1.79 \\
Yggdrasil & 2.20 \\
\hline
Mycelium & 34.9 \\
\hline
\end{tabular}
\end{table}

Six VPNs stay below 1.3\,ms, comfortably close to the bare-metal
0.60\,ms. VpnCloud is a notable result: it posts the lowest latency
of any VPN (1.13\,ms), edging out WireGuard (1.20\,ms), yet its
throughput tops out at only 539\,Mbps. Low per-packet latency does
not guarantee high bulk throughput. A second group (Headscale,
Hyprspace, Yggdrasil) lands in the 1.5--2.2\,ms range, representing
moderate overhead. Then there is Mycelium at 34.9\,ms, so far
removed from the rest that Section~\ref{sec:mycelium_routing} gives
it a dedicated analysis.

ZeroTier's average of 1.28\,ms looks unremarkable, but its maximum
RTT spikes to 8.6\,ms, a 6.8$\times$ jump and the largest for any
sub-2\,ms VPN. These spikes point to periodic control-plane
interference that the average hides.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{{Figures/baseline/ping/Average RTT}.png}
\caption{Average ping RTT at baseline. Mycelium (34.9\,ms) is a
massive outlier at 58$\times$ the internal baseline. VpnCloud is
the fastest VPN at 1.13\,ms, slightly below WireGuard (1.20\,ms).}
\label{fig:ping_rtt}
\end{figure}

Tinc presents a paradox: it has the third-lowest latency (1.19\,ms)
but only the second-lowest throughput (336\,Mbps). Packets traverse
the tunnel quickly, yet single-threaded userspace processing cannot
keep up with the link speed. The qperf benchmark backs this up: Tinc
maxes out at 14.9\,\% CPU while delivering just 336\,Mbps, a clear
sign that the CPU, not the network, is the bottleneck.
Figure~\ref{fig:latency_throughput} makes this disconnect easy to
spot.

Looking at CPU efficiency more broadly, the qperf measurements
reveal a wide spread. Hyprspace (55.1\,\%) and Yggdrasil
(52.8\,\%) consume 5--6$\times$ as much CPU as Internal's
9.7\,\%. WireGuard sits at 30.8\,\%, surprisingly high for a
kernel-level implementation, though much of that goes to
cryptographic processing. On the efficient end, VpnCloud
(14.9\,\%), Tinc (14.9\,\%), and EasyTier (15.4\,\%) do the most
with the least CPU time. Nebula and Headscale are missing from
this comparison because qperf failed for both.

% TODO: Explain why they consistently failed

|
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Figures/baseline/latency-vs-throughput.png}
\caption{Latency vs.\ throughput at baseline. Each point represents
one VPN. The quadrants reveal different bottleneck types:
VpnCloud (low latency, moderate throughput), Tinc (low latency,
low throughput, CPU-bound), Mycelium (high latency, low
throughput, overlay routing overhead).}
\label{fig:latency_throughput}
\end{figure}

\subsection{Parallel TCP Scaling}

The single-stream benchmark tests one link direction at a time. The
parallel benchmark changes this setup: all three link directions
(lom$\rightarrow$yuki, yuki$\rightarrow$luna,
luna$\rightarrow$lom) run simultaneously in a circular pattern for
60~seconds, each carrying ten TCP streams. Because three independent
link pairs now compete for shared tunnel resources at once, the
aggregate throughput is naturally higher than any single direction
alone, which is why even Internal reaches 1.50$\times$ its
single-stream figure. The scaling factor (parallel throughput
divided by single-stream throughput) therefore captures two effects:
the benefit of utilizing multiple link pairs in parallel, and how
well the VPN handles the resulting contention.
Table~\ref{tab:parallel_scaling} lists the results.

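The scaling factor is a plain ratio, so the table's last column can be reproduced directly from the throughput numbers. A quick sketch in Python, with the Mbps values transcribed from the results (three representative rows shown):

```python
# Scaling factor = parallel throughput / single-stream throughput.
# Mbps values transcribed from the baseline measurements.
single = {"Mycelium": 259, "Internal": 934, "Nebula": 706}
parallel = {"Mycelium": 569, "Internal": 1398, "Nebula": 648}

scaling = {vpn: round(parallel[vpn] / single[vpn], 2) for vpn in single}
print(scaling)  # {'Mycelium': 2.2, 'Internal': 1.5, 'Nebula': 0.92}
```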
\begin{table}[H]
\centering
\caption{Parallel TCP scaling at baseline. Scaling factor is the
ratio of ten-stream to single-stream throughput. Internal's
1.50$\times$ represents the expected scaling on this hardware.}
\label{tab:parallel_scaling}
\begin{tabular}{lrrr}
\hline
\textbf{VPN} & \textbf{Single (Mbps)} &
\textbf{Parallel (Mbps)} & \textbf{Scaling} \\
\hline
Mycelium & 259 & 569 & 2.20$\times$ \\
Hyprspace & 368 & 803 & 2.18$\times$ \\
Tinc & 336 & 563 & 1.68$\times$ \\
Yggdrasil & 795 & 1265 & 1.59$\times$ \\
Headscale & 800 & 1228 & 1.54$\times$ \\
Internal & 934 & 1398 & 1.50$\times$ \\
ZeroTier & 814 & 1206 & 1.48$\times$ \\
WireGuard & 864 & 1281 & 1.48$\times$ \\
EasyTier & 636 & 927 & 1.46$\times$ \\
VpnCloud & 539 & 763 & 1.42$\times$ \\
Nebula & 706 & 648 & 0.92$\times$ \\
\hline
\end{tabular}
\end{table}

The VPNs that gain the most are those most constrained in
single-stream mode. Mycelium's 34.9\,ms RTT means a lone TCP stream
can never fill the pipe: the bandwidth-delay product demands a window
larger than any single flow maintains, so ten streams collectively
compensate for that constraint and push throughput to 2.20$\times$
the single-stream figure. Hyprspace scales almost as well
(2.18$\times$) but for a different reason: multiple streams work
around the buffer bloat that cripples any individual flow
(Section~\ref{sec:hyprspace_bloat}). Tinc picks up a
1.68$\times$ boost because several streams can collectively keep its
single-threaded CPU busy during what would otherwise be idle gaps in
a single flow.

WireGuard and Internal both scale cleanly at around
1.48--1.50$\times$ with zero retransmits, suggesting that
WireGuard's overhead is a fixed per-packet cost that does not worsen
under multiplexing.

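The window-limitation argument can be made concrete with the bandwidth-delay product. A sketch, assuming a 900\,Mbps target rate (near the measured bare-metal ceiling; an illustrative assumption, not a measured Mycelium figure), with the measured average RTTs:

```python
# Bandwidth-delay product: bytes that must be in flight to keep a
# path of the given rate and RTT full.
def bdp_bytes(rate_mbps: float, rtt_ms: float) -> int:
    return round(rate_mbps * 1e6 / 8 * rtt_ms / 1e3)

# At Mycelium's 34.9 ms a single flow needs a ~3.9 MB window to
# sustain 900 Mbps; at WireGuard's 1.20 ms, ~135 KB suffices.
print(bdp_bytes(900, 34.9), bdp_bytes(900, 1.20))  # 3926250 135000
```

Ten parallel streams split that window requirement across independent flows, which is why high-RTT tunnels benefit disproportionately from parallelism.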
Nebula is the only VPN that actually gets \emph{slower} with more
streams: throughput drops from 706\,Mbps to 648\,Mbps
(0.92$\times$) while retransmits jump from 955 to 2\,462. The ten
streams are clearly fighting each other for resources inside the
tunnel.

More streams also amplify existing retransmit problems across the
board. Hyprspace climbs from 4\,965 to 17\,426~retransmits;
VpnCloud from 857 to 6\,023. VPNs that were clean in single-stream
mode stay clean under load, while the stressed ones only get worse.

\begin{figure}[H]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/baseline/single-stream-vs-parallel-tcp-throughput.png}
\caption{Single-stream vs.\ parallel throughput}
\label{fig:single_vs_parallel}
\end{subfigure}

\vspace{1em}

\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/baseline/parallel-tcp-scaling-factor.png}
\caption{Parallel TCP scaling factor}
\label{fig:scaling_factor}
\end{subfigure}
\caption{Parallel TCP scaling at baseline. Nebula is the only VPN
where parallel throughput is lower than single-stream
(0.92$\times$). Mycelium and Hyprspace benefit most from
parallelism ($>$2$\times$), compensating for latency and buffer
bloat respectively. The dashed line at 1.0$\times$ marks the
break-even point.}
\label{fig:parallel_tcp}
\end{figure}

\subsection{UDP Stress Test}

The UDP iPerf3 test uses an unlimited sender rate (\texttt{-b 0}),
which makes it a deliberate overload test rather than a realistic
workload. The sender throughput values are artifacts: they reflect
how fast the sender can write to the socket, not how fast data
traverses the tunnel. Yggdrasil, for example, reports 63,744\,Mbps
sender throughput because it uses a 32,731-byte block size (a
jumbo-frame overlay MTU), inflating the apparent rate per
\texttt{send()} system call. Only the receiver throughput is
meaningful.

\begin{table}[H]
\centering
\caption{UDP receiver throughput and packet loss at baseline
(\texttt{-b 0} stress test). Hyprspace and Mycelium timed out
at 120 seconds and are excluded.}
\label{tab:udp_baseline}
\begin{tabular}{lrr}
\hline
\textbf{VPN} & \textbf{Receiver (Mbps)} &
\textbf{Loss (\%)} \\
\hline
Internal & 952 & 0.0 \\
WireGuard & 898 & 0.0 \\
Nebula & 890 & 76.2 \\
Headscale & 876 & 69.8 \\
EasyTier & 865 & 78.3 \\
Yggdrasil & 852 & 98.7 \\
ZeroTier & 851 & 89.5 \\
VpnCloud & 773 & 83.7 \\
Tinc & 471 & 89.9 \\
\hline
\end{tabular}
\end{table}

%TODO: Explain that the UDP test also crashes often,
% which makes the test somewhat unreliable,
% but a good indicator if the network traffic is "different" than
% the programmer expected

Only Internal and WireGuard achieve 0\,\% packet loss. Both operate at
the kernel level with proper backpressure that matches sender to
receiver rate. Every userspace VPN shows massive loss (69--99\,\%)
because the sender overwhelms the tunnel's processing capacity.
Yggdrasil's 98.7\,\% loss is the most extreme: it sends the most data
(due to its large block size) but loses almost all of it. These loss
rates do not reflect real-world UDP behavior but reveal which VPNs
implement effective flow control. Hyprspace and Mycelium could not
complete the UDP test at all, timing out after 120 seconds.

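Loss here is computed the way iPerf3 reports it, from datagram counts rather than byte rates. The numbers below are illustrative, not taken from the measurement data:

```python
# Packet loss as a percentage of datagrams sent (iPerf3-style).
def loss_pct(sent: int, received: int) -> float:
    return round((sent - received) / sent * 100, 1)

# Illustrative: a sender that gets only 13k of 1M datagrams through
# sees ~98.7% loss, the regime Yggdrasil lands in under -b 0.
print(loss_pct(1_000_000, 13_000))  # 98.7
```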
The \texttt{blksize\_bytes} field reveals each VPN's effective path
MTU: Yggdrasil at 32,731 bytes (jumbo overlay), ZeroTier at 2728,
Internal at 1448, VpnCloud at 1375, WireGuard at 1368, Tinc at 1353,
EasyTier at 1288, Nebula at 1228, and Headscale at 1208 (the
smallest). These differences affect fragmentation behavior under real
workloads, particularly for protocols that send large datagrams.

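The fragmentation cost of a small path MTU is easy to quantify. A sketch with an assumed 1400-byte datagram (a typical large media packet; the payload size is an illustrative assumption, not a measured value):

```python
import math

# Fragments needed to carry one datagram over a given path MTU.
def fragments(payload_bytes: int, mtu_bytes: int) -> int:
    return math.ceil(payload_bytes / mtu_bytes)

# A 1400-byte datagram fits Internal's 1448-byte MTU in one piece
# but must be split in two at Headscale's 1208 bytes.
print(fragments(1400, 1448), fragments(1400, 1208))  # 1 2
```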
%TODO: Mention QUIC
%TODO: Mention again that the "default" settings of every VPN have been used
% to better reflect real world use, as most users probably won't
% change these defaults,
% and explain that good defaults are as much a part of good software as
% having the features, but they are hard to configure correctly

\begin{figure}[H]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{{Figures/baseline/udp/UDP Throughput}.png}
\caption{UDP receiver throughput}
\label{fig:udp_throughput}
\end{subfigure}

\vspace{1em}

\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{{Figures/baseline/udp/UDP Packet Loss}.png}
\caption{UDP packet loss}
\label{fig:udp_loss}
\end{subfigure}
\caption{UDP stress test results at baseline (\texttt{-b 0},
unlimited sender rate). Internal and WireGuard are the only
implementations with 0\,\% loss. Hyprspace and Mycelium are
excluded due to 120-second timeouts.}
\label{fig:udp_results}
\end{figure}

% TODO: Compare parallel TCP retransmit rate
% with single TCP retransmit rate and see what changed

\subsection{Real-World Workloads}

Saturating a link with iPerf3 measures peak capacity, but not how a
VPN performs under realistic traffic. This subsection switches to
application-level workloads: downloading packages from a Nix binary
cache and streaming video over RIST. Both interact with the VPN
tunnel the way real software does, through many short-lived
connections, TLS handshakes, and latency-sensitive UDP packets.

\paragraph{Nix Binary Cache Downloads.}

This test downloads a fixed set of Nix packages through each VPN and
measures the total transfer time. The results
(Table~\ref{tab:nix_cache}) compress the throughput hierarchy
considerably: even Hyprspace, the worst performer, finishes in
11.92\,s, only 40\,\% slower than bare metal. Once connection
setup, TLS handshakes, and HTTP round-trips enter the picture,
throughput differences between 500 and 900\,Mbps matter far less
than per-connection latency.

\begin{table}[H]
\centering
\caption{Nix binary cache download time at baseline, sorted by
duration. Overhead is relative to the internal baseline (8.53\,s).}
\label{tab:nix_cache}
\begin{tabular}{lrr}
\hline
\textbf{VPN} & \textbf{Mean (s)} &
\textbf{Overhead (\%)} \\
\hline
Internal & 8.53 & -- \\
Nebula & 9.15 & +7.3 \\
ZeroTier & 9.22 & +8.1 \\
VpnCloud & 9.39 & +10.0 \\
EasyTier & 9.39 & +10.1 \\
WireGuard & 9.45 & +10.8 \\
Headscale & 9.79 & +14.8 \\
Tinc & 10.00 & +17.2 \\
Mycelium & 10.07 & +18.1 \\
Yggdrasil & 10.59 & +24.2 \\
Hyprspace & 11.92 & +39.7 \\
\hline
\end{tabular}
\end{table}

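The overhead column follows directly from the mean times relative to the 8.53\,s internal baseline; a minimal check:

```python
# Overhead relative to the internal baseline download time (8.53 s).
BASELINE_S = 8.53

def overhead_pct(mean_s: float) -> float:
    return round((mean_s / BASELINE_S - 1) * 100, 1)

print(overhead_pct(9.15))   # Nebula: 7.3
print(overhead_pct(11.92))  # Hyprspace: 39.7
```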
Several rankings invert relative to raw throughput. ZeroTier
finishes faster than WireGuard (9.22\,s vs.\ 9.45\,s) despite
slightly lower raw throughput (814 vs.\ 864\,Mbps) and
1\,000$\times$ more retransmits. Yggdrasil is the clearest example:
it has one of the highest throughputs at 795\,Mbps, yet lands at
24\,\% overhead because its 2.2\,ms latency adds up over the many
small sequential HTTP requests that constitute a Nix cache download.
Figure~\ref{fig:throughput_vs_download} confirms this weak link
between raw throughput and real-world download speed.

\begin{figure}[H]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{{Figures/baseline/Nix Cache Mean Download Time}.png}
\caption{Nix cache download time per VPN}
\label{fig:nix_cache}
\end{subfigure}

\vspace{1em}

\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/baseline/raw-throughput-vs-nix-cache-download-time.png}
\caption{Raw throughput vs.\ download time}
\label{fig:throughput_vs_download}
\end{subfigure}
\caption{Application-level download performance. The throughput
hierarchy compresses under real HTTP workloads: the worst VPN
(Hyprspace, 11.92\,s) is only 40\,\% slower than bare metal.
Throughput explains some variance but not all: Yggdrasil
(795\,Mbps, 10.59\,s) is slower than Nebula (706\,Mbps, 9.15\,s)
because latency matters more for HTTP workloads.}
\label{fig:nix_download}
\end{figure}

\paragraph{Video Streaming (RIST).}

At just 3.3\,Mbps, the RIST video stream sits comfortably within
every VPN's throughput budget. This test therefore measures
something different: how well the VPN handles real-time UDP packet
delivery under steady load. Nine of the eleven VPNs pass without
incident, delivering 100\,\% video quality. The 14--16 dropped
frames that appear uniformly across all VPNs, including Internal,
trace back to encoder warm-up rather than tunnel overhead.

Headscale is the exception. It averages just 13.1\,\% quality,
dropping 288~packets per test interval. The degradation is not
bursty but sustained: median quality sits at 10\,\%, and the
interquartile range of dropped packets spans a narrow 255--330 band.
The qperf benchmark independently corroborates this, having failed
outright for Headscale, confirming that something beyond bulk TCP is
broken.

What makes this failure unexpected is that Headscale builds on
WireGuard, which handles video flawlessly. TCP throughput places
Headscale squarely in Tier~1. Yet the RIST test runs over UDP, and
qperf probes latency-sensitive paths using both TCP and UDP. The
pattern points toward Headscale's DERP relay or NAT traversal layer
as the source. Its effective path MTU of 1\,208~bytes, the smallest
of any VPN, likely compounds the issue: RIST packets that exceed
this limit must be fragmented, and reassembling fragments under
sustained load produces exactly the kind of steady, uniform packet
drops the data shows. For video conferencing, VoIP, or any
real-time media workload, this is a disqualifying result regardless
of TCP throughput.

Hyprspace reveals a different failure mode. Its average quality
reads 100\,\%, but the raw numbers underneath are far from stable:
mean packet drops of 1\,194 and a maximum spike of 55\,500, with
the 25th, 50th, and 75th percentiles all at zero. Hyprspace
alternates between perfect delivery and catastrophic bursts.
RIST's forward error correction compensates for most of these
events, but the worst spikes are severe enough to overwhelm FEC
entirely.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{{Figures/baseline/Video Streaming/RIST Quality}.png}
\caption{RIST video streaming quality at baseline. Headscale at
13.1\% average quality is the clear outlier. Every other VPN
achieves 99.8\% or higher. Nebula is at 99.8\% (minor
degradation). The video bitrate (3.3\,Mbps) is well within every
VPN's throughput capacity, so this test reveals real-time UDP
handling quality rather than bandwidth limits.}
\label{fig:rist_quality}
\end{figure}

\subsection{Operational Resilience}

Sustained-load performance does not predict recovery speed. How
quickly a tunnel comes up after a reboot, and how reliably it
reconverges, matters as much as peak throughput for operational use.

First-time connectivity spans a wide range. Headscale and WireGuard
are ready in under 50\,ms, while ZeroTier (8--17\,s) and VpnCloud
(10--14\,s) spend seconds negotiating with their control planes
before passing traffic.

%TODO: Maybe we want to scrap first-time connectivity

Reboot reconnection rearranges the rankings. Hyprspace, the worst
performer under sustained TCP load, recovers in just 8.7~seconds on
average, faster than any other VPN. WireGuard and Nebula follow at
10.1\,s each. Nebula's consistency is striking: 10.06, 10.06, and
10.07\,s across its three nodes, pointing to a hard-coded timer
rather than topology-dependent convergence.
Mycelium sits at the opposite end, needing 76.6~seconds and showing
the same suspiciously uniform pattern (75.7, 75.7, 78.3\,s),
suggesting a fixed protocol-level wait built into the overlay.

%TODO: Hard-coded timer needs to be verified

Yggdrasil produces the most lopsided result in the dataset: its yuki
node is back in 7.1~seconds while lom and luna take 94.8 and
97.3~seconds respectively. The gap likely reflects the overlay's
spanning-tree rebuild: a node near the root of the tree reconverges
quickly, while one further out has to wait for the topology to
propagate.

%TODO: Needs clarification: what is a "spanning-tree rebuild"?

\begin{figure}[H]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/baseline/reboot-reconnection-time-per-vpn.png}
\caption{Average reconnection time per VPN}
\label{fig:reboot_bar}
\end{subfigure}

\vspace{1em}

\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/baseline/reboot-reconnection-time-heatmap.png}
\caption{Per-node reconnection time heatmap}
\label{fig:reboot_heatmap}
\end{subfigure}
\caption{Reboot reconnection time at baseline. The heatmap reveals
Yggdrasil's extreme per-node asymmetry (7\,s for yuki vs.\
95--97\,s for lom/luna) and Mycelium's uniform slowness (75--78\,s
across all nodes). Hyprspace reconnects fastest (8.7\,s average)
despite its poor sustained-load performance.}
\label{fig:reboot_reconnection}
\end{figure}

\subsection{Pathological Cases}
\label{sec:pathological}

Three VPNs exhibit behaviors that the aggregate numbers alone cannot
explain. The following paragraphs synthesize observations from the
preceding benchmarks into per-VPN diagnoses.

\paragraph{Hyprspace: Buffer Bloat.}
\label{sec:hyprspace_bloat}

Hyprspace produces the most severe performance collapse in the
dataset. At idle, its ping latency is a modest 1.79\,ms.
Under TCP load, that number balloons to roughly 2\,800\,ms, a
1\,556$\times$ increase. This is not the network becoming
congested; it is the VPN tunnel itself filling up with buffered
packets and refusing to drain.

The consequences ripple through every TCP metric. With 4\,965
retransmits per 30-second test (one in every 200~segments), TCP
spends most of its time in congestion recovery rather than
steady-state transfer, shrinking the average congestion window to
205\,KB, the smallest in the dataset. Under parallel load the
situation worsens: retransmits climb to 17\,426. The buffering even
inverts iPerf3's measurements: the receiver reports 419.8\,Mbps
while the sender sees only 367.9\,Mbps, because massive ACK delays
cause the sender-side timer to undercount the actual data rate. The
UDP test never finished at all, timing out at 120~seconds.

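The one-in-200 figure is consistent with the raw numbers. A rough check, assuming a 1448-byte MSS (the value measured for Internal; Hyprspace's effective segment size is an assumption here):

```python
# Segments sent in a 30-second run at ~368 Mbps, and the retransmit
# interval implied by 4965 retransmits over that run.
rate_bps = 368e6
mss_bytes = 1448
segments = rate_bps / 8 * 30 / mss_bytes   # ~950k segments
one_in_n = segments / 4965
print(round(one_in_n))  # 192, i.e. roughly one in 200
```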
% Should we always use percentages for retransmits?

What prevents Hyprspace from being entirely unusable is everything
\emph{except} sustained load. It has the fastest reboot
reconnection in the dataset (8.7\,s) and delivers 100\,\% video
quality outside of its burst events. The pathology is narrow but
severe: any continuous data stream saturates the tunnel's internal
buffers.

\paragraph{Mycelium: Routing Anomaly.}
\label{sec:mycelium_routing}

Mycelium's 34.9\,ms average latency appears to be the cost of
routing through a global overlay. The per-path numbers, however,
reveal a bimodal distribution:

\begin{itemize}
\item \textbf{luna$\rightarrow$lom:} 1.63\,ms (direct path,
comparable to Headscale at 1.64\,ms)
\item \textbf{lom$\rightarrow$yuki:} 51.47\,ms (overlay-routed)
\item \textbf{yuki$\rightarrow$luna:} 51.60\,ms (overlay-routed)
\end{itemize}

One of the three links has found a direct route; the other two still
bounce through the overlay. All three machines sit on the same
physical network, so Mycelium's path discovery is failing
intermittently, a more specific problem than blanket overlay
overhead. Throughput mirrors the split:
yuki$\rightarrow$luna reaches 379\,Mbps while
luna$\rightarrow$lom manages only 122\,Mbps, a 3:1 gap. In
bidirectional mode, the reverse direction on that worst link drops
to 58.4\,Mbps, the lowest single-direction figure in the entire
dataset.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{{Figures/baseline/tcp/Mycelium/Average Throughput}.png}
\caption{Per-link TCP throughput for Mycelium, showing extreme
path asymmetry caused by inconsistent direct route discovery.
The 3:1 ratio between best (yuki$\rightarrow$luna, 379\,Mbps)
and worst (luna$\rightarrow$lom, 122\,Mbps) links reflects
different overlay routing paths.}
\label{fig:mycelium_paths}
\end{figure}

The overlay penalty shows up most clearly at connection setup.
Mycelium's average time-to-first-byte is 93.7\,ms (vs.\ Internal's
16.8\,ms, a 5.6$\times$ overhead), and connection establishment
alone costs 47.3\,ms (3$\times$ overhead). Every new connection
incurs that overhead, so workloads dominated by
short-lived connections accumulate it rapidly. Bulk downloads, by
contrast, amortize it: the Nix cache test finishes only 18\,\%
slower than Internal (10.07\,s vs.\ 8.53\,s) because once the
transfer phase begins, per-connection latency fades into the
background.

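The quoted overhead factor is a straightforward ratio of the measured times:

```python
# Time-to-first-byte overhead relative to the internal baseline (ms).
ttfb_internal_ms = 16.8
ttfb_mycelium_ms = 93.7

overhead = round(ttfb_mycelium_ms / ttfb_internal_ms, 1)
print(overhead)  # 5.6
```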
Mycelium is also the slowest VPN to recover from a reboot:
76.6~seconds on average, and almost suspiciously uniform across
nodes (75.7, 75.7, 78.3\,s). That kind of consistency points to a
hard-coded convergence timer in the overlay protocol rather than
anything topology-dependent. The UDP test timed out at
120~seconds, and even first-time connectivity required a
70-second wait at startup.

% Explain what topology-dependent means in this case.

\paragraph{Tinc: Userspace Processing Bottleneck.}

Tinc is a clear case of a CPU bottleneck masquerading as a network
problem. At 1.19\,ms latency, packets get through the
tunnel quickly. Yet throughput tops out at 336\,Mbps, barely a
third of the bare-metal link. The usual suspects do not apply:
Tinc's effective path MTU is healthy (a \texttt{blksize\_bytes} of
1\,353 from UDP iPerf3, comparable to VpnCloud at 1\,375 and
WireGuard at 1\,368), and its retransmit
count (240) is moderate. What limits Tinc is its single-threaded
userspace architecture: one CPU core simply cannot encrypt, copy,
and forward packets fast enough to fill the pipe.

The parallel benchmark confirms this diagnosis. Tinc scales to
563\,Mbps (1.68$\times$), beating Internal's 1.50$\times$ ratio.
Multiple TCP streams collectively keep that single core busy during
what would otherwise be idle gaps in any individual flow, squeezing
out throughput that no single stream could reach alone.

\section{Impact of Network Impairment}

\begin{abstract}
\addchaptertocentry{Zusammenfassung}

This thesis evaluates ten peer-to-peer mesh VPN implementations
under controlled network conditions using a reproducible,
Nix-based benchmark framework built on a deployment system called
Clan. The implementations range from kernel protocols (WireGuard,
as the reference baseline) to userspace overlays (Tinc, Yggdrasil,
Nebula, Hyprspace, and others). Each is tested under four
impairment profiles with varying packet loss, packet reordering,
latency, and jitter, yielding over 300 measurements across seven
benchmarks, from raw TCP and UDP throughput to video streaming and
application-level downloads.

A central finding is that no single metric fully captures VPN
performance: the ranking shifts depending on whether throughput,
latency, retransmit behavior, or application-level transfer time
is measured. Under network impairment, Tailscale (via Headscale)
outperforms the Linux kernel's default network stack, an anomaly
we trace to the tuned congestion-control and buffer parameters of
its userspace IP stack. Re-running the internal baseline with
correspondingly adjusted kernel parameters closes the gap and
confirms this explanation. The accompanying source-code analysis
uncovered a critical security vulnerability in one of the
evaluated implementations.

\end{abstract}
\endgroup

BIN Figures/baseline/Average Test Duration per Machine.png (new file, 48 KiB)
BIN Figures/baseline/Benchmark Success Rate.png (new file, 42 KiB)
BIN Figures/baseline/Nix Cache Mean Download Time.png (new file, 40 KiB)
BIN Figures/baseline/Video Streaming/Packets Dropped.png (new file, 36 KiB)
BIN Figures/baseline/Video Streaming/RIST Quality.png (new file, 36 KiB)
BIN Figures/baseline/latency-vs-throughput.png (new file, 189 KiB)
BIN Figures/baseline/parallel-tcp-scaling-factor.png (new file, 236 KiB)
BIN Figures/baseline/ping/Average RTT.png (new file, 36 KiB)
BIN Figures/baseline/raw-throughput-vs-nix-cache-download-time.png (new file, 196 KiB)
BIN Figures/baseline/reboot-reconnection-time-heatmap.png (new file, 308 KiB)
BIN Figures/baseline/reboot-reconnection-time-per-vpn.png (new file, 228 KiB)
BIN Figures/baseline/retransmits-single-stream-vs-parallel.png (new file, 218 KiB)
BIN Figures/baseline/retransmits-vs-max-congestion-window.png (new file, 210 KiB)
BIN Figures/baseline/retransmits-vs-throughput.png (new file, 196 KiB)
BIN Figures/baseline/single-stream-vs-parallel-tcp-throughput.png (new file, 208 KiB)
BIN (unnamed figure removed, 51 KiB)
BIN Figures/baseline/tcp/Mycelium/Average Throughput.png (new file, 35 KiB)
BIN (unnamed figure modified, 38 KiB to 42 KiB)
BIN (unnamed figure modified, 45 KiB to 49 KiB)
BIN (unnamed figure removed, 39 KiB)
BIN Figures/baseline/udp/UDP Packet Loss.png (new file, 42 KiB)
BIN Figures/baseline/udp/UDP Throughput.png (new file, 46 KiB)

_typos.toml (deleted, 20 lines)
[files]
extend-exclude = [
    "**/secret",
    "**/value",
    "**.rev",
    "**/facter-report.nix",
    "**/key.json",
    "pkgs/clan-cli/clan_lib/machines/test_suggestions.py",
    "Chapters/Zusammenfassung.tex",
]

[default.extend-words]
facter = "facter"
metalness = "metalness" # would be corrected to metallicity, not sure which one's preferred
hda = "hda" # snd_hda_intel
dynamicdns = "dynamicdns"
substituters = "substituters"

[default.extend-identifiers]
pn = "pn"

 62  example.bib
@@ -1,62 +0,0 @@
-@article{Reference1,
-  Abstract = {We have developed an enhanced Littrow configuration
-  extended cavity diode laser (ECDL) that can be tuned without
-  changing the direction of the output beam. The output of a
-  conventional Littrow ECDL is reflected from a plane mirror fixed
-  parallel to the tuning diffraction grating. Using a free-space
-  Michelson wavemeter to measure the laser wavelength, we can tune
-  the laser over a range greater than 10 nm without any alteration of
-  alignment.},
-  Author = {C. J. Hawthorn and K. P. Weber and R. E. Scholten},
-  Journal = {Review of Scientific Instruments},
-  Month = {12},
-  Number = {12},
-  Numpages = {3},
-  Pages = {4477--4479},
-  Title = {Littrow Configuration Tunable External Cavity Diode Laser
-  with Fixed Direction Output Beam},
-  Volume = {72},
-  Url = {http://link.aip.org/link/?RSI/72/4477/1},
-  Year = {2001}}
-
-@article{Reference3,
-  Abstract = {Operating a laser diode in an extended cavity which
-  provides frequency-selective feedback is a very effective method of
-  reducing the laser's linewidth and improving its tunability. We
-  have developed an extremely simple laser of this type, built from
-  inexpensive commercial components with only a few minor
-  modifications. A 780~nm laser built to this design has an output
-  power of 80~mW, a linewidth of 350~kHz, and it has been
-  continuously locked to a Doppler-free rubidium transition for several days.},
-  Author = {A. S. Arnold and J. S. Wilson and M. G. Boshier and J. Smith},
-  Journal = {Review of Scientific Instruments},
-  Month = {3},
-  Number = {3},
-  Numpages = {4},
-  Pages = {1236--1239},
-  Title = {A Simple Extended-Cavity Diode Laser},
-  Volume = {69},
-  Url = {http://link.aip.org/link/?RSI/69/1236/1},
-  Year = {1998}}
-
-@article{Reference2,
-  Abstract = {We present a review of the use of diode lasers in
-  atomic physics with an extensive list of references. We discuss the
-  relevant characteristics of diode lasers and explain how to
-  purchase and use them. We also review the various techniques that
-  have been used to control and narrow the spectral outputs of diode
-  lasers. Finally we present a number of examples illustrating the
-  use of diode lasers in atomic physics experiments. Review of
-  Scientific Instruments is copyrighted by The American Institute of Physics.},
-  Author = {Carl E. Wieman and Leo Hollberg},
-  Journal = {Review of Scientific Instruments},
-  Keywords = {Diode Laser},
-  Month = {1},
-  Number = {1},
-  Numpages = {20},
-  Pages = {1--20},
-  Title = {Using Diode Lasers for Atomic Physics},
-  Volume = {62},
-  Url = {http://link.aip.org/link/?RSI/62/1/1},
-  Year = {1991}}
-
@@ -49,7 +49,10 @@
       devShells.default = pkgs.mkShell {
         buildInputs = [
+          pkgs.nodejs
+          pkgs.vite
           texlive
+          pkgs.pandoc
           pkgs.inkscape
           pkgs.python3
         ];

 33  main.tex
@@ -232,20 +232,27 @@ and Management}} % Your department's name and URL, this is used in
 \begin{abstract}
 \addchaptertocentry{\abstractname} % Add the abstract to the table of contents
 
-This thesis benchmarks peer-to-peer mesh VPNs using a reproducible,
-Nix-based framework built with a deployment system called Clan. We
-evaluate ten VPN implementations; including Tailscale (via
-Headscale), Hyprspace, Nebula, Tinc, and ZeroTier; under four
-network impairment profiles varying packet loss, reordering,
-latency, and jitter, yielding over 300 unique measurements across
-seven benchmarks.
+This thesis evaluates ten peer-to-peer mesh VPN implementations
+under controlled network conditions using a reproducible, Nix-based
+benchmarking framework built on a deployment system called Clan.
+The implementations range from kernel-level protocols (WireGuard,
+used as a reference baseline) to userspace overlays (Tinc,
+Yggdrasil, Nebula, Hyprspace, and others). We test each against
+four impairment profiles that vary packet loss, reordering, latency,
+and jitter, producing over 300 measurements across seven benchmarks
+from raw TCP and UDP throughput to video streaming and
+application-level downloads.
 
-Our analysis reveals that Tailscale outperforms the Linux kernel's
-default networking stack under degraded conditions, owing to its
-userspace IP stack with tuned parameters. We confirm this by
-re-running benchmarks with matching kernel-side tuning and observe
-comparable throughput gains. The investigation also uncovered a
-critical security vulnerability in one of the evaluated VPNs.
+A central finding is that no single metric captures VPN performance:
+the rankings shift depending on whether one measures throughput,
+latency, retransmit behavior, or application-level transfer time.
+Under network impairment, Tailscale (via Headscale) outperforms the
+Linux kernel's default networking stack, an anomaly we trace to its
+userspace IP stack's tuned congestion-control and buffer parameters.
+Re-running the internal baseline with matching kernel-side tuning
+closes the gap, confirming the explanation. The accompanying source
+code analysis uncovered a critical security vulnerability in one of
+the evaluated implementations.
 
 \end{abstract}
 
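The impairment profiles named in the revised abstract (packet loss, reordering, latency, jitter) are the kind of degradation typically injected with Linux's `tc netem` queueing discipline. As a hedged sketch only: the helper below composes such a command for one hypothetical profile; the interface name and parameter values are illustrative assumptions, not the actual profiles used in the thesis.

```python
def netem_command(iface, loss_pct, delay_ms, jitter_ms, reorder_pct):
    """Return the argv for a `tc qdisc add` call applying one
    impairment profile (loss, delay with jitter, reordering).
    All values are illustrative, not the thesis profiles."""
    return [
        "tc", "qdisc", "add", "dev", iface, "root", "netem",
        "loss", f"{loss_pct}%",                      # random packet loss
        "delay", f"{delay_ms}ms", f"{jitter_ms}ms",  # base latency +/- jitter
        "reorder", f"{reorder_pct}%",                # fraction sent early
    ]

# Build (not run) the command for a hypothetical mild-impairment profile.
cmd = netem_command("eth0", loss_pct=1, delay_ms=50, jitter_ms=10, reorder_pct=2)
print(" ".join(cmd))
```

Running the returned command requires root; a benchmarking harness would pass it to `subprocess.run` on each node, and delete the qdisc (`tc qdisc del dev eth0 root`) between profiles.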
|||||||
@@ -3,7 +3,9 @@
|
|||||||
imports = [ inputs.treefmt-nix.flakeModule ];
|
imports = [ inputs.treefmt-nix.flakeModule ];
|
||||||
|
|
||||||
perSystem =
|
perSystem =
|
||||||
{ ... }:
|
{
|
||||||
|
...
|
||||||
|
}:
|
||||||
{
|
{
|
||||||
treefmt = {
|
treefmt = {
|
||||||
# Used to find the project root
|
# Used to find the project root
|
||||||
@@ -17,6 +19,7 @@
|
|||||||
"AI_Data/**"
|
"AI_Data/**"
|
||||||
"Figures/**"
|
"Figures/**"
|
||||||
];
|
];
|
||||||
|
|
||||||
programs.typos = {
|
programs.typos = {
|
||||||
enable = true;
|
enable = true;
|
||||||
threads = 4;
|
threads = 4;
|
||||||
|
|||||||