added motivation
This commit is contained in:

0    Chapters/Background.nix    Normal file
194    Chapters/Introduction.tex    Executable file → Normal file
@@ -1,165 +1,51 @@
% Chapter Template

\chapter{Introduction} % Main chapter title

\label{Introduction} % For referencing this chapter elsewhere, use \ref{Introduction}

%----------------------------------------------------------------------------------------
% SECTION 1
%----------------------------------------------------------------------------------------

This chapter introduces the Clan project, articulates its fundamental
objectives, outlines the key components, and examines the driving
factors motivating its development.

\section{Motivation}

Peer-to-peer (P2P) technologies and decentralization have undergone
significant growth and evolution in recent years. These technologies
form the backbone of various systems, including P2P Edge
Computing—particularly in the context of the Internet of Things
(IoT)—Content Delivery Networks (CDNs), and Blockchain platforms such
as Ethereum. P2P architectures enable more democratic,
censorship-resistant, and fault-tolerant systems by reducing reliance
on single points of failure \cite{shukla_towards_2021}.

However, to fully realize these benefits, a P2P system must deploy
its nodes across a diverse set of entities. Greater diversity in
hosting increases the network’s resilience to censorship and systemic
failures.

Despite this, recent trends in Ethereum node hosting reveal a
significant reliance on centralized cloud providers. Notably, Amazon,
Hetzner, and OVH collectively host 70\% of all Ethereum nodes, as
illustrated in Figure \ref{fig:ethernodes_hosting}.

\begin{figure}[H]
	\centering
	\includegraphics[width=1\textwidth]{Figures/ethernodes_hosting.png}
	\caption{Distribution of Ethereum nodes hosted by various providers
	\cite{noauthor_isps_nodate}}
	\label{fig:ethernodes_hosting}
\end{figure}

The centralized nature of these providers and their domicile within a
small set of regulatory jurisdictions introduces vulnerability. Such
a configuration allows for possible governmental intervention, which
could lead to network shutdowns or manipulation by leveraging control
over these cloud services.

The reliance on cloud-based solutions is driven by their ease of use,
reliability, and the significant technical barriers associated with
self-hosting. These barriers include the need for technical expertise
and the often unreliable nature of personally managed hosting.
Recognizing this gap, the Clan project is proposed to alleviate these
barriers, making the process of self-hosting as straightforward and
reliable as using a cloud provider. The goal is to democratize the
hosting of P2P nodes, enhancing the overall robustness and autonomy
of decentralized networks.

238    Chapters/Methodology.tex    Executable file
@@ -0,0 +1,238 @@
% Chapter Template

\chapter{Methodology} % Main chapter title

\label{Methodology} % For referencing this chapter elsewhere, use \ref{Methodology}

%----------------------------------------------------------------------------------------
% SECTION 1
%----------------------------------------------------------------------------------------

This chapter describes the methodology used to evaluate and analyze
the Clan framework. A summary of the logical flow of this research is
depicted in Figure \ref{fig:clan_thesis_argumentation_tree}.

\begin{figure}[H]
	\centering
	\includesvg[width=1\textwidth,
	keepaspectratio]{Figures/clan_thesis_argumentation_tree.drawio.svg}
	\caption{Argumentation Tree for the Clan Thesis}
	\label{fig:clan_thesis_argumentation_tree}
\end{figure}

The structure of this study adopts a multi-faceted approach,
addressing several interrelated challenges in enhancing the
reliability and manageability of \ac{P2P} networks. The primary
objective is to assess how the Clan framework effectively addresses
these challenges.

The research methodology consists of two main components:
\begin{enumerate}
	\item \textbf{Development of a Theoretical Model} \\
	A theoretical model of the Clan framework will be constructed.
	This includes a formal specification of the system's foundational
	axioms, outlining the principles and properties that guide its
	design. From these axioms, key theorems will be derived, along
	with their boundary conditions. The aim is to understand the
	mechanisms underpinning the framework and establish a basis for
	its evaluation.

	\item \textbf{Empirical Validation of the Theoretical Model} \\
	Practical experiments will be conducted to validate the
	predictions of the theoretical model. These experiments will
	evaluate how well the model aligns with observed performance in
	real-world settings. This step is crucial to identifying the
	model’s strengths and limitations.
\end{enumerate}

The methodology will particularly examine three core components of
the Clan framework:
\begin{itemize}
	\item \textbf{Clan Deployment System} \\
	The deployment system is the core of the Clan framework, enabling
	the configuration and management of distributed software
	components. It simplifies complex configurations through Python
	code, which abstracts the intricacies of the Nix language.
	Central to this system is the "inventory," a mergeable data
	structure designed to ensure consistent service configurations
	across nodes without conflicts. This component will be analyzed
	for its design, functionality, efficiency, scalability, and fault
	resilience.

	\item \textbf{Overlay Networks / Mesh VPNs} \\
	Overlay networks, also known as "Mesh VPNs," are critical for
	secure communication in Clan’s \ac{P2P} deployment. The study
	will evaluate their performance in terms of security,
	scalability, and resilience to network disruptions. Specifically,
	the assessment will include how well these networks handle
	traffic in environments where no device has a public IP address,
	as well as the impact of node failures on overall connectivity.
	The analysis will focus on:
	\begin{itemize}
		\item \textbf{ZeroTier}: A globally distributed "Ethernet Switch".
		\item \textbf{Mycelium}: An end-to-end encrypted IPv6 overlay network.
		\item \textbf{Hyprspace}: A lightweight VPN leveraging IPFS and libp2p.
	\end{itemize}

	\item \textbf{Data Mesher} \\
	The Data Mesher is responsible for data synchronization across
	nodes, ensuring eventual consistency in Clan’s decentralized
	network. This component will be evaluated for synchronization
	speed, fault tolerance, and conflict resolution mechanisms.
	Additionally, it will be analyzed for its resilience in scenarios
	involving malicious nodes, measuring how effectively it prevents
	and mitigates manipulation or integrity violations during data
	replication and distribution.
\end{itemize}
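One way such a mergeable inventory could behave is sketched below; the function name \texttt{merge\_inventory} and the hard-conflict rule are illustrative assumptions, not the actual Clan API:

```python
def merge_inventory(a: dict, b: dict, path: str = "") -> dict:
    # Hypothetical merge rule: nested sections are merged recursively,
    # and disagreeing leaf values are surfaced as a hard conflict.
    out = dict(a)
    for key, val in b.items():
        here = f"{path}/{key}"
        if key not in out:
            out[key] = val
        elif isinstance(out[key], dict) and isinstance(val, dict):
            out[key] = merge_inventory(out[key], val, here)
        elif out[key] != val:
            raise ValueError(f"conflicting values at {here}")
    return out

# Two machines contribute independent fragments; the merge is
# order-independent whenever no leaf values conflict.
frag_a = {"services": {"zerotier": {"network": "clan0"}}}
frag_b = {"services": {"sshd": {"port": 22}}}
assert merge_inventory(frag_a, frag_b) == merge_inventory(frag_b, frag_a)
```

Under this assumed rule, configuration fragments from different nodes commute, which is what "consistent service configurations across nodes without conflicts" would require.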

\section{Related Work}

The Clan framework operates within the realm of software deployment
and peer-to-peer networking, necessitating a deep understanding of
existing methodologies in these areas to tackle contemporary
challenges. This section discusses related work encompassing system
deployment, peer data management, and low-maintenance structured
peer-to-peer overlays, which inform the development and positioning
of the Clan framework.

\subsection{Nix: A Safe and Policy-Free System for Software Deployment}

Nix addresses significant issues in software deployment by employing
cryptographic hashes to ensure unique paths for component instances
\cite{dolstra_nix_2004}. The system is distinguished by features such
as concurrent installation of multiple versions and variants, atomic
upgrades, and safe garbage collection. These capabilities lead to a
flexible deployment system that harmonizes source and binary
deployments. Nix conceptualizes deployment without imposing rigid
policies, thereby offering adaptable strategies for component
management. This contrasts with many prevailing systems that are
constrained by policy-specific designs, making Nix an easily
extensible, safe, and versatile deployment solution for configuration
files and software.

As Clan makes extensive use of Nix for deployment, understanding the
foundations and principles of Nix is crucial for evaluating its inner
workings.
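The path-hashing idea can be made concrete with a small sketch. This is not Nix's actual derivation hashing (Nix hashes a serialized build description), only a hypothetical analogue showing why distinct variants never collide and why identical inputs always map to the same path:

```python
import hashlib

def store_path(name: str, version: str, inputs: dict) -> str:
    # Hash all build inputs; any change to them changes the digest,
    # and therefore the path (illustrative, not Nix's real scheme).
    payload = repr((name, version, sorted(inputs.items())))
    digest = hashlib.sha256(payload.encode()).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}-{version}"

a = store_path("openssl", "3.0.1", {"cc": "gcc-12"})
b = store_path("openssl", "3.0.1", {"cc": "gcc-13"})  # same version, different variant
assert a != b                                         # variants coexist side by side
assert a == store_path("openssl", "3.0.1", {"cc": "gcc-12"})  # deterministic
```

Because every variant gets its own path, an upgrade reduces to atomically switching a reference, and garbage collection can safely delete any path no root still references.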

\subsection{NixOS: A Purely Functional Linux Distribution}

NixOS extends the principles established by Nix, presenting a Linux
distribution that manages system configurations using purely
functional methods \cite{dolstra_nixos_2008}. This model ensures that
system configurations are reproducible and isolated from the stateful
interactions typical of imperative package management. Because NixOS
configurations are built by pure functions, they overcome the
challenges of easily rolling back changes, deploying multiple package
versions side by side, and achieving deterministic configuration
reproduction. The solution is particularly compelling in environments
that demand rigorous reproducibility and minimal configuration
drift—a valuable property for distributed networks.

Clan also leverages NixOS for system configuration and deployment,
making it essential to understand how NixOS's functional model works.

\subsection{Disnix: A Toolset for Distributed Deployment}

Disnix extends the Nix philosophy to the challenge of distributed
deployment, offering a toolset that enables system administrators and
developers to perform automatic deployment of service-oriented
systems across a network of machines \cite{van_der_burg_disnix_2014}.
Disnix leverages the features of Nix to manage complex
inter-dependencies, meaning dependencies that exist at the network
level rather than at the binary level. The overlap with the Clan
framework is evident in the shared focus on deployment; how the two
differ will be explored in the evaluation of Clan's deployment
system.

\subsection{State of the Art in Software Defined Networking}

The work by Bakhshi \cite{bakhshi_state_2017} surveys the
foundational principles and recent developments in Software Defined
Networking (SDN). It describes SDN as a paradigm that separates the
control plane from the data plane, enabling centralized, programmable
control over network behavior. The paper focuses on the architectural
components of SDN, including the three-layer abstraction model—the
application layer, control layer, and data layer—and highlights the
role of SDN controllers such as OpenDaylight, Floodlight, and Ryu.

A key contribution of the paper is its identification of challenges
and open research questions in SDN. These include issues related to
scalability, fault tolerance, and the security risks introduced by
centralized control.

This work is relevant to evaluating Clan’s role as a Software Defined
Network deployment tool and as a comparison point against the state
of the art.

\subsection{Low Maintenance Peer-to-Peer Overlays}

Structured Peer-to-Peer (P2P) overlay networks offer scalability and
efficiency but often require significant maintenance to handle
challenges such as peer churn and mismatched logical and physical
topologies. Shukla et al. propose a novel approach to designing
Distributed Hash Table (DHT)-based P2P overlays by integrating
Software Defined Networks (SDNs) to dynamically adjust
application-specific network policies and rules
\cite{shukla_towards_2021}. This method reduces maintenance overhead
by aligning the overlay topology with the underlying physical
network, thus improving performance and reducing communication costs.

The relevance of this work to Clan lies in its treatment of the
operational complexity of managing P2P networks.
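The churn problem targeted here is visible even in a minimal consistent-hashing sketch of a DHT (a generic illustration, not the paper's SDN-integrated design): when a peer leaves, only the keys it owned are reassigned, so maintenance work stays proportional to the change rather than to the network size.

```python
import bisect
import hashlib

def h(s: str) -> int:
    # Position on the ring: SHA-1 digest interpreted as an integer.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hashing ring: each key is owned by the
    first peer clockwise from the key's position."""
    def __init__(self, peers):
        self.ring = sorted((h(p), p) for p in peers)

    def owner(self, key: str) -> str:
        i = bisect.bisect(self.ring, (h(key), ""))
        return self.ring[i % len(self.ring)][1]

keys = [f"record-{i}" for i in range(100)]
full = Ring(["peer-1", "peer-2", "peer-3"])
before = {k: full.owner(k) for k in keys}

# peer-2 churns out of the overlay: only the keys it owned move.
shrunk = Ring(["peer-1", "peer-3"])
moved = [k for k in keys if shrunk.owner(k) != before[k]]
assert all(before[k] == "peer-2" for k in moved)
```

Real DHTs add routing tables and replication on top of this ownership rule; the sketch only shows why churn is locally contained.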

\subsection{Full-Mesh VPN Performance Evaluation}

The work by Kjorveziroski et al. \cite{kjorveziroski_full-mesh_2024}
provides a comprehensive evaluation of full-mesh VPN solutions,
specifically focusing on their use as underlay networks for
distributed systems such as Kubernetes clusters. Their benchmarks
analyze the performance of VPNs with built-in NAT traversal
capabilities, including ZeroTier, emphasizing throughput, reliability
under packet loss, and behavior when relay mechanisms are used. For
the Clan framework, these insights are particularly relevant in
assessing the performance and scalability of its Overlay Networks
component. By benchmarking ZeroTier alongside its peers, the paper
offers an established reference point for evaluating how Mesh VPN
solutions like ZeroTier perform under conditions similar to those of
the peer-to-peer systems managed by Clan.

\subsection{AMC: Towards Trustworthy and Explorable CRDT Applications}

Jeffery and Mortier \cite{jeffery_amc_2023} present the Automerge
Model Checker (AMC), a tool aimed at verifying and dynamically
exploring the correctness of applications built on Conflict-Free
Replicated Data Types (CRDTs). Their work addresses critical
challenges associated with implementing and optimizing
operation-based (op-based) CRDTs, particularly emphasizing how these
optimizations can inadvertently introduce subtle bugs in distributed
systems despite rigorous testing methods like fuzz testing. As part
of their contributions, they implemented the "Automerge" library in
Rust, an op-based CRDT framework that exposes a JSON-like API and
supports local-first and asynchronous collaborative operations.

This paper is particularly relevant to the development and evaluation
of the Data Mesher component of the Clan framework, which utilizes
state-based (or value-based) CRDTs for synchronizing distributed data
across peer-to-peer nodes. While Automerge addresses issues pertinent
to op-based CRDTs, the discussion on verification techniques, edge
case handling, and model-checking methodologies provides
cross-cutting insights into the complexities of op-based CRDTs and is
a strong argument for using simpler state-based CRDTs.
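The op-based versus state-based distinction can be illustrated with a minimal state-based CRDT, a grow-only counter (a generic sketch, not the Data Mesher's actual data model). Its correctness rests only on the merge function being commutative, associative, and idempotent, which is what makes state-based designs comparatively simple to verify:

```python
def merge(s1: dict, s2: dict) -> dict:
    # Join of two G-counter states: per-node element-wise maximum.
    return {n: max(s1.get(n, 0), s2.get(n, 0)) for n in s1.keys() | s2.keys()}

def value(state: dict) -> int:
    return sum(state.values())

a = {"node-a": 3}               # node-a has incremented 3 times
b = {"node-a": 1, "node-b": 2}  # a stale view of node-a plus node-b's increments

assert merge(a, b) == merge(b, a) == {"node-a": 3, "node-b": 2}  # commutative
assert merge(merge(a, b), b) == merge(a, b)                      # idempotent
assert value(merge(a, b)) == 5
```

An op-based CRDT instead ships operations and must guarantee exactly-once, causally ordered delivery, which is precisely the machinery AMC exists to model-check.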

\subsection{Keep CALM and CRDT On}

The work by Laddad et al. \cite{laddad_keep_2022} complements and
expands upon concepts presented in the AMC paper. By revisiting the
foundations of CRDTs, the authors address limitations related to
reliance on eventual consistency and propose techniques to
distinguish between safe and unsafe queries using monotonicity
results derived from the CALM Theorem. This inquiry is highly
relevant for the Data Mesher component of Clan, as it delves into
operational and observable consistency guarantees that can optimize
both efficiency and safety in distributed query execution.
Specifically, the insights on query models and coordination-free
approaches advance the understanding of how CRDT-based systems, like
the Data Mesher, manage distributed state effectively without
compromising safety guarantees.
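The safe/unsafe query distinction can be sketched with a state-based counter (a generic illustration of the monotonicity idea, not Laddad et al.'s formalism): a monotone threshold query, once true, stays true under further merges and can therefore be answered without coordination, while an exact-equality query can be invalidated by a later merge.

```python
def merge(s1: dict, s2: dict) -> dict:
    # State-based grow-only counter join (per-node element-wise maximum).
    return {n: max(s1.get(n, 0), s2.get(n, 0)) for n in s1.keys() | s2.keys()}

def total(state: dict) -> int:
    return sum(state.values())

local = {"node-a": 3}
remote = {"node-b": 4}

# Monotone query: "have at least 5 events been observed?"
# False locally, true after the merge, and no further merge can ever
# flip it back to false -- safe to answer coordination-free.
reached_5_locally = total(local) >= 5
reached_5_merged = total(merge(local, remote)) >= 5
assert not reached_5_locally and reached_5_merged

# Non-monotone query: "have exactly 3 events been observed?"
# True locally but falsified by the merge; answering it safely
# would require coordination between replicas.
assert total(local) == 3
assert total(merge(local, remote)) != 3
```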
4    Figures/clan_thesis_argumentation_tree_2.drawio.svg    Normal file
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 392 KiB

BIN    Figures/ethernodes_hosting.png    Normal file
Binary file not shown.
After Width: | Height: | Size: 265 KiB
@@ -348,7 +348,8 @@ KOMA-script documentation for details.}]{fancyhdr}
% \begin{longtable}{#1}
% }{%
% \end{longtable}
% \addtocounter{table}{-1}% Don't count this table as one of the
% document tables
% \ifbool{nolistspace}{\endgroup}{}
% }

37    main.tex
@@ -215,22 +215,24 @@ and Management}} % Your department's name and URL, this is used in

\begin{abstract}
\addchaptertocentry{\abstractname} % Add the abstract to the table of contents
This thesis investigates Clan, an open-source framework for machine
configuration management in peer-to-peer networks. The research
focuses on its capabilities as a unified source of truth for
managing distributed systems. Underpinning this analysis are key
technologies: ZeroTier, Mycelium, and the "Data Mesher," a
conflict-free replicated database supporting decentralized operations.

The study examines three main aspects critical to evaluating Clan's
efficacy: fault tolerance, scalability, and security. Fault
tolerance is analyzed in the context of network disruptions and
node failures. Scalability is evaluated through theoretical models
and real-world implementations to measure system performance under
varying loads. Security is tested through targeted attack scenarios
to assess the framework's resilience to potential threats.

By comprehensively addressing these three aspects, this thesis aims
to provide a detailed evaluation of Clan and its supporting
technologies, particularly in the management of distributed
peer-to-peer systems.
\end{abstract}

@@ -331,8 +333,9 @@ and Management}} % Your department's name and URL, this is used in

% Include the chapters of the thesis as separate files from the Chapters folder
% Uncomment the lines as you write the chapters

\include{Chapters/Introduction}
\include{Chapters/Methodology}

%\include{Chapters/Chapter1}
%\include{Chapters/Chapter2}
%\include{Chapters/Chapter3}

@@ -74,40 +74,6 @@
    {Attachment:/home/lhebendanz/Zotero/storage/WCI9PCTE/inet_nohop_decen_hashtable.pdf:application/pdf},
}

@inproceedings{tiesel_multi-homed_2016,
    location = {New York, {NY}, {USA}},
    title = {Multi-Homed on a Single Link: Using Multiple {IPv}6 Access Networks},
    isbn = {978-1-4503-4443-2},
    url = {https://doi.org/10.1145/2959424.2959434},
    doi = {10.1145/2959424.2959434},
    series = {{ANRW} '16},
    shorttitle = {Multi-Homed on a Single Link},
    abstract = {Small companies and branch offices often have bandwidth demands and redundancy needs that go beyond the commercially available Internet access products in their price range. One way to overcome this problem is to bundle existing Internet access products. In effect, they become multi-homed, often without running {BGP} or even getting an {AS} number. Currently, these users rely on proprietary L4 load balancing routers, proprietary multi-channel {VPN} routers, or sometimes {LISP}, to bundle their "cheaper" Internet access network links, e.g., via (v){DSL}, {DOCSIS}, {HSDPA}, or {LTE}. While most products claim transport-layer transparency, they add complexity via middleboxes, map each {TCP} connection to a single interface, and have limited application support. Thus, in this paper we propose an alternative: auto-configuration of multiple {IPv}6 prefixes on a single L2 link. We discuss how this enables applications to take advantage of combining multiple access networks with minimal system changes.},
    pages = {16--18},
    booktitle = {Proceedings of the 2016 Applied Networking Research Workshop},
    publisher = {Association for Computing Machinery},
    author = {Tiesel, Philipp S. and May, Bernd and Feldmann, Anja},
    urldate = {2024-09-23},
    date = {2016-07},
    file = {Attachment:/home/lhebendanz/Zotero/storage/W44Z4XEE/inet_ipv6_vpn.pdf:application/pdf},
}

@article{bakhshi_state_2017,
    title = {State of the Art and Recent Research Advances in Software Defined Networking},
@@ -152,23 +118,6 @@
    Art and Recent Research Advances in Software.pdf:application/pdf},
}

@article{han_distributed_2015,
    title = {Distributed hybrid P2P networking systems},
    volume = {8},
    issn = {1936-6450},
    url = {https://doi.org/10.1007/s12083-014-0298-7},
    doi = {10.1007/s12083-014-0298-7},
    pages = {555--556},
    number = {4},
    journaltitle = {Peer-to-Peer Netw. Appl.},
    author = {Han, Jungsoo},
    urldate = {2024-11-19},
    date = {2015-07-01},
    langid = {english},
    file = {Full Text PDF:/home/lhebendanz/Zotero/storage/XVFPW4CM/Han - 2015 - Distributed hybrid P2P networking systems.pdf:application/pdf},
}

@online{noauthor_sci-hub_nodate,
    title = {Sci-Hub},
    url = {https://sci-hub.usualwant.com/},
@@ -199,69 +148,6 @@
|
||||
peer-to-peer overlays.pdf:application/pdf},
|
||||
}

@article{naik_next_2020,
	title = {Next level peer-to-peer overlay networks under high churns: a survey},
	volume = {13},
	issn = {1936-6442, 1936-6450},
	url = {http://link.springer.com/10.1007/s12083-019-00839-8},
	doi = {10.1007/s12083-019-00839-8},
	shorttitle = {Next level peer-to-peer overlay networks under high churns},
	pages = {905--931},
	number = {3},
	journaltitle = {Peer-to-Peer Netw. Appl.},
	author = {Naik, Ashika R. and Keshavamurthy, Bettahally N.},
	urldate = {2024-11-19},
	date = {2020-05},
	langid = {english},
	file = {PDF:/home/lhebendanz/Zotero/storage/PWMXVDES/Naik and Keshavamurthy - 2020 - Next level peer-to-peer overlay networks under high churns a survey.pdf:application/pdf},
}

@inproceedings{guilloteau_painless_2022,
	location = {Heidelberg, Germany},
	title = {Painless Transposition of Reproducible Distributed Environments with {NixOS} Compose},
	rights = {https://doi.org/10.15223/policy-029},
	isbn = {978-1-66549-856-2},
	url = {https://ieeexplore.ieee.org/document/9912715/},
	doi = {10.1109/CLUSTER51413.2022.00051},
	abstract = {Development of environments for distributed systems is a tedious and time-consuming iterative process. The reproducibility of such environments is a crucial factor for rigorous scientific contributions. We think that being able to smoothly test environments both locally and on a target distributed platform makes development cycles faster and reduces the friction to adopt better experimental practices. To address this issue, this paper introduces the notion of environment transposition and implements it in {NixOS} Compose, a tool that generates reproducible distributed environments. It enables users to deploy their environments on virtualized (docker, {QEMU}) or physical (Grid’5000) platforms with the same unique description of the environment. We show that {NixOS} Compose enables to build reproducible environments without overhead by comparing it to state-of-the-art solutions for the generation of distributed environments ({EnOSlib} and Kameleon). {NixOS} Compose actually enables substantial performance improvements on image building time over Kameleon (up to 11x faster for initial builds and up to 19x faster when building a variation of an existing environment).},
	eventtitle = {2022 {IEEE} International Conference on Cluster Computing ({CLUSTER})},
	pages = {1--12},
	booktitle = {2022 {IEEE} International Conference on Cluster Computing ({CLUSTER})},
	publisher = {{IEEE}},
	author = {Guilloteau, Quentin and Bleuzen, Jonathan and Poquet, Millian and Richard, Olivier},
	urldate = {2024-11-24},
	date = {2022-09},
	langid = {english},
	file = {PDF:/home/lhebendanz/Zotero/storage/SEEITEJA/Guilloteau et al. - 2022 - Painless Transposition of Reproducible Distributed Environments with NixOS Compose.pdf:application/pdf},
}

@inproceedings{dolstra_nixos_2008,
	location = {New York, {NY}, {USA}},
	title = {{NixOS}: a purely functional Linux distribution},
@@ -298,37 +184,6 @@
	- 2010 - NixOS A Purely Functional Linux Distribution.pdf:application/pdf},
}

@article{tatarinov_piazza_2003,
	title = {The Piazza peer data management project},
	volume = {32},
	issn = {0163-5808},
	url = {https://doi.org/10.1145/945721.945732},
	doi = {10.1145/945721.945732},
	abstract = {A major problem in today's information-driven world is that sharing heterogeneous, semantically rich data is incredibly difficult. Piazza is a peer data management system that enables sharing heterogeneous data in a distributed and scalable way. Piazza assumes the participants to be interested in sharing data, and willing to define pairwise mappings between their schemas. Then, users formulate queries over their preferred schema, and a query answering system expands recursively any mappings relevant to the query, retrieving data from other peers. In this paper, we provide a brief overview of the Piazza project including our work on developing mapping languages and query reformulation algorithms, assisting the users in defining mappings, indexing, and enforcing access control over shared data.},
	pages = {47--52},
	number = {3},
	journaltitle = {{SIGMOD} Rec.},
	author = {Tatarinov, Igor and Ives, Zachary and Madhavan, Jayant and Halevy, Alon and Suciu, Dan and Dalvi, Nilesh and Dong, Xin (Luna) and Kadiyska, Yana and Miklau, Gerome and Mork, Peter},
	urldate = {2024-11-24},
	date = {2003-09-01},
	file = {PDF:/home/lhebendanz/Zotero/storage/MRK3XWJG/Tatarinov et al. - 2003 - The Piazza peer data management project.pdf:application/pdf},
}

@article{van_der_burg_disnix_2014,
	title = {Disnix: A toolset for distributed deployment},
	volume = {79},
@@ -367,40 +222,6 @@
	Snapshot:/home/lhebendanz/Zotero/storage/VHPTLVMW/S0167642312000639.html:text/html},
}

@inproceedings{dolstra_charon_2013,
	title = {Charon: Declarative provisioning and deployment},
	url = {https://ieeexplore.ieee.org/abstract/document/6607691},
	doi = {10.1109/RELENG.2013.6607691},
	shorttitle = {Charon},
	abstract = {We introduce Charon, a tool for automated provisioning and deployment of networks of machines from declarative specifications. Building upon {NixOS}, a Linux distribution with a purely functional configuration management model, Charon specifications completely describe the desired configuration of sets of “logical” machines, including all software packages and services that need to be present on those machines, as well as their desired “physical” characteristics. Given such specifications, Charon will provision cloud resources (such as Amazon {EC}2 instances) as required, build and deploy packages, and activate services. We argue why declarativity and integrated provisioning and configuration management are important properties, and describe our experience with Charon.},
	eventtitle = {2013 1st International Workshop on Release Engineering ({RELENG})},
	pages = {17--20},
	booktitle = {2013 1st International Workshop on Release Engineering ({RELENG})},
	author = {Dolstra, Eelco and Vermaas, Rob and Levy, Shea},
	urldate = {2024-11-24},
	date = {2013-05},
	keywords = {Databases, {IP} networks, Linux, Production, Servers, Software, Testing},
	file = {IEEE Xplore Abstract Record:/home/lhebendanz/Zotero/storage/LDFB982I/6607691.html:text/html;PDF:/home/lhebendanz/Zotero/storage/6VBUL8L5/Dolstra et al. - 2013 - Charon Declarative provisioning and deployment.pdf:application/pdf},
}

@article{laddad_keep_2022,
	title = {Keep {CALM} and {CRDT} On},
	volume = {16},
@@ -501,3 +322,11 @@
	- Nix A Safe and Policy-Free System for Software Deployment.pdf:application/pdf},
}

@online{noauthor_isps_nodate,
	title = {{ISPs} - ethernodes.org - The Ethereum Network \& Node Explorer},
	url = {https://ethernodes.org/networkType/Hosting},
	urldate = {2024-12-02},
	file = {ISPs - ethernodes.org - The Ethereum Network & Node Explorer:/home/lhebendanz/Zotero/storage/BH7E2FAL/Hosting.html:text/html},
}