
Third pass

Signed-off-by: Quentin Schulz <quentin.schulz@free-electrons.com>
Branch: master
Author: Quentin Schulz, 2 years ago
Commit: 682ed0f13a
4 changed files with 148 additions and 158 deletions
  1. tex/ci_tests.tex (+44 -46)
  2. tex/conclusion.tex (+14 -14)
  3. tex/presentation_free-electrons.tex (+12 -15)
  4. tex/toolchains.tex (+78 -83)

tex/ci_tests.tex (+44 -46)

@@ -9,7 +9,7 @@

\textbf{KernelCI} is a project started a few years ago that aims at compiling on
a per hour basis a lot of different upstream \textbf{Linux} kernel trees, then
-sending jobs \footnote{Jobs are the resource \textbf{LAVA} deals with. A more
+sending jobs\footnote{Jobs are the resource \textbf{LAVA} deals with. A more
detailed explanation can be found in \ref{ssub:the_jobs} at page
\pageref{ssub:the_jobs}.} using those kernels to the subscribed labs, before
aggregating the results on different summary pages, where it is easy to see if a
@@ -90,7 +90,7 @@ Since the tested software is an operating system, it needs to run on real
hardware, and thus, it differs from more usual CI that typically runs in some
container technology.

-Build last year, the farm at Free Electrons takes the form of a big cabinet,
+Built last year, the farm at Free Electrons takes the form of a big cabinet,
with eight drawers, capable of storing up to 50 boards. Alongside those devices,
USB hubs, switches, and ATX power supplies with their TCP-controlled relays can
be found in each and every drawer. For the main management, the farm also hosts a
@@ -145,7 +145,7 @@ interested in the original software architecture can still read this blog post:
\label{ssub:the_jobs}

The main resource \textbf{LAVA} has to deal with is the job. A job is defined by
-a \textbf{YAML} \footnote{\url{http://yaml.org/}} structure, describing multiple
+a \textbf{YAML}\footnote{\url{http://yaml.org/}} structure, describing multiple
sections:
\begin{itemize}
\item The \textbf{device-type}, which is the name of the device you want to
@@ -224,7 +224,7 @@ Electrons' one. It is thus split into two parts: a master, and a worker.
At the beginning, a basic and functional, but still vague, specification was
made, but it required a proof-of-concept to see how it would fit in final
production. It had quickly been named \textbf{CTT}, standing for \emph{Custom
-Test Tool} \footnote{You can find the sources at this address:
+Test Tool}\footnote{You can find the sources at this address:
\url{https://github.com/free-electrons/custom_tests_tool}}, and that is how the
software building the custom jobs will be designated till the end of this
report.
@@ -238,9 +238,9 @@ would have to maintain in the future.
\subsubsection{Understanding the depth of LAVA's jobs}
\label{ssub:understanding_the_lava_jobs}

-The first simple part has been about \textbf{LAVA}. Since \textbf{KernelCI}
-already provides everything (kernel, dtb and rootfs) needed to run a successful
-job in \textbf{LAVA}, the only part remaining was crafting and sending jobs.
+The first simple part was about \textbf{LAVA}. Since \textbf{KernelCI} already
+provides everything (kernel, dtb and rootfs) needed to run a successful job in
+\textbf{LAVA}, the only part remaining was crafting and sending jobs.

An easy and simple, yet flexible, solution was to use a template engine to
parametrize a generic job written once and for all.
@@ -250,20 +250,20 @@ readable and easy to write, the data structure itself required by \textbf{LAVA}
is a bit complex, and it is thus truly inconvenient to write tests by hand.

Once filled, the template would just have to be sent to \textbf{LAVA} through
-its XML-RPC \footnote{\url{https://en.wikipedia.org/wiki/XML-RPC}} API to create
+its XML-RPC\footnote{\url{https://en.wikipedia.org/wiki/XML-RPC}} API to create
a job.

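To make this concrete, here is a minimal Python sketch of the workflow, not the actual \textbf{CTT} code: the job template is reduced to a small fragment, and the server URL, token and artifact location are placeholders. \verb$scheduler.submit_job$ is the XML-RPC method \textbf{LAVA} exposes for job submission.

# Minimal sketch of the template-plus-XML-RPC flow described above; the
# template is a truncated job fragment and the credentials are placeholders.
import xmlrpc.client

from jinja2 import Template

JOB_TEMPLATE = Template("""\
device_type: {{ device_type }}
job_name: {{ job_name }}
actions:
- deploy:
    to: tftp
    kernel:
      url: {{ kernel_url }}
""")

def submit_job(server_url, device_type, job_name, kernel_url):
    # Fill the generic job, written once and for all, with per-board values.
    job = JOB_TEMPLATE.render(device_type=device_type, job_name=job_name,
                              kernel_url=kernel_url)
    # scheduler.submit_job() is LAVA's XML-RPC entry point for new jobs.
    return xmlrpc.client.ServerProxy(server_url).scheduler.submit_job(job)

job_id = submit_job("http://user:token@lava.example.com/RPC2",
                    "beaglebone-black", "custom-boot-test",
                    "http://storage.example.com/zImage")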
-Knowing what to put in that template has been one of the most interesting moment
-of this part, since it was like discovering a new programming language. There
-are always new features to discover, and new mechanics for using them, and
-finally to make \textbf{LAVA} do exactly what you want. It is also during this
-period that most of the migration to \textbf{LAVA v2} has been prepared, meaning
-that the configuration of the different levels of \textbf{LAVA} has been altered.
+Knowing what to put in that template was one of the most interesting moments of
+this part, since it was like discovering a new programming language. There are
+always new features to discover, and new mechanisms for using them, and finally
+to make \textbf{LAVA} do exactly what you want. It is also during this period
+that most of the migration to \textbf{LAVA v2} was prepared, meaning that the
+configuration of the different levels of \textbf{LAVA} was altered.

-This often required to discuss with the \textbf{LAVA} community, on
+It was often necessary to discuss with the \textbf{LAVA} community, on
\url{irc://irc.freenode.net#linaro-lava}, to get clarification when the
documentation happened to be incomplete, or when \textbf{LAVA} needed to be
-improved \footnote{See these patches for example: \\
+improved\footnote{See these patches for example: \\
\url{https://git.linaro.org/lava/lava-dispatcher.git/commit/?id=8df17dd7355cd82f37e1ef22a6c9d88ede44f650} \\
\url{https://git.linaro.org/lava/lava-dispatcher.git/commit/?id=3bfdcdb672f1a15da96bbb221a26847dd6bf2865} \\
Also don't hesitate to run \verb$git log --author "Florent Jacquet"$ in the
@@ -279,17 +279,17 @@ artifacts, such as the user-built kernel. This would be useful for the first
manual mode of the tool, when a user would launch some jobs from his
workstation, using a kernel built from one of his working trees.

-\textbf{LAVA} allowing the use of files local to the dispatcher, it has been a
+Since \textbf{LAVA} allows the use of files local to the dispatcher, it would be a
really convenient solution to provide the artifacts without setting up some
\textbf{FTP} server or other complicated means of serving files.

\textbf{SSH}, with the \verb$scp$ command, allows efficient and reliable file transfers
between two machines, and since the engineers have an easy access to the
-dispatcher using one of Free Electrons' VPNs \footnote{Virtual Private Network
+dispatcher using one of Free Electrons' VPNs\footnote{Virtual Private Network
(\url{https://en.wikipedia.org/wiki/Virtual\_private\_network})}, it would be
easy to give them permissions to send files.

-With \textbf{Python}, the \textbf{paramiko} \footnote{\url{http://www.paramiko.org/}}
+With \textbf{Python}, the \textbf{paramiko}\footnote{\url{http://www.paramiko.org/}}
library, allowing a native use of \textbf{SSH}, makes the choice of that
protocol even more comfortable.

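A minimal sketch of such a transfer with \textbf{paramiko}; the dispatcher hostname, user and paths are hypothetical.

# Sketch of pushing a locally built kernel to the dispatcher over SSH,
# assuming VPN access; hostname, user and paths are hypothetical.
import paramiko

def push_artifact(local_path, remote_path,
                  host="dispatcher.example.com", user="ctt"):
    client = paramiko.SSHClient()
    # In production the dispatcher's key would already be in known_hosts.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)
    sftp = client.open_sftp()
    try:
        # Equivalent to: scp local_path user@host:remote_path
        sftp.put(local_path, remote_path)
    finally:
        sftp.close()
        client.close()

push_artifact("arch/arm/boot/zImage", "/var/lib/lava/artifacts/zImage")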
@@ -305,10 +305,10 @@ allowing to retrieve their latest builds.
The most difficult part was then to make sure that the crawler would have enough
information about the boards to fetch their specific artifacts, while trying to
avoid having a very big file storing all possible data about the boards.
-Once done, getting the artifacts would be only a matter of crafting the right
+Once done, getting the artifacts would only be a matter of crafting the right
URL.

-This ended up with a simple \textbf{JSON} \footnote{\url{http://json.org/}}
+This ended up with a simple \textbf{JSON}\footnote{\url{http://json.org/}}
file, storing the list of the boards, each entry holding four strings:
\begin{itemize}
\item \textbf{arch}, the architecture of the board, to guess which kernel to
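As an illustration of how the tool could consume that board file, here is a short sketch; only the \textbf{arch} key appears in the excerpt above, so the \textbf{dtb} key and the URL layout are assumptions.

# Sketch of crafting artifact URLs from the JSON board file. Only "arch"
# is named above; the "dtb" key and the URL layout are assumptions.
import json

BASE_URL = "http://storage.example.com/kernels"  # hypothetical storage root

def artifact_urls(boards_file):
    with open(boards_file) as f:
        boards = json.load(f)
    for name, board in boards.items():
        # Guess which kernel image to fetch from the architecture.
        image = "bzImage" if board["arch"] == "x86" else "zImage"
        yield (name,
               "%s/%s/%s" % (BASE_URL, board["arch"], image),
               "%s/%s" % (BASE_URL, board["dtb"]))

for name, kernel_url, dtb_url in artifact_urls("boards.json"):
    print(name, kernel_url, dtb_url)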
@@ -341,7 +341,7 @@ userland shell.

Before writing more complex tests, which would require some development time,
a simple \verb$echo "Hello world!"$ did the job. This allowed a lot
-of test, checking all possible solutions, and finally define an architecture
+of testing, checking all possible solutions, and finally defining an architecture
that would be both simple and functional enough for the custom tests' needs.

\subsubsection{Integrating custom tools in the root file system}
@@ -354,20 +354,19 @@ rootfs should be compiled for each ARM flavour, unlike the extremely generic one
built by \textbf{KernelCI}.

An easy and flexible way of building custom root filesystems is to use
-\textbf{Buildroot} \footnote{\url{https://buildroot.org/}}. This led to some
-simple glue scripts
-\footnote{\url{https://github.com/free-electrons/buildroot-ci}} building the few
-configurations requested by the farm, which are mainly including \emph{iperf}
-\footnote{\url{https://en.wikipedia.org/wiki/Iperf} and \url{https://iperf.fr/}}
-and a full \emph{ping} \footnote{One that includes the \verb$-f$ option, for
-ping floods.} version for network stressing, and \emph{Bonnie++}
-\footnote{\url{https://en.wikipedia.org/wiki/Bonnie++} and
+\textbf{Buildroot}\footnote{\url{https://buildroot.org/}}. This led to some
+simple glue scripts\footnote{\url{https://github.com/free-electrons/buildroot-ci}}
+building the few configurations requested by the farm, which mainly include
+\emph{iperf}\footnote{\url{https://en.wikipedia.org/wiki/Iperf}
+and \url{https://iperf.fr/}} and a full \emph{ping}\footnote{One that includes
+the \verb$-f$ option, for ping floods.} version for network stressing, and
+\emph{Bonnie++}\footnote{\url{https://en.wikipedia.org/wiki/Bonnie++} and
\url{http://www.coker.com.au/bonnie++/}} for filesystem performance, over a
-classic Busybox \footnote{\url{https://en.wikipedia.org/wiki/BusyBox} and
+classic Busybox\footnote{\url{https://en.wikipedia.org/wiki/BusyBox} and
\url{https://busybox.net/}} that provides the rest of the system.

-As my first contact with \textbf{Buildroot}, this was a quick but interesting part
-that made me discover the power of build systems in the embedded world.
+As my first experience with \textbf{Buildroot}, this was a quick but interesting
+part that made me discover the power of build systems in the embedded world.

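The glue can stay as small as a loop over \textbf{Buildroot} configurations; a sketch follows, with hypothetical defconfig names, relying on \textbf{Buildroot}'s standard \verb$make O=...$ out-of-tree builds.

# Sketch of the buildroot-ci style glue: build each requested rootfs
# configuration in its own output directory. The defconfig names are
# hypothetical; `make O=...` is Buildroot's standard out-of-tree build.
import subprocess

CONFIGS = ["armv5_defconfig", "armv7_defconfig", "aarch64_defconfig"]

for config in CONFIGS:
    outdir = "output-%s" % config
    # Expand the defconfig into a full .config, then build the rootfs.
    subprocess.run(["make", "O=%s" % outdir, config], check=True)
    subprocess.run(["make", "O=%s" % outdir], check=True)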
\subsubsection{The road to complex jobs}
\label{ssub:the_road_to_complex_jobs}
@@ -378,18 +377,17 @@ to be written.

As Busybox provides only \emph{Ash} as its default shell, the scripts needed to
be compatible with this software, and thus could not take advantage of some Bash
-features. This turned out to be quite an exercise, since most of the OS in 2017
+features. This turned out to be quite an exercise, since most of the OSs in 2017
provide the latter by default, and the differences may in some cases cause
headaches when finding workarounds for complex operations.

The other most interesting part was the development of the first
-\emph{Multinode job}
-\footnote{\url{https://validation.linaro.org/static/docs/v2/multinodeapi.html}}.
+\emph{Multinode job}\footnote{\url{https://validation.linaro.org/static/docs/v2/multinodeapi.html}}.
This is the \textbf{LAVA} term to describe jobs that require multiple devices,
such as some network-related jobs. Since the boards need to interact, they need
to be synchronized, and \textbf{LAVA} provides some tools in the runtime
environment to allow data exchanges between the devices, but as with classic
-threads or processes, this can quickly leads to some race conditions, deadlocks,
+threads or processes, this can quickly lead to some race conditions, deadlocks,
or other fancy concurrency problems.

Once all those problems were addressed, with the network tests running, a little
@@ -422,11 +420,11 @@ get the results they are interested in.
\label{ssub:goal}

The next and last step toward fully customized CI tests was building custom
-kernels. Just like \textbf{KernelCI} does every hour, the goal is to watch a
-list of kernel trees, pull them, then build them with specific configuration,
+kernels. Just like \textbf{KernelCI} does every hour, the goal is to monitor a
+list of kernel trees, pull them, then build them with specific configurations,
and store the artifacts online, so that \textbf{LAVA} could easily use them.

-Custom kernels come in really handy in two cases. When the engineers would like
+Custom kernels really come in handy in two cases. When the engineers would like
to follow a specific tree they work on, but this tree is not close enough to
mainline and \textbf{KernelCI} does not track it, Free Electrons' builder would
be in charge of it. The other useful case is when a test requires custom kernel
@@ -436,11 +434,11 @@ that are platform specific, thus not in the range of \textbf{KernelCI}'s builds.
\subsubsection{Setting up a kernel builder}
\label{ssub:setting_up_a_kernel_builder}

-Mainly based on \textbf{KernelCI}'s Jenkins scripts, but with some adaptation to
-work in standalone, the builder
-\footnote{\url{https://github.com/free-electrons/kernel-builder}} is split in
-two parts: a first script that checks the trees and prepares tarballs of the
-sources when needed, and a second script that builds the prepared sources.
+Mainly based on \textbf{KernelCI}'s Jenkins scripts, but with some
+modifications to work standalone, the builder\footnote{\url{https://github.com/free-electrons/kernel-builder}}
+is split in two parts: a first script that checks the trees and prepares
+tarballs of the sources when needed, and a second script that builds the
+prepared sources.

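A minimal sketch of the first script's tree check; the tree list is hypothetical, and \verb$git ls-remote$ is enough to detect that a branch moved without cloning anything.

# Sketch of the tree-watching half of the builder: detect that a watched
# branch moved and note which source tarball should exist. Tree names
# and URLs are hypothetical.
import subprocess

TREES = {
    "mainline": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git",
}

def latest_commit(url, branch="master"):
    # `git ls-remote` queries the remote head without cloning the tree.
    out = subprocess.run(["git", "ls-remote", url, branch],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()[0]

for name, url in TREES.items():
    sha = latest_commit(url)
    print("%s is at %s; source tarball %s-%s.tar.gz should be prepared"
          % (name, sha, name, sha[:12]))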
In the overall CI architecture, the two scripts are called sequentially, just
before launching \textbf{CTT} in automatic mode, so that the newly created
@@ -473,7 +471,7 @@ configuration files, better user-interface, improved safety regarding the
crafted jobs, and a fully rewritten README.

Despite not being originally planned in the main subject of the internship, this
-has been a truly instructive part, since it was all about software design, and
+truly was an instructive part, since it was all about software design, and
making choices that would help make the tool maintainable for the long term, and
not something that would fall into oblivion in less than six months.


tex/conclusion.tex (+14 -14)

@@ -4,25 +4,25 @@ To bring this report to a close, it is good to note at least two main things.

The first point is about continuous integration. This marvelous workflow, despite
usually being non-trivial to set in place, can bring a considerable gain of time
-at all the levels of the development. This can be amazingly efficient at
-detecting regressions, and helping to reproduce them through well defined
-procedures, but this can also be by providing pre-compiled artifacts usable by
-many more people than the developers, from the packagers to the far end user who
-is only looking for a working software.
+at all levels of development. It can be amazingly efficient at detecting
+regressions, and helping to reproduce them through well defined procedures, but
+it can also be used to provide pre-compiled artifacts usable by a larger
+audience than developers, from the packagers to the far end user who is only
+looking for working software.

The second point concerns the Open-Source world and all its subtleties, in
which I had to evolve a lot during my work on \textbf{LAVA}.

-For the technical sides, the overall process of upstreaming a patch produced
-when some problem occurs in some particular setup, and the difficulties to make
-that patch working for everyone, has surely been the most interesting and
-educative part, compared to bug reporting, or back-porting for example.
+On the technical side, the overall process of upstreaming a patch produced when
+some problem occurs in a particular setup, and the difficulty of making that
+patch work for everyone, surely was the most interesting and educational part,
+compared to bug reporting or back-porting for example.

-Then on the social side, I have been able to deepen my experience in the many
-positive and negative aspects of the work with a community, such as the
-amazingly efficient, and personalized help that people can produce, or troubles
-that can appear when someone wants to include a functionality that does not
-please everyone.
+Then on the social side, I was able to deepen my experience in the many positive
+and negative aspects of working with a community, such as the amazingly
+efficient and personalized help that people can give, or the troubles that can
+appear when someone wants to include a functionality that does not please
+everyone.

Finally, I think this internship brought me one of the best work experiences I
could have expected from a company, with both friendliness and seriousness, and

tex/presentation_free-electrons.tex (+12 -15)

@@ -31,28 +31,26 @@ also working as the management team.
\section{Strong focus on Free and Open Source software programs}

Since its creation, Free Electrons has been doing its best to contribute to the Free
-Software community. It does so by releasing all its training materials
-\footnote{\url{http://free-electrons.com/training/}} under free documentation
-license \footnote{\url{https://creativecommons.org/licenses/by-sa/3.0/}}. The
-company also strongly encourages clients to share our combined work with the
-community and thus has a preference for clients willing to interact with and
-give back to the free software community by sparing some of project's time on
-upstreaming modifications to free software programs.
+Software community. It does so by releasing all its training materials\footnote{\url{http://free-electrons.com/training/}}
+under a free documentation license\footnote{\url{https://creativecommons.org/licenses/by-sa/3.0/}}.
+The company also strongly encourages clients to share our combined
+work with the community and thus has a preference for clients willing
+to interact with and give back to the free software community by
+sparing some of the project's time on upstreaming modifications to free
+software programs.

It is worth noting that all the code produced in my internship, whether new
projects or patches for existing ones, has been released and published either
-on Free Electrons' Github organization page
-\footnote{\url{https://github.com/free-electrons/}}, or in the project's
-upstream code management tool.
+on Free Electrons' Github organization page\footnote{\url{https://github.com/free-electrons/}},
+or in the project's upstream code management tool.

\section{A recognized expertise}
Free Electrons' engineers are well-known in communities of free software
programs, such as the \textbf{Linux} kernel and \textbf{Buildroot}, thanks to
their tremendous number of contributions to these different projects and also by
attending and presenting talks at famous conferences around the globe, like the
-Embedded Linux Conference
-\footnote{\url{http://www.embeddedlinuxconference.com/}} or the FOSDEM
-\footnote{\url{https://fosdem.org}}.
+Embedded Linux Conference\footnote{\url{http://www.embeddedlinuxconference.com/}}
+or FOSDEM\footnote{\url{https://fosdem.org}}.

\begin{figure}[H]
\includegraphics[width=\textwidth]{free-electrons-contributions.png}
@@ -77,8 +75,7 @@ organized in different parts, called subsystems, and also needs maintainers
which take care of a subsystem by validating code being merged to the
\textbf{Linux} kernel source code.

-Free Electrons has currently 6 maintainers
-\footnote{\url{https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/MAINTAINERS}}
+Free Electrons currently has 6 maintainers\footnote{\url{https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/MAINTAINERS}}
in its engineering team:
\begin{itemize}
\item Alexandre Belloni, maintainer of ATMEL SoCs, Real-Time Clock,

tex/toolchains.tex (+78 -83)

@@ -1,4 +1,5 @@
\chapter{Cross-compilation toolchains}
\label{cha:cross_compilation_toolchains}

\section{Overall goal}
@@ -6,22 +7,22 @@

Working in the embedded world obviously leads to the use of cross-compilation
toolchains. They are almost always needed, but usually quite specific, and most
-of the time, people have two choices: using an existing generic toolchain
-\footnote{Such as Linaro's ones:
-\url{https://releases.linaro.org/components/toolchain/binaries/}}, or building
-their own one, which takes a bit of time and knowledge.
+of the time, people have two choices: using an existing generic toolchain\footnote{Such
+as Linaro's ones: \url{https://releases.linaro.org/components/toolchain/binaries/}},
+or building their own, which takes a bit of time and knowledge.

\textbf{Buildroot} already solves the knowledge part by providing a ready-to-use
build system supporting a tremendous number of combinations, allowing one to
build the wanted toolchain without knowing all the details of the process. But
-that usually means spending at least half an hour only for the toolchain's build
-before the real work can begin. This time can quickly increase up to two hours
-depending on the computer's performance and the toolchain configuration.
+that usually means spending at least half an hour only for building the
+toolchain before the real work can begin. This time can quickly increase up to
+two hours depending on the computer's performance and the toolchain
+configuration.

If we could solve the time problem by pre-building a large variety of
toolchains, combining many attributes, such as the architecture and the C
standard library, with different versions each time, we could cover a large part
-of the common cases where people use to build their own toolchains.
+of the common cases for which people usually build their own toolchains.

\subsection{Specifications details}
\label{sub:specifications_details}
@@ -34,7 +35,7 @@ describe a bit more what the final set of toolchains should look like.

The targeted architectures have been numerous from the beginning, because most
of them come in big-endian and little-endian variants. Moreover, some of the widely
-used architecture, such as ARM or MIPS, come with many flavours, and on top of
+used architectures, such as ARM or MIPS, come in many flavours, and on top of
that, have 32 and 64 bit versions.

Those multiple factors led to a quite long list:
@@ -80,35 +81,32 @@ Those multiple factors led to a quite long list:
\subsubsection{C libraries}
\label{ssub:c_libraries}

-The C libraries are the three common open-source ones:
+There are three common open-source C libraries:
\begin{itemize}
-\item glibc: the most common C library, used in most non-embedded platforms
-\footnote{\url{https://www.gnu.org/software/libc/}}
+\item glibc: the most common C library, used in most non-embedded platforms\footnote{\url{https://www.gnu.org/software/libc/}}
\item uClibc: a small C library that intends to behave as a smaller version of
-the glibc \footnote{\url{https://uclibc.org/}}
+the glibc\footnote{\url{https://uclibc.org/}}
\item musl: a tiny C library, efficient for example for static linking, or
-for quick startup due to less dynamic links
-\footnote{\url{https://www.musl-libc.org/}}
+for quick startup due to fewer dynamic links\footnote{\url{https://www.musl-libc.org/}}
\end{itemize}

\subsubsection{Two versions for each}
\label{ssub:two_versions_for_each}

As the needs for toolchains are very wide depending on the use-cases, some
-people would like to use a \emph{stable} and reliable version, with less
-features, but also less bugs, while others will prefer a more
+people may want to use a \emph{stable} and reliable version, with fewer
+features, but also fewer bugs, while others would prefer a more
\emph{bleeding-edge} one, with the latest available features of every possible
piece of software.

These two versions, \emph{stable} and \emph{bleeding-edge}, almost doubled the
number of combinations to produce, which was already too high to manage by hand.

-As every combination is not automatically a valid one \footnote{Indeed, the
-support for some architecture may not be complete in or the other piece of
-software, and only the build system can tell if a particular configuration will
-work or not.}, and with the enormous amount of configurations, it was inevitable
-to have a tool making the combinations and deciding whether its is a valid one
-or not.
+As not every combination is automatically valid\footnote{Indeed, the support for
+some architecture may not be complete in one or the other piece of software, and
+only the build system can tell if a particular configuration will work or not.},
+and with the enormous amount of configurations, it was inevitable to have a tool
+making the combinations and deciding whether each is a valid one.


\section{Developing the builder}
@@ -127,19 +125,19 @@ as \verb$cat$, \verb$grep$, or \verb$set -x$, useful for debugging.
\subsection{Generating the combinations}
\label{sub:generating_the_combinations}

-As the goal is to cover a large variety of combination, this process needs to be
+As the goal is to cover a large variety of combinations, this process needs to be
procedural. Just as the \textbf{Linux} kernel, \textbf{Buildroot} uses the
-Kconfig system \footnote{\url{https://en.wikipedia.org/wiki/Kconfig}}, which
+Kconfig system\footnote{\url{https://en.wikipedia.org/wiki/Kconfig}}, which
means that fragments of configuration can be made and combined into the final
\verb$.config$ file.

-Unfortunately, not all combination are valid ones, and to check what works and
-what does not, the use of \textbf{Buildroot} was again required. The final
+Unfortunately, not all combinations are valid, and to check which work and
+which do not, the use of \textbf{Buildroot} was again required. The final
script thus concatenates the fragments across each and every possibility, and
-runs a \verb$make olddefconfig$ \footnote{This \textbf{Buildroot} command
+runs a \verb$make olddefconfig$\footnote{This \textbf{Buildroot} command
generates a full valid configuration from what is already in \verb$.config$.}
before checking if lines were removed from the originally built fragment,
-meaning that it was an invalid one.
+meaning that it was an invalid fragment.

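The validity check is simple enough to sketch; the real implementation is a shell script (linked from the repository), so this Python version, with hypothetical fragment paths, is only an illustration.

# Sketch of the validity check: concatenate Kconfig fragments, let
# `make olddefconfig` expand them, and declare the combination invalid
# if any fragment line disappeared from the final .config.
import subprocess

def is_valid_combination(fragment_paths, buildroot_dir="buildroot"):
    wanted = []
    for path in fragment_paths:
        with open(path) as f:
            wanted += [line.strip() for line in f if line.strip()]
    with open("%s/.config" % buildroot_dir, "w") as config:
        config.write("\n".join(wanted) + "\n")
    # Let Buildroot expand the fragment into a full configuration.
    subprocess.run(["make", "olddefconfig"], cwd=buildroot_dir, check=True)
    with open("%s/.config" % buildroot_dir) as f:
        final = set(f.read().splitlines())
    # A dropped option means the combination cannot be built as requested.
    return all(line in final for line in wanted)

print(is_valid_combination(["fragments/arm64", "fragments/musl-stable"]))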
As expected, with all the wanted fragments in place, this quickly generated a
tremendous amount of possible toolchains, reaching over 130 valid combinations
@@ -149,39 +147,38 @@ that then had to be built!
\label{sub:first_the_builder}

One of the main constraints regarding the toolchains was that they should be
-able to run even on quite old operating systems, meaning old libc bindings, and
+able to run even on quite old operating systems (with old libc bindings), and
thus, they should be built on a quite old operating system too.

-The chosen solution was simply to launch the build in a \verb$chroot$
-\footnote{\url{https://en.wikipedia.org/wiki/Chroot}} environment, made with
-\verb$debootstrap$ \footnote{\url{https://wiki.debian.org/Debootstrap}}, using
-an old Debian version.
+The chosen solution was simply to launch the build in a \verb$chroot$\footnote{\url{https://en.wikipedia.org/wiki/Chroot}}
+environment, made with \verb$debootstrap$\footnote{\url{https://wiki.debian.org/Debootstrap}},
+using an old Debian version.

Once the environment is in place, building the toolchain is only a matter of
-running a few \verb$make$ with the right Kconfig fragment.
+running a few \verb$make$ invocations with the right Kconfig fragments.

Here is the final script:
\url{https://github.com/free-electrons/toolchains-builder/blob/master/build_chroot.sh}

-\subsection{Second, the tests}
-\label{sub:second_the_tests}
+\subsection{Then, the tests}
+\label{sub:then_the_tests}

The second constraint was that the toolchains should be automatically tested
-before releasing them. As it is not trivial to achieve a hundred percent of
+before sharing them. As it is not trivial to achieve a hundred percent of
coverage on a toolchain, the focus has been kept on a quite common case:
building a full \textbf{Linux} system, including a kernel, and its root
-filesystem, before launching it with \textbf{QEMU}
-\footnote{\url{https://www.qemu.org/} \\ Almost architecture were supported, but
-there were still a few, like Blackfin or OpenRISC, that could not be tested that
-way.}, and checking that the boot reaches userland without problem. This would
-at least validate that the toolchain is stable enough to build a full set of
-binaries able to boot a system.
+filesystem, before launching it with \textbf{QEMU}\footnote{\url{https://www.qemu.org/}
+\\ Almost all architectures were supported, but there were still a few, like
+Blackfin or OpenRISC, that could not be tested that way.}, and checking that the
+system reaches userland without any problem. This would at least validate that
+the toolchain is stable enough to build a full set of binaries able to boot a
+system.

To automate the launch of the commands, while being able to check their
-output, the \verb$expect$ command has been greatly helpful. With a script as
+output, the \verb$expect$ command is greatly helpful. With a script as
simple as "run a, check for b, then run c, etc...", the tool was quickly able to
make the basic boot test, and it is now easy to extend this script to make a lot
-more tests in the booted system.
+more in the booted system.

The final \textbf{expect} script looks like this:
\url{https://github.com/free-electrons/toolchains-builder/blob/master/expect.sh}
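The script above is written for \verb$expect$ itself; the same "run a, check for b" logic can be sketched with \textbf{Python}'s \verb$pexpect$ library, with hypothetical QEMU machine, kernel and rootfs names.

# The actual test is an expect script; this is the same "run a, check
# for b" logic with Python's pexpect library. The QEMU machine, kernel,
# dtb and rootfs names are hypothetical.
import pexpect

child = pexpect.spawn("qemu-system-arm -M vexpress-a9 -m 256"
                      " -kernel zImage -dtb vexpress-v2p-ca9.dtb"
                      " -drive file=rootfs.ext2,if=sd,format=raw"
                      " -append 'console=ttyAMA0,115200 root=/dev/mmcblk0'"
                      " -nographic", timeout=120)
# Wait for the login prompt, log in, and check that userland answers.
child.expect("buildroot login:")
child.sendline("root")
child.expect("# ")
child.sendline("uname -a")
child.expect("Linux")
print("Boot reached userland: the toolchain built a usable system")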
@@ -196,17 +193,17 @@ generated. This is also at this moment that the READMEs are generated, before
being included in the archive.

At the end of this packaging process, a massive upload is done with all the
-useful files, such as the toolchain archive, but also the different compilation
+useful files, such as the toolchain archives, but also the different compilation
logs, the test logs, the manifests as separate files, the checksums, and more
importantly, the sources and the licences of every piece of software used in the
-toolchain. This legal aspect has been most important, since the toolchains are
+toolchain. This legal aspect is very important since the toolchains are
released to everyone, and do not remain for internal use only.

The upload is made through \textbf{SSH}, using the \verb$rsync$ command, since
-it is one more time a simple a reliable tool. Moreover, \verb$rsync$ provides an
-interesting feature: the upload is made to a temporary file, and once complete,
-a simple \verb$mv$ is done, so that the final file instantly appears as a whole,
-and thus, race conditions at the filesystem level are avoided.
+it is once more a simple and reliable tool. Moreover, \verb$rsync$ provides
+an interesting feature: the upload is made to a temporary file, and once
+complete, a simple \verb$mv$ is done, so that the final file instantly appears
+as a whole, and thus, race conditions at the filesystem level are avoided.

The destination folders follow a well defined structure so that it is easy to
browse and manually find a toolchain afterwards. The naming of the toolchains is
@@ -224,21 +221,21 @@ days of CPU time would have to be spent for only one full build.
Free Electrons' CTO, Thomas, as a \textbf{Buildroot} maintainer, had already
searched a bit for CI services that would be able to perform such a build. Since
the \textbf{Buildroot} team already runs continuous integration jobs for their
-tool \footnote{This is also about letting \textbf{Buildroot} build a large
+tool\footnote{This is also about letting \textbf{Buildroot} build a large
quantity of different configurations}, and it is currently running on
-\url{gitlab.com}'s CI service
-\footnote{\url{https://docs.gitlab.com/ee/ci/quick_start/} \\ \url{gitlab.com}
-is the public and free to use instance of the \textbf{Gitlab} software, hosted
-and administered by \emph{GitLab, Inc}.}, it was worth testing it.
+\url{gitlab.com}'s CI service\footnote{\url{https://docs.gitlab.com/ee/ci/quick_start/}
+\\ \url{gitlab.com} is the public and free to use instance of the
+\textbf{Gitlab} software, hosted and administered by \emph{GitLab, Inc}.}, it
+was worth testing it.

The setup only consists of a simple \verb$.gitlab-ci.yml$ file that describes
the jobs to do, and it was easy to generate it once a valid toolchain fragment
had been found. As a bonus point, they provide a straightforward way to give the
jobs a private key, meaning the use of \textbf{SSH} was not a problem at all.

-This solution quickly proved to work well, and even more, it has been very
-efficient. Completing a full build in the CI infrastructure, meaning more than
-130 toolchains, is generally done in a bit more than two hours. This impressive
+This solution quickly proved to work well, and even more, it proved to be very
+efficient. Completing a full build (i.e. more than 130 toolchains) in the CI
+infrastructure is generally done in a bit more than two hours. This impressive
performance has allowed a lot of testing, making the release quite reliable.
For providing such a powerful tool, \url{gitlab.com}'s team surely deserves many
kudos!
@@ -250,15 +247,15 @@ The final step toward a public release was of course developing a website
presenting an easy way to select the toolchains among the many choices.
Moreover, the website should be easy to refresh when a new release is built.

-A static website generator using \textbf{Python}, using some basic
-\textbf{Jinja2} templates, and crawling through the filesystem to discover the
-toolchains, has been a simple solution, secured, easy to deploy, and flexible
-enough to do exactly what was expected.
+A static website generator using \textbf{Python}, some basic \textbf{Jinja2}
+templates and crawling through the filesystem to discover the toolchains, has
+been a simple solution, secure, easy to deploy, and flexible enough to do
+exactly what was expected.

The basic workflow is to run the generator, giving it the path to the
-toolchains' storage, and the path to the web root folder. The script then walks
-through the toolchains and their manifest, to generate a full static website
-composed only of \emph{HTML} files, which display really fast, and are by design
+toolchains' storage, and to the web root folder. The script then walks through
+the toolchains and their manifest, to generate a full static website composed
+only of \emph{HTML} files, which display really fast, and are by design
protected against many types of web attacks, also making the site reliable and
simple to manage.

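A minimal sketch of such a generator; the storage layout, the archive suffix and the single-page template are assumptions, not the actual generator.

# Sketch of the site generator: walk the toolchains storage and render
# one static page with Jinja2. Layout, suffix and template are assumed.
import os
import sys

from jinja2 import Template

PAGE = Template("""<html><body><ul>
{% for tc in toolchains %}<li><a href="{{ tc.url }}">{{ tc.name }}</a></li>
{% endfor %}</ul></body></html>""")

def generate(storage_root, web_root):
    toolchains = []
    for dirpath, _, filenames in os.walk(storage_root):
        for name in filenames:
            if name.endswith(".tar.bz2"):
                rel = os.path.relpath(os.path.join(dirpath, name), storage_root)
                toolchains.append({"name": name, "url": rel})
    toolchains.sort(key=lambda tc: tc["name"])
    with open(os.path.join(web_root, "index.html"), "w") as f:
        f.write(PAGE.render(toolchains=toolchains))

generate(sys.argv[1], sys.argv[2])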
@@ -269,16 +266,16 @@ deployed here: \url{http://toolchains.free-electrons.com}.
\section{Overall summary}
\label{sec:overall_summary}

-The final setup ended with the following chain of operation:
+The final setup ended with the following chain of operations:
\begin{enumerate}
\item The \verb$update_gitlab-ci.sh$ script is run on the maintainer's
computer, and generates the configurations before pushing to a Gitlab
branch a commit embedding the generated fragments and the corresponding
\verb$.gitlab-ci.yml$ file.
-\item Gitlab thus triggers the different jobs, and they are executed in
+\item Gitlab thus triggers the different jobs, which are then executed in
parallel on the different available workers.
\item At the end of the jobs, the toolchains are pushed to a storage server.
-\item Every minutes, the storage server has a script watching for potential
+\item Every minute, the storage server runs a script watching for potential
new toolchains, and if found, refreshes the website.
\end{enumerate}

@@ -294,21 +291,20 @@ The final setup ended with the following chain of operation:
\section{Release, feedback, and updates}
\label{sec:release_feedback_and_updates}

-Through word of mouth in the communities, and a blog post
-\footnote{\url{http://free-electrons.com/blog/free-and-ready-to-use-cross-compilation-toolchains/}}
+Through word of mouth in the communities, and a blog post\footnote{\url{http://free-electrons.com/blog/free-and-ready-to-use-cross-compilation-toolchains/}}
on Free Electrons' website, the news of the release quickly spread, and feedback
came very quickly. The overall feeling was quite positive, with many people
-sending thankful messages though various ways.
+sending thankful messages by various means.

Among the messages, some questions were raised, which led to the creation of an
FAQ page on the website, aggregating the most common ones.

With the coming of the next \textbf{Buildroot} version, including, among other
-improvements, a more recent GCC, a new release has been prepared. But with the
-growing number of available toolchains, a new page must have been made, as a
-per-architecture summary, presenting all the possible version, even the old,
-deprecated ones. It is thus easy to track any possible software version that is,
-or has been released in the toolchains set.
+improvements, a more recent GCC, a new release was prepared. But with the
+growing number of available toolchains, a new page had to be made, as a
+per-architecture summary, presenting all the possible versions, even the old and
+deprecated ones. It is thus easy to track any possible software version that is
+or was released in the toolchains set.

\section{Final words}
\label{sec:final_words}
@@ -319,7 +315,6 @@ me to discover a large amount of uncommon architectures, and to better
understand how toolchains work from the inside, which I did not expect at all at
first glance.

-In the meantime, this has also been a nice contribution to the open-source
-world, since it is a service that did not exist six months ago, and which is now
-more and more used by many different people and projects around the world.
+In the meantime, this also was a nice contribution to the open-source world,
+since it is a service that did not exist six months ago, and which is now more
+and more used by many different people and projects around the world.
