\chapter{Custom tests for Continuous Integration}
\label{cha:custom_tests_for_continuous_integration}
\section{The need for custom tests}
\label{sec:the_need_for_custom_tests}
\subsection{The KernelCI project}
\label{sub:the_kernelci_project}
\textbf{KernelCI} is a project started a few years ago that builds, every hour,
many different upstream \textbf{Linux} kernel trees, then sends
jobs\footnote{Jobs are the resource \textbf{LAVA} deals with. A more detailed
explanation can be found in \ref{ssub:the_jobs} at page
\pageref{ssub:the_jobs}.} using those kernels to the subscribed labs, before
aggregating the results on different summary pages, where it is easy to see
whether a board has problems booting \textbf{Linux}, and when the problems
started to occur.
This project already covers a great part of the CI loop by building the kernels
and displaying the results, but it still needs the collaboration of the many
labs contributing their device availability to complete the process.
With all the different kinds of devices provided by labs from all over the
world, they have achieved over the years quite good coverage of the device
types supported by \textbf{Linux}. But even though they test a great number of
platforms, the jobs they send do little more than boot the device and run a
few basic commands in userspace.
Jobs as simple as these are not suitable for testing SATA, USB, or other
specific subsystems that are usually unused during boot. Of course, as
\textbf{KernelCI} has to deal with many different labs, it cannot afford to
make the jobs more specific, since many boards, despite being identical, may
have different configurations and different devices attached to them.
To fill that gap, custom tests must be set up at a smaller scale, complying
with the specificities of the lab and its devices.
\subsection{Specifications}
\label{sub:specifications}
As Free Electrons counts many device-family maintainers in its engineering
team, it is particularly important for them to have the finest CI set up for
those devices. Moreover, they frequently work on new drivers in custom
\textbf{Linux} trees, often derived from the device vendor's tree.
Since this development process can take quite a long time, it would be very
useful to have a CI process in place for those custom trees, and for custom
features still in development.
Considering the jobs already run by \textbf{KernelCI}, something more specific
but still broadly similar was needed to implement the custom tests. Two parts
had to be set up:
\begin{itemize}
\item launching custom scripts on the devices to check the specific
functionalities;
\item building and booting custom kernel trees that are not already taken
care of by \textbf{KernelCI}.
\end{itemize}
Of course, once both parts are running, they can be combined to launch custom
scripts on custom kernels.
Last but not least, the overall architecture should support two operating
modes:
\begin{itemize}
\item a \textbf{manual} one, triggered by hand, that runs only the requested
specific tests with a user-built kernel, and reports immediately once the
job has run;
\item an \textbf{automatic} one, that runs tests every day and sends daily
reports to the maintainers.
\end{itemize}
\section{The CI lab}
\label{sec:the_ci_lab}
Applying continuous integration to an operating system is not an easy task.
Since it requires external hardware management, power-supply control, and
input/output control, it clearly calls for a complex infrastructure.
Free Electrons has now been running a lab for more than a year; it continuously
tests more than 35 devices and publishes the results, making them available to
the \textbf{Linux} community.
\subsection{The hardware part}
\label{sub:the_hardware_part}
Since the software under test is an operating system, it needs to run on real
hardware, and thus differs from more usual CI, which typically runs in some
container technology.
Built last year, the farm at Free Electrons takes the form of a big cabinet
with eight drawers, capable of hosting up to 50 boards. Alongside the devices,
USB hubs, switches, and ATX power supplies with their TCP-controlled relays can
be found in each and every drawer. For the main management, the farm also hosts
a main USB hub, a main switch, and, most importantly, the NUC that runs part of
the software driving the whole lab.
Everything is carefully wired and labelled to keep the installation in a
maintainable state.
For more information about the ins and outs of this part, one can read this
blog article:
\url{}.
\begin{figure}[H]
\centering
\includegraphics[height=0.4\paperheight]{lab.JPG}
\caption{An overview of the cabinet hosting the hardware part of the lab.}
\label{fig:lab}
\end{figure}
\subsection{The software: LAVA}
\label{sub:the_lava_software}
To run everything from a software point of view, \textbf{LAVA} has been used
from the very beginning. LAVA stands for \emph{Linaro Automated Validation
Architecture}, and provides a good way to manage and schedule automated tests
on embedded devices. Most of the work needed to implement custom tests was
about using \textbf{LAVA} the right way to perform the tests, so this part
includes more detail than the hardware one.
\subsubsection{LAVA: v1 or v2?}
\label{ssub:lava_v1_or_v2}
Since \textbf{LAVA} is currently in the middle of a huge refactoring of its
internal workflow and exposed API, people usually talk about \textbf{LAVA v1}
and \textbf{LAVA v2}. It is actually the same software, with different
behaviors at many levels, but with the same final goal: running what is called
a job, a user-described list of actions, on a device, and reporting the events
that happened.
At the beginning of the internship, only the first version was in use, and part
of the work consisted in migrating to the second one. This meant running into a
lot of different problems and bugs that needed to be fixed before taking the
next step, but we finally ended up with a fresh and running architecture using
mostly \textbf{LAVA v2}.
For clarity's sake, only the \textbf{final setup} is described in this report.
Those interested in the original software architecture can still read this blog
post:
\url{}.
\subsubsection{The jobs}
\label{ssub:the_jobs}
The main resource \textbf{LAVA} deals with is the job. A job is defined by a
\textbf{YAML}\footnote{\url{}} structure describing multiple sections:
\begin{itemize}
\item The \textbf{device-type}, which is the name of the device you want to
run the test on. \\
\emph{Examples: beaglebone-black, sun8i-h3-orangepi-pc, ...}
\item The \textbf{boot method}, telling \textbf{LAVA} which method should be
used to boot the device. \\
\emph{Examples: \textbf{fastboot} or \textbf{U-Boot}, \textbf{NFS} or
ramdisk, \textbf{SSH}, ...}
\item The \textbf{artifacts} URLs. This includes the \emph{kernel}, the
\emph{device tree}, the \emph{modules}, and the \emph{rootfs}. Only the
kernel is strictly mandatory to boot the boards, but the other ones are
common for almost every non-exotic device.
\item The \textbf{tests} to run once the device has booted. This covers many
possibilities, since it generally points to shell scripts to be executed
as root in userspace. It is the main place to customize the tests.
\item Other less important sections, such as some \textbf{metadata}, the
\textbf{notifications}, or custom \textbf{timeouts}.
\end{itemize}
Once a job has been submitted, either through the web interface or the API, it
is queued until a device of the requested device-type is free. The job is then
scheduled and run, before finishing either with the \textbf{Complete} status
when everything went well, or the \textbf{Incomplete} status when there was a
problem during the execution of the different tasks.
When a job is complete, \textbf{LAVA} provides access to the results in many
ways, such as the web UI, the API, emails, or a callback system that lets the
job push its results to other APIs.
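To give an idea of the shape of such a definition, here is a minimal sketch of
a job; the device, URLs, and test repository are hypothetical, and the exact
schema is dictated by \textbf{LAVA}:

```yaml
device_type: beaglebone-black
job_name: custom-boot-test
timeouts:
  job:
    minutes: 15
actions:
- deploy:
    to: tftp
    kernel:
      url: http://storage.example.com/zImage
    dtb:
      url: http://storage.example.com/am335x-boneblack.dtb
    ramdisk:
      url: http://storage.example.com/rootfs.cpio.gz
- boot:
    method: u-boot
    commands: ramdisk
- test:
    definitions:
    - from: git
      repository: https://git.example.com/test-suite.git
      path: tests/hello.yaml
      name: hello
```

The three \emph{actions} map directly to the sections listed above: deploy the
artifacts, boot with the chosen method, then run the tests.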
\subsubsection{A distributed system}
\label{ssub:a_distributed_system}
\textbf{LAVA} has been designed to scale to far more complex labs than Free
Electrons' one. It is thus split into two parts: a master and a worker.
\begin{description}
\item[The master]: \\
The master exists as a single instance. It is in charge of three tasks:
\begin{itemize}
\item The \textbf{web interface}, allowing in-browser interaction
with the software.
\item The job \textbf{scheduler}, responsible for sending the queued
jobs to the available devices.
\item The \textbf{dispatcher-master}, which manages the different
possible workers and sends jobs to them.
\end{itemize}
The master also holds the connection to the relational database.
\item[The worker]: \\
The worker is divided into two parts:
\begin{itemize}
\item The \textbf{slave} is the part that connects to the
dispatcher-master and receives the jobs to run.
\item The \textbf{dispatcher} is the only part that really interacts
with the devices under test. It is spawned on demand by the slave
when a job needs to run. It is also the only part that does not run
as a daemon.
\end{itemize}
\end{description}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{arch-overview.png}
\caption{Architecture schematic of the \textbf{LAVA} software.}
\label{fig:arch-overview}
\end{figure}
\section{Developing custom tests}
\label{sec:developing_custom_tests}
\subsection{Beginning a proof of concept}
\label{sub:beginning_a_proof_of_concept}
At the beginning, a basic and functional, but still blurry, specification was
made, and a proof of concept was required to see how it would fit into final
production. The tool was quickly named \textbf{CTT}, standing for \emph{Custom
Test Tool}\footnote{You can find the sources at this address:
\url{}}, and that is how the software building the custom jobs will be
designated until the end of this report.
The choice of \textbf{Python} was obvious, since this language is accessible
and widely used in the embedded world for its flexibility and portability.
Moreover, most of the Free Electrons engineers had already used it, and
introducing a new, unknown technology into an architecture they would have to
maintain was not an option.
\subsubsection{Understanding the depth of LAVA's jobs}
\label{ssub:understanding_the_lava_jobs}
The first, simple part concerned \textbf{LAVA}. Since \textbf{KernelCI} already
provides everything (kernel, DTB, and rootfs) needed to run a successful job in
\textbf{LAVA}, the only remaining work was crafting and sending jobs.
An easy and simple, yet flexible, solution was to use a template engine to
parametrize a generic job written once and for all.
The job syntax uses the human-friendly \textbf{YAML}, but even though it is
readable and easy to write, the data structure required by \textbf{LAVA} is a
bit complex, and it is thus truly inconvenient to write jobs by hand.
Once filled, the template just has to be sent to \textbf{LAVA} through its
XML-RPC\footnote{\url{}} API to create a job.
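As a sketch of that flow, using the standard library's simple template engine
rather than \textbf{CTT}'s actual one, and with a hypothetical server URL and
template body:

```python
from string import Template
import xmlrpc.client

# A tiny stand-in for the real, much larger generic job template.
JOB_TEMPLATE = Template("""\
device_type: $device_type
job_name: $job_name
# ... deploy/boot/test sections parametrized the same way ...
""")

def craft_job(device_type, job_name):
    """Fill the generic job template with per-board values."""
    return JOB_TEMPLATE.substitute(device_type=device_type,
                                   job_name=job_name)

def submit_job(server_url, job_definition):
    """Send the crafted YAML to LAVA through its XML-RPC API."""
    server = xmlrpc.client.ServerProxy(server_url)
    # LAVA exposes job submission under the 'scheduler' namespace.
    return server.scheduler.submit_job(job_definition)

job = craft_job("beaglebone-black", "custom-boot-test")
```

Submitting would then be a single call such as
\verb$submit_job("http://user:token@lava.example.com/RPC2", job)$.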
Knowing what to put in that template was one of the most interesting moments of
this part, since it was like discovering a new programming language: there are
always new features to discover, new mechanisms for using them, and finally
ways to make \textbf{LAVA} do exactly what you want. It is also during this
period that most of the migration to \textbf{LAVA v2} was prepared, meaning
that the configuration of the different levels of \textbf{LAVA} was altered.
It was often necessary to discuss with the \textbf{LAVA} community, on
\url{irc://}, to get clarification when the documentation happened to be
incomplete, or when \textbf{LAVA} needed to be
improved\footnote{See these patches for example: \\
\url{} \\
\url{} \\
Also don't hesitate to run \verb$git log --author "Florent Jacquet"$ in the
\verb$lava-dispatcher$ and \verb$lava-server$ projects to get an overview of
the contributions made to \textbf{LAVA} (also available in appendix
\ref{cha:list_of_contributions_to_lava}, page
\pageref{cha:list_of_contributions_to_lava}).
}.
\subsubsection{Using custom artifacts}
\label{ssub:using_custom_artifacts}
Once jobs could easily be sent, the next step was sending custom artifacts,
such as a user-built kernel. This is what the first, manual mode of the tool
needs: a user launches jobs from their workstation, using a kernel built from
one of their working trees.
Since \textbf{LAVA} allows the use of files local to the dispatcher, this is a
really convenient way to provide the artifacts without setting up an
\textbf{FTP} server or other complicated means of serving files.
\textbf{SSH}, with the \verb$scp$ command, allows efficient and reliable file
transfers between two machines, and since the engineers have easy access to the
dispatcher through one of Free Electrons' VPNs\footnote{Virtual Private Network
(\url{\_private\_network})}, it was easy to give them permission to send files.
With \textbf{Python}, the \textbf{paramiko}\footnote{\url{}} library, which
provides native \textbf{SSH} support, makes the choice of that protocol even
more comfortable.
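A minimal sketch of such a transfer, wrapping the \verb$scp$ binary instead of
\textbf{paramiko} to stay dependency-free; the host, user, and paths are
placeholders, not the lab's real ones:

```python
import subprocess

def build_scp_command(local_path, host, remote_dir, user="lava"):
    """Build the argv for copying one artifact to the dispatcher."""
    return ["scp", local_path, "%s@%s:%s" % (user, host, remote_dir)]

def send_artifact(local_path, host, remote_dir):
    """Run the transfer; raises CalledProcessError on failure."""
    subprocess.run(build_scp_command(local_path, host, remote_dir),
                   check=True)
```

The crafted job then simply references the uploaded file by its path on the
dispatcher.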
\subsubsection{Launching automatic jobs}
\label{ssub:crawling_for_existing_artifacts}
The other mode of the tool, as an automatic launcher, requires fetching
pre-built artifacts from a remote location, such as \textbf{KernelCI}'s
storage, or Free Electrons' own once the custom builds were set up.
Fortunately, the \textbf{KernelCI} website also provides an API to retrieve
their latest builds.
The most difficult part was making sure that the crawler would have enough
information about the boards to fetch their specific artifacts, while avoiding
a very big file storing every possible piece of data about the boards. Once
that was done, getting the artifacts would only be a matter of crafting the
right URL.
This ended up as a simple \textbf{JSON}\footnote{\url{}} file storing the list
of the boards, each described by four strings:
\begin{itemize}
\item \textbf{arch}, the architecture of the board, used to pick the right
kernel.
\item \textbf{dt}, the device tree, also mandatory for booting the devices,
and unique to each and every one of them.
\item \textbf{rootfs}, since the root filesystems are built for several
architecture flavours (ARMv4, ARMv5, ARMv7, and ARMv8).
\item \textbf{test\_plan}, since it is mandatory for \textbf{LAVA}, and must
be configured on a per-device basis.
\end{itemize}
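A hypothetical excerpt of such a file, together with the kind of URL crafting
the crawler performs; the storage layout shown here is an illustrative
assumption, not \textbf{KernelCI}'s exact one:

```python
import json

# One entry per board, with the four strings described above.
BOARDS_JSON = """
{
  "sun8i-h3-orangepi-pc": {
    "arch": "arm",
    "dt": "sun8i-h3-orangepi-pc",
    "rootfs": "armv7",
    "test_plan": "boot"
  }
}
"""

def artifact_urls(storage_root, tree, branch, board):
    """Craft the artifact URLs for one board from its four strings."""
    base = "%s/%s/%s/%s" % (storage_root, tree, branch, board["arch"])
    return {
        "kernel": base + "/zImage",
        "dtb": "%s/dtbs/%s.dtb" % (base, board["dt"]),
        "rootfs": "%s/rootfs-%s.cpio.gz" % (storage_root, board["rootfs"]),
    }

boards = json.loads(BOARDS_JSON)
urls = artifact_urls("http://storage.example.com", "mainline", "master",
                     boards["sun8i-h3-orangepi-pc"])
```

Checking that each crafted URL actually exists (for instance with an HTTP HEAD
request) is then all that remains before filling the job template.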
This file proved to be simple enough, and the crawler's job is now only about
crafting a URL and checking whether the artifact actually exists.
With the crawlers done, and the rest of the tool already working, the only
thing remaining in \textbf{CTT} was the custom scripts to be run once
userspace is reached.
\subsection{Running custom scripts}
\label{sub:running_custom_scripts}
\subsubsection{Writing a test suite}
\label{ssub:writing_a_test_suite}
Among the many possibilities brought by the \textbf{LAVA} job structure is the
ability to designate a \emph{git} repository, and a path in that repository to
a file that \textbf{LAVA} will execute automatically from the device's
userland shell.
Before writing more complex tests, which would require some development time,
a simple \verb$echo "Hello world!"$ did the job. This allowed a lot of
testing, checking all possible solutions, and finally defining an architecture
both simple and functional enough for the custom tests' needs.
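The file referenced from the job is itself a small \textbf{YAML} test
definition living in the repository; a minimal, hypothetical one around that
\verb$echo$ could look like:

```yaml
metadata:
  name: hello
  format: Lava-Test Test Definition 1.0
  description: Smoke test checking that userspace is reachable
run:
  steps:
  - echo "Hello world!"
```

Each line of \emph{steps} runs in the device's shell, and its output is
captured in the job logs.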
\subsubsection{Integrating custom tools in the root file system}
\label{ssub:integrating_custom_tools_in_the_rootfs}
Before writing more advanced test scripts in the test suite, a problem had to
be solved: many of the tests require tools or commands that are not shipped by
default in \textbf{KernelCI}'s rootfs. Moreover, a requirement was that this
rootfs should be compiled for each ARM flavour, unlike the extremely generic
one built by \textbf{KernelCI}.
An easy and flexible way of building custom root filesystems is to use
\textbf{Buildroot}\footnote{\url{}}. This led to some
simple glue scripts\footnote{\url{}}
building the few configurations requested by the farm, mainly including
\emph{iperf}\footnote{\url{}
and \url{}} and a full \emph{ping}\footnote{One that includes
the \verb$-f$ option, for ping floods.} for network stressing, and
\emph{Bonnie++}\footnote{\url{} and
\url{}} for filesystem performance, on top of a classic
Busybox\footnote{\url{} and
\url{}} that provides the rest of the system.
This being my first experience with \textbf{Buildroot}, it was a quick but
interesting part that introduced me to the power of build systems in the
embedded world.
\subsubsection{The road to complex jobs}
\label{ssub:the_road_to_complex_jobs}
With a test suite and custom root filesystems, the overall architecture was in
place. To verify that everything would work as expected, more complex tests
had to be written.
As Busybox provides only \emph{Ash} as its default shell, the scripts needed
to be compatible with it, and thus could not take advantage of some Bash
features. This turned out to be quite an exercise, since most operating
systems in 2017 provide the latter by default, and the differences can in some
cases cause headaches when finding workarounds for complex operations.
The other most interesting part was the development of the first
\emph{Multinode job}\footnote{\url{}}.
This is the \textbf{LAVA} term for jobs that require multiple devices, such as
network-related jobs. Since the boards need to interact, they need to be
synchronized, and \textbf{LAVA} provides tools in the runtime environment to
allow data exchange between the devices. But, as with classic threads or
processes, this can quickly lead to race conditions, deadlocks, or other
interesting concurrency problems.
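In practice, a Multinode job groups the devices into roles; the device types
and role names below are examples, not the lab's actual configuration:

```yaml
protocols:
  lava-multinode:
    roles:
      server:
        device_type: beaglebone-black
        count: 1
      client:
        device_type: sun8i-h3-orangepi-pc
        count: 1
    timeout:
      minutes: 10
```

Within the test scripts, helpers such as \verb$lava-send$, \verb$lava-wait$,
and \verb$lava-sync$ exchange key/value messages between the roles, and
misusing them is precisely what leads to the deadlocks mentioned above.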
Once all those problems were addressed, and the network tests were running, a
short presentation was given to the team, so that everyone would know the
status of the custom Continuous Integration; this also allowed showing them
the architecture, so that they could easily add new boards and tests in the
future.
\subsection{Adding some reporting}
\label{sub:adding_some_reporting}
With the two operating modes of \textbf{CTT} came two modes of reporting: one
for the manual tests, and the other for the daily ones.
The first, easy part was just about adding the correct \emph{notify} section
to the job template, so that when engineers send a job manually to
\textbf{LAVA}, their email address is included in the definition and they get
a message as soon as the job is finished, with some details about what worked
and what failed.
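The \emph{notify} section is a small fragment of the job definition; a minimal
sketch, with a placeholder address and a schema reconstructed from memory of
\textbf{LAVA v2}:

```yaml
notify:
  criteria:
    status: finished
  recipients:
  - to:
      method: email
      email: engineer@example.com
```

The template engine only has to substitute the submitter's address into that
fragment.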
For the second part, the daily tests, the need was to aggregate the results of
the past twenty-four hours into a single, personalized email. Indeed, each
engineer can subscribe to some devices, and in order not to make the reporting
too verbose, a script builds a specific email for each of them, so that people
only get the results they are interested in.
\subsection{Integrating custom kernels}
\label{sub:integrating_custom_kernels}
\subsubsection{Goal}
\label{ssub:goal}
The next and last step toward fully customized CI tests was building custom
kernels. Just like \textbf{KernelCI} does every hour, the goal is to monitor a
list of kernel trees, pull them, build them with specific configurations, and
store the artifacts online, so that \textbf{LAVA} can easily use them.
Custom kernels really come in handy in two cases. When the engineers want to
follow a specific tree they work on, but that tree is not close enough to
mainline for \textbf{KernelCI} to track it, Free Electrons' builder takes
charge of it. The other useful case is when a test requires a custom kernel
configuration, such as enabling the hardware cryptographic modules, which are
platform-specific and thus out of the scope of \textbf{KernelCI}'s builds.
\subsubsection{Setting up a kernel builder}
\label{ssub:setting_up_a_kernel_builder}
Mainly based on \textbf{KernelCI}'s Jenkins scripts, but with some
modifications to work standalone, the builder\footnote{\url{}}
is split into two parts: a first script that checks the trees and prepares
tarballs of the sources when needed, and a second script that builds the
prepared sources.
In the overall CI architecture, the two scripts are called sequentially, just
before launching \textbf{CTT} in automatic mode, so that the newly created
kernel images can quickly be tested. Of course, this required adding to
\textbf{CTT} the logic to crawl either \textbf{KernelCI}'s storage or Free
Electrons' own.
\subsection{A full rework before the end}
\label{sub:a_full_rework_before_the_end}
Before the end of the internship, everything was fully operational, up and
running, but one big problem remained: the whole code of \textbf{CTT} had been
developed quickly, as a proof of concept, and even though the technological
choices were not bad, the overall design of the software made it awful to
maintain.
As about one month remained, the decision was taken to completely rework the
tool, so that new features would be easier to add in the future. The technical
debt brought by the proof-of-concept design would also be paid off.
One of the engineers had already taken time to rework small parts, but had
kept the internal API untouched even where some functions or classes needed to
be split into several. More was needed but, still, he had quite a good vision
of the tool's design, and greatly helped in its refactoring.
This brought many interesting side effects along the way: unit tests for
almost all the newly created classes, a flexible and modular design, simpler
configuration files, a better user interface, improved safety regarding the
crafted jobs, and a fully rewritten README.
Despite not being originally planned as part of the internship's main subject,
this truly was an instructive part, since it was all about software design,
and making choices that would keep the tool maintainable for the long term,
rather than letting it fall into oblivion in less than six months.