Open Hardware as critical infrastructure?
This blog is part of a series on open hardware and key messages for public policy. Read the introduction and access other #OHpolicy blogs here.
By Luis Felipe R. Murillo, Research Associate at the School of Data Science, University of Virginia
It is a commonly held perception that Open Hardware is mostly a hobbyist technology: good for self-training and educational activities, but hardly suitable for scientific or industrial applications. This perception, however, could not be more misguided. Open Hardware is, in fact, operating in critical infrastructures as you read these lines.
By “infrastructure” I mean the social and technical support that is assembled in very specific ways for the realization of a particular kind of work, carried out by people, by machines, or by any combination of them, or for supporting other types of infrastructure [1]. “Critical,” in turn, signals that a particular infrastructure is necessary for that work to be carried out at all. Think of the digital infrastructures that support the services of companies like Facebook and Twitter: they are infrastructural, but not critical, for they can be substituted by other infrastructures (which came before and after them, and which are publicly accountable) providing similar functionality.
Open Hardware, it turns out, is more infrastructural than most people realize. It has been used, after all, in unexpected ways to create, standardize, document, share, and repair critical infrastructures. In the span of two fast-paced decades, Open Hardware became an essential part of an economy of common resources and tools, used not only to create new technologies but also to maintain legacy ones: from research instruments as serious as fusion reactors (Faugel and Bobkov 2013) to environmental monitoring instruments and stations (Murillo 2016; Ali et al. 2016; Camprodon et al. 2019) [2], and more.
I came to find Open Hardware in unexpected infrastructures through the study of Free and Open Source development. My research is part of a field that concentrates on the collaborative dynamics of producing technologies as common resources. These dynamics are, for lack of a better expression, heaven and hell combined: they reflect, at once, respect for the work of others in cooperative, community-building arrangements that span the globe, but also, as is very much part of this world, challenges of sustainability, the exploitation of small volunteer teams, corporate capture (which is increasingly the case), and, sadly, recurrent incidents of discrimination. Yet open and libre technologies find their way in growing numbers into unlikely places: company and government infrastructures, public policy centers, and research and educational institutions around the globe. It was precisely this movement beyond the usual circuits of community-driven projects that intrigued me, so I set out to study open technologies in a large bureaucracy that operates as an infrastructure provider for international collaborations in high energy physics (HEP): the European Organization for Nuclear Research (CERN), where I worked as a visiting researcher [3]. It was in this context that I found a different dynamic of collaborative development for maintaining the research infrastructures of high-security facilities.
CERN is not a conventional bureaucracy: it is one that keeps producing unintended results, such as transformative technologies that eventually became critical infrastructure, from a boring, integrated system for sharing papers among physicists to the earliest web technologies. It is also the site of a curious initiative for “Open Hardware,” created in 2013 with the goal of supporting a community of sharing among hardware engineers working for HEP facilities [4]. Through this initiative, Open Hardware became a means of sharing the load of developing research infrastructures with high degrees of quality control and assurance. Critical as CERN is for the HEP community worldwide, one would not expect Open Hardware to play such an important part.
Since its inception, however, the Open Hardware initiative at CERN has led to the creation and promotion of several infrastructural technologies. It designed, for example, a network technology (as Open Hardware) meant to replace the timing system of particle accelerators: a project called “White Rabbit,” after the unforgettable character in Lewis Carroll’s “Alice in Wonderland” [5]. This system is described by CERN engineers as a means of “distributing clocks” to ensure that the whole network shares a common time (with nanosecond-scale accuracy). The initiative is not only crucial for keeping critical time for the purposes of basic research. It is particularly relevant for having advanced the work of Open Hardware in many directions: the creation of a standard for White Rabbit (integrated into IEEE 1588-2019 [6]), a set of licenses for Open Hardware (the CERN Open Hardware License [7]), a repository for Open Hardware projects [8], and, last but not least, institutional support for community projects such as KiCad, a Free and Open Source tool for printed circuit board design maintained by a fairly distributed team of software developers [9].
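For readers curious about what “distributing clocks” actually involves, here is a minimal sketch of two-way time transfer, the mechanism at the heart of the IEEE 1588 (Precision Time Protocol) standard that White Rabbit extends to sub-nanosecond territory. The function and the sample timestamps below are illustrative assumptions of mine, not CERN’s implementation:

```python
# Two-way time transfer: the core idea behind PTP-style clock
# distribution, which White Rabbit refines to sub-nanosecond accuracy.
# All names and timestamps here are illustrative, not CERN code.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """
    t1: master sends Sync          (read on the master clock)
    t2: slave receives Sync        (read on the slave clock)
    t3: slave sends Delay_Req      (read on the slave clock)
    t4: master receives Delay_Req  (read on the master clock)

    Assumes a symmetric link: the one-way delay is the same in both
    directions. (White Rabbit measures and corrects the asymmetry
    that this simple model ignores.)
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way link delay
    return offset, delay

# Example exchange, timestamps in nanoseconds:
offset, delay = ptp_offset_and_delay(t1=1000, t2=1850, t3=2000, t4=2350)
print(f"offset = {offset} ns, delay = {delay} ns")
# offset = 250.0 ns, delay = 600.0 ns
# -> the slave is 250 ns ahead and steers its clock back accordingly.
```

Repeating this exchange continuously lets every node in the network discipline its local oscillator against the master clock, which is what “sharing a common time” means in practice.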
It is not only in the context of network hardware with programmable logic that we find Open Hardware at CERN; we also find it in the tunnels of the Large Hadron Collider (LHC), where it is just as infrastructural and critical. The “Radiation Tolerant LED PSU” project [10], developed by the CERN engineers James Levine and Jean Marie Foray, provides open design files for a radiation-resistant power supply for the emergency lighting. It illuminates the underground pathways that are necessary for maintaining and upgrading the most important research instrument of the organization, and it also enables the exchange and collaborative development of solutions with research facilities that have similar needs. Open Hardware where you least expect it: serving the safety needs of research installations.
Another area where we find Open Hardware at CERN is its data center. The crucial role the organization has played in advancing techniques for large-scale data analysis is well known: it pioneered the acquisition, treatment, and storage of double-digit petabytes of data per year. To archive data at this scale, tapes are used, manipulated by robotic arms that load them into tape decks. The problem with this technology is that data often gets corrupted by specks of dust that enter the data center uninvited. To address this problem, the CERN engineer Julien Leduc created the “Data Centre Environmental Sensor” project with common parts of the Open Hardware tool-set [11]. The result: as levels of dust increase in the data center, the system prevents the tape decks from being loaded, mitigating the risk of corrupting data. Infrastructural and invisible to most experimental physicists, sure, but nonetheless critical if data preservation is to be taken seriously. Open Hardware, again, where you least expect it.
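To give a flavor of the interlock logic just described, here is a minimal sketch. The sensor readings, threshold, and function names are hypothetical stand-ins of my own; the actual project integrates real particulate sensors with the tape library’s control system:

```python
# A sketch of a dust interlock: if airborne dust exceeds a safe
# threshold, tape loading is inhibited until the air clears.
# Sensor, threshold, and control interface are all hypothetical.

import random
import time

DUST_THRESHOLD_UG_M3 = 12.0   # hypothetical particulate limit (micrograms/m3)
POLL_INTERVAL_S = 30          # how often to sample the sensor

def read_dust_level():
    # Stand-in for a real particulate-sensor driver; simulated
    # readings so the sketch runs end to end.
    return random.uniform(0.0, 25.0)

def set_tape_loading(enabled: bool):
    # Stand-in for the tape-library control interface.
    print("tape loading", "ENABLED" if enabled else "BLOCKED")

def interlock_step():
    dust = read_dust_level()
    # Block the robotic arms from loading tapes while dust is high,
    # so dirty air cannot reach the media inside the tape decks.
    set_tape_loading(enabled=(dust <= DUST_THRESHOLD_UG_M3))

if __name__ == "__main__":
    for _ in range(3):  # a few polling cycles for demonstration
        interlock_step()
        time.sleep(POLL_INTERVAL_S)
```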
And this is just the tip of the (now melting) iceberg. We do not know, in fact, where most applications are: infrastructures are supposed to be invisible, and relatively few people have the patience and funding to study them. What we do know, however, is that we have validated Open Hardware running in critical infrastructures. Open Hardware has proven, thus far, to be flexible and adaptable enough to cover a wide range of use-cases: from educational and hobbyist activities (which are essential for a vibrant community, let’s not discount them as “irrelevant”) to mostly invisible and infrastructural solutions operating in research labs around the globe. We can anticipate that much more is yet to come in terms of infrastructural Open Hardware. So, my friends, next time you hear the argument that Open Hardware is only good for hobbyists, reconsider this received wisdom in light of concrete examples in which it is not only infrastructural but also critical.
REFERENCES
[1] For one of the most generative definitions of the term, see Susan Leigh Star’s article “The Ethnography of Infrastructure”.
[2] Faugel and Bobkov 2013; Murillo 2016; Ali et al. 2016; Camprodon et al. 2019.
[3] Many thanks to Pietari Kauttu (CERN-KT), Javier Serrano, Erik van der Bij, and Maciej Lipinski (CERN-BE-CO hardware and timing section) for creating an amazingly supportive environment for this research. To learn more, please refer to the article I am preparing on the topic as well as [this wiki page] with further documentation.
[4] CERN Open Hardware brochure
[5] For more information about White Rabbit, visit the wiki page of the project.
[6] Integration of White Rabbit into the IEEE 1588-2019 standard.
[7] CERN Open Hardware License in its various versions.
[8] CERN Open Hardware repository
[9] CERN support for the KiCad project.
[10] Radiation Tolerant LED power supply unit project.
[11] Data Centre Environmental Monitor project.