Shortly after the open-source release of Razor in late May, we started to see requests from users for more information about the Microkernel that is used by Razor. What distribution was used as the basis for this Microkernel? What services does it provide? What is involved in building a custom Microkernel that will support my hardware? Will the Microkernel be open-sourced as well? If so, will it be part of the Razor project or a separate (but related) project?
At the same time that these requests started coming in from the Razor community, we saw our first Razor issue that was directly linked to the Razor Microkernel. The issue involved support for a networking card that we hadn’t seen before (the Broadcom NetXtreme II card): the Microkernel wasn’t checking in with the Razor server on machines that used this network card, because it couldn’t connect to the underlying network. In the end, the issue turned out to be that the firmware needed to support this network card was not included in the Microkernel (even though firmware for this card that would work with our Microkernel was readily available).
Our intention all along was to make this project publicly available, but we still hadn’t worked out the last remaining issues around automating the build of a Microkernel ISO from the Razor Microkernel project itself (at the time of the Razor release late last month, the process of building a new version of the Microkernel ISO “from source” was still fairly tedious, manual, and error-prone). We have finally resolved the last of these issues, and are proud to announce that the Razor-Microkernel project is now publicly available as an open-source project hosted by Puppet Labs. The source code for this project is freely available under a GPLv2 license (the license that governs the underlying Linux kernel that the Razor-Microkernel project is based on), and the project itself can be found here.
Given the general interest in the Razor Microkernel itself, Nick Weaver also asked me if I would be interested in guest writing a blog post that provides users with a bit more background about the Razor Microkernel (what it is, how Razor uses it, and how it can be customized). At the end of this post there are links to pages on the Razor-Microkernel Project Wiki, where you can find more in-depth information about the Microkernel (including a guide that will help you if you decide that you would like to build your own version of the Razor Microkernel to support your special needs).
Perhaps I should start out by introducing myself (since most of you don’t know me). My name is Tom McSweeney, and I’m one of the co-creators of Razor. While Nick was primarily focused on developing the Razor Server over the past few months, my primary focus has been on developing the Razor Microkernel (and helping out with the development of the Razor Server in my spare time). In terms of my background, I’ve been working as a Sr. Technologist in the Office of the CTO at EMC for about 5 years now. Prior to that, I worked for a number of years as a software architect in one of the Java Centers at Sun Microsystems. Overall, I’ve spent many years designing, developing, and deploying large-scale software systems for a variety of platforms, from servers to back-end telecommunications gear to embedded systems (and even handsets). In almost every case, discovery and provisioning of devices across the network was one of the harder (and in many cases critical) issues that had to be resolved.
Last fall, as part of an internal project at EMC, Nick and I were tasked with selecting a framework that could be used for power-control and bare-metal provisioning of servers in a modern datacenter environment. Ideally, the framework that we selected would support both bare-metal and “virtual bare-metal” provisioning, but we knew that provisioning an OS onto physical hardware was an absolute necessity for the use cases being considered for that project. After fighting with several existing frameworks that each claimed to have already solved this problem for us (including Baracus and Crowbar), we decided that it would probably be easier to build our own framework for this task than to try to make one of the existing frameworks do what we needed them to do.
Once we started putting together a design for our own solution (the solution that would eventually become Razor), one of the first issues that we had to resolve was exactly how we would discover new nodes in the network (so that the Razor server could start managing them). Whatever tool we used for node discovery would have to be able to provide the Razor Server with a view into the capabilities of those nodes (either physical or virtual) so that the Razor Server could use that meta-data to decide exactly what it should do with the nodes that were discovered.
After discussing alternatives, we decided that the best approach would be to use a small, in-memory Linux kernel for this node discovery process. There are a number of alternatives available today when it comes to small, in-memory Linux kernels (Damn Small Linux, SliTaz, Porteus, and Puppy Linux all come to mind), so we narrowed our choices down to just distributions with a total size smaller than 256MB (to speed up delivery of the image to the node) that were under active development, and that included a relatively recent Linux kernel (i.e. distributions built using a v3.0.x Linux kernel). As an additional constraint, we knew that we would be using Facter during the node discovery process, so we searched for distributions that included pre-built versions of Ruby and the system-level commands that Facter uses (like dmidecode). Finally, we knew that we would want to build custom extensions for our Microkernel (perhaps even commercial versions of these extensions), so we looked at distributions that provided an easy mechanism for building custom extensions and that were licensed under a “commercial friendly” open-source license.
Once we applied all of these constraints to the various distributions that we were comparing, one distribution clearly stood out, and that distribution was Tiny Core Linux. Tiny Core Linux (or TCL) easily met all of our constraints (even a few constraints that we hadn’t thought of initially):
Now that we had selected the distribution that we were going to use to build our Microkernel, it was time to turn our attention to the additional components that we would be deploying within that distribution to support the node discovery process.
In order to successfully perform node discovery, a number of standard TCL extensions (and their dependencies) are installed during the Microkernel boot process:
These extensions (which we’ll refer to as the “built-in extensions”) are set up to automatically install during the boot process and, as such, are readily available during the Microkernel setup and initialization process.
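To make the mechanism concrete, here is a minimal Ruby sketch of what boot-time extension installation might look like. It assumes TCL's standard `tce-load` tool (which installs `.tcz` packages); the extension names shown are illustrative, not the Microkernel's actual built-in list.

```ruby
# Illustrative list only; the real built-in extensions differ.
BUILT_IN_EXTENSIONS = %w[ruby dmidecode lshw].freeze

# Build the tce-load invocation for a single TCL extension (.tcz package);
# "tce-load -i" installs an extension into the running system.
def tce_load_command(ext)
  "tce-load -i #{ext}.tcz"
end

# Install each built-in extension in turn during boot.
def install_built_in_extensions(extensions = BUILT_IN_EXTENSIONS)
  extensions.each { |ext| system(tce_load_command(ext)) }
end

install_built_in_extensions if $PROGRAM_NAME == __FILE__
```

Because the extensions ship on the ISO itself, this step needs no network access, which is why these packages are available so early in the boot process.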
In addition to these “built-in extensions”, the Razor Microkernel also downloads and installs a set of “additional extensions”. These additional extensions are downloaded and installed from a TCE mirror (rather than being installed from a local directory in the Microkernel filesystem) at the end of the Microkernel boot process (rather than during the boot process). In the current release of the Razor Microkernel, there is only one “additional extension” that might be installed during the system initialization process: an extension that installs the Open VM Tools package (this extension is only installed when the Microkernel is deployed to a VM running in a VMware-related environment).
Additional extensions can also be provided by an external TCE mirror (perhaps even by the Razor Server itself), and it is a simple configuration change (on the Razor Server) to point the Microkernel at a different TCE mirror containing additional extensions that it should install. If additional extensions are installed from an external TCE mirror, they will be installed in addition to (not instead of) those that are installed from the internal TCE mirror after the boot process completes.
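As a rough sketch of how the Microkernel might resolve an extension's download location from such a mirror, consider the Ruby fragment below. The default mirror URL and the helper name are assumptions for illustration; the actual Razor Server configuration key and mirror layout may differ.

```ruby
# Hypothetical default; in practice this value comes from the Razor
# Server configuration and can be changed to point at an external mirror.
DEFAULT_TCE_MIRROR = 'http://localhost/tce-mirror'.freeze

# Resolve the download URL for a named extension on a given TCE mirror.
# TCL extensions are distributed as .tcz packages.
def extension_url(ext_name, mirror = DEFAULT_TCE_MIRROR)
  "#{mirror.chomp('/')}/#{ext_name}.tcz"
end
```

Because only the mirror URL changes, extensions installed from an external mirror go through exactly the same download-and-install path as those served from the internal mirror.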
As part of the Microkernel boot process, the Ruby Gems package is also installed (“from source”, using a gzipped tarfile that is bundled into the ISO itself). Once this package is installed, several Ruby gems are then installed as part of this same system initialization process. Currently, this list includes the following four gems:
Which gems are actually installed is determined by a list that is “burned into the Microkernel ISO”. The list itself is part of the Razor-Microkernel project, and the gems are meant to be downloaded (from a local gem repository) during the process of building the ISO (currently, this list is used to bundle a fixed set of gems into the ISO during the Microkernel ISO build process).
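The gem-list mechanism described above can be sketched in a few lines of Ruby. The file format (one gem name per line, with blank lines and `#` comments ignored) and the on-ISO paths are assumptions for illustration, not the project's actual layout.

```ruby
# Parse a gem-list file's contents: one gem name per line,
# ignoring blank lines and comment lines starting with '#'.
def parse_gem_list(contents)
  contents.lines.map(&:strip).reject { |l| l.empty? || l.start_with?('#') }
end

# Install each listed gem from .gem files bundled into the ISO.
# The /opt/gems path is hypothetical.
def install_bundled_gems(gem_list_path)
  parse_gem_list(File.read(gem_list_path)).each do |gem_name|
    system("gem install --local /opt/gems/#{gem_name}.gem")
  end
end
```

Installing from local `.gem` files (rather than from rubygems.org) keeps the boot process independent of outside network access, which matters on a node that may not yet have working connectivity.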
The final components that make up the Razor Microkernel are a set of key services that are started automatically during system initialization. This set of services includes the following:
Once the system is fully initialized, the components that are running (and the connections between them) look something like this:
Often, when we talk about the Razor Microkernel, we’re actually referring to the Microkernel Controller that is running within the Razor Microkernel (since that’s the component that interacts directly with the Razor Server) but, as is shown in this diagram, there are actually several services that all work together to provide the full functionality of the complete Razor Microkernel.
There are two basic operations that the Microkernel Controller performs when interacting with the Razor Server:
The Node Checkin process is periodic (with the timing defined as part of the Razor Server configuration). The Razor Microkernel simply sends a “checkin request” to the Razor Server every N seconds, and the Razor Server looks for that node in the list of nodes that it is managing. Based on what the Razor Server finds, one of three things might happen:
If the node registration process is triggered (either because the Razor Server has sent back a register command in the response to a checkin request, or because the Microkernel itself has detected that the facts gathered during the current node checkin differ from the facts that it last reported to the Razor Server), then a new set of facts for that node is reported to the Razor Server in a Node Registration request. This set of facts contains the latest information gathered by the Microkernel Controller (using Facter, combined with information gathered using the lshw and lscpu commands).
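The checkin/registration decision described above can be sketched as follows. The URL path, query parameter, and digest-based change detection are illustrative assumptions, not the actual Razor REST API; they are only meant to show the shape of the logic.

```ruby
require 'digest'
require 'json'

# Hypothetical checkin endpoint; the real Razor API path may differ.
def checkin_url(server, node_uuid)
  "http://#{server}/razor/api/node/checkin?uuid=#{node_uuid}"
end

# One cheap way to detect that the node's facts have changed since the
# last registration: compare a digest of the current facts against the
# digest recorded when the facts were last reported.
def facts_changed?(current_facts, last_reported_digest)
  Digest::SHA256.hexdigest(current_facts.to_json) != last_reported_digest
end

# Registration is triggered either by an explicit server command or by
# a locally detected change in the facts.
def should_register?(server_command, current_facts, last_reported_digest)
  server_command == 'register' ||
    facts_changed?(current_facts, last_reported_digest)
end
```

In the real Microkernel this decision runs on every periodic checkin, so a node whose hardware facts change (say, a hot-added NIC) is re-registered without any action from the server.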
In this posting we have described what the Razor Microkernel is and shown how it is used by the Razor Server for node discovery. We’ve shown how the Microkernel is constructed from the Tiny Core Linux “Core” distribution (the basis of the Razor Microkernel), and we’ve also broken down the Microkernel in a bit more detail to show how the services running within it are organized internally.
If you are looking for a more detailed view into the Razor Microkernel, we invite you to visit the Razor-Microkernel project page itself. That project is now freely accessible through the Puppet Labs GitHub site, and can be found here. This project contains the source code for the services that were described above, as well as the scripts that you need to build your own versions of the Razor Microkernel. The project site also includes several Wiki pages that provide more detailed information about the Razor Microkernel than we can provide in a blog posting like this. Of particular interest might be the following pair of pages:
Once again, we’d like to welcome you all to the new Razor-Microkernel project. As always, comments and feedback are welcome.