
Razor OSS #2 : Public Release of MK

Shortly after the open-source release of Razor in late May, we started to see requests from users for more information about the Microkernel that is used by Razor.  What distribution was used as the basis for this Microkernel?  What services does it provide?  What is involved in building a custom Microkernel that will support my hardware?  Will the Microkernel be open-sourced as well?  If so, will it be part of the Razor project or a separate (but related) project?

At the same time that these requests started coming in from the Razor community, we saw our first Razor issue that was directly linked to the Razor Microkernel.  The issue involved support for a networking card that we hadn’t seen before (the Broadcom NetXtreme II card) that was causing problems for a Razor user: the Microkernel wasn’t checking in with the Razor server on machines that used this network card because it couldn’t connect to the underlying network.  In the end, the issue turned out to be that the firmware needed to support this network card was not included in the Microkernel (even though firmware for this card that would work with our Microkernel was readily available).

Our intention all along was to make this project publicly available, but we still hadn’t worked out the last remaining issues around automating the build of a Microkernel ISO from the Razor Microkernel project itself (at the time of the Razor release late last month, the process of building a new version of the Microkernel ISO “from source” was still fairly tedious, manual, and error-prone).  We have finally resolved the last of these issues, and are proud to announce that the Razor-Microkernel project is now publicly available as an open-source project hosted by Puppet Labs.  The source code for this project is freely available under a GPLv2 license (the license that governs the underlying Linux kernel that the Razor-Microkernel project is based on), and the project itself can be found here.

Given the general interest in the Razor Microkernel, Nick Weaver also asked me if I would be interested in guest-writing a blog post that provides users with a bit more background about the Microkernel (what it is, how Razor uses it, and how it can be customized).  At the end of this post there are links to pages on the Razor-Microkernel Project Wiki, where you can find more in-depth information about the Microkernel (including a guide that will help you if you decide that you would like to build your own version of the Razor Microkernel to support your special needs).

A personal introduction

Perhaps I should start out by introducing myself (since most of you don’t know me).  My name is Tom McSweeney, and I’m one of the co-creators of Razor.  While Nick was primarily focused on developing the Razor Server over the past few months, my primary focus has been on developing the Razor Microkernel (and helping out with the development of the Razor Server in my spare time).  In terms of my background, I’ve been working as a Sr. Technologist in the Office of the CTO at EMC for about 5 years now.  Prior to that, I worked for a number of years as a software architect in one of the Java Centers at Sun Microsystems.  Overall, I’ve spent many years designing, developing, and deploying large-scale software systems for a variety of platforms, from servers to back-end telecommunications gear to embedded systems (and even handsets).  In almost every case, discovery and provisioning of devices across the network was one of the harder (and in many cases critical) issues that had to be resolved.

A bit of history

Last fall, as part of an internal project at EMC, Nick and I were tasked with selecting a framework that could be used for power-control and bare-metal provisioning of servers in a modern datacenter environment.  Ideally, the framework that we selected would support both bare-metal and “virtual bare-metal” provisioning, but we knew that provisioning an OS onto physical hardware was an absolute necessity for the use cases being considered for that project.  After fighting with several existing frameworks that each claimed to have already solved this problem for us (including Baracus and Crowbar), we decided that it would probably be easier to build our own framework for this task than to try to make one of the existing frameworks do what we needed them to do.

Once we started putting together a design for our own solution (the solution that would eventually become Razor), one of the first issues that we had to resolve was exactly how we would discover new nodes  in the network (so that the Razor server could start managing them).  Whatever tool we used for node discovery would have to be able to provide the Razor Server with a view into the capabilities of those nodes (either physical or virtual) so that the Razor Server could use that meta-data to decide exactly what it should do with the nodes that were discovered.

After discussing alternatives, we decided that the best approach would be to use a small, in-memory Linux kernel for this node discovery process.  There are a number of alternatives available today when it comes to small, in-memory Linux kernels (Damn Small Linux, SliTaz, Porteus, and Puppy Linux all come to mind), so we narrowed our choices down to just distributions with a total size smaller than 256MB (to speed up delivery of the image to the node) that were under active development, and that included a relatively recent Linux kernel (i.e. distributions built using a v3.0.x Linux kernel).  As an additional constraint, we knew that we would be using Facter during the node discovery process, so we searched for distributions that included pre-built versions of Ruby and the system-level commands that Facter uses (like dmidecode).  Finally, we knew that we would want to build custom extensions for our Microkernel (perhaps even commercial versions of these extensions), so we looked at distributions that provided an easy mechanism for building custom extensions and that were licensed under a “commercial friendly” open-source license.

Once we applied all of these constraints to the various distributions that we were comparing, one distribution clearly stood out, and that distribution was Tiny Core Linux.  Tiny Core Linux (or TCL) easily met all of our constraints (even a few constraints that we hadn’t thought of initially):

  1. TCL is very small (the “Core” distribution is an ISO that is only 8MB in size) and is designed to run completely in memory (the default configuration assumes no local storage exists and only takes up about 20MB of system memory when fully booted)
  2. TCL is built using a very recent kernel; the latest release of TCL (posted less than two weeks before this was written) uses a v3.0.21 Linux kernel, so we knew that it would provide support for most of the hardware that we were likely to see.
  3. TCL can easily be extended (either during the boot process or dynamically, while the kernel is running) by installing TCL Extensions (which we will call TCEs for short).  An extensive set of pre-built TCEs are available for download and installation (including Ruby).  The complete set of extensions can be found here.
  4. It is relatively simple to build your own TCE mirror, allowing for download and installation of TCEs from a local server (rather than having to pull down the extensions you need across the network).
  5. Tools exist to build your own TCEs if you can’t find a pre-built TCE for a package that you might need.
  6. The licensing terms under which TCL is available (GPLv2) are relatively “commercial friendly”, allowing for later development of commercial extensions for the Microkernel (as long as those extensions are not bundled directly into the ISO).  This would not be the case if a distribution that used a GPLv3 license were used instead.

Now that we had selected the distribution that we were going to use to build our Microkernel, it was time to turn our attention to the additional components that we would be deploying within that distribution to support the node discovery process.

Components that make up the Razor Microkernel

In order to successfully perform node discovery, a number of standard TCL extensions (and their dependencies) are installed during the Microkernel boot process:

  • ruby.tcz – an extension that provides everything needed to run Ruby (v1.8.7) within the Microkernel; all of the services written for the Microkernel are Ruby-based services, and this package provides the framework needed to run those services (and the classes they depend on).
  • bash.tcz – an extension containing the ‘bash’ shell; installed in case the ‘bash’ shell is needed (out of the box, only the ‘ash’ shell is provided by the TCL “Core” distribution)
  • dmidecode.tcz – an extension containing the dmidecode UNIX command; this command is used by Facter (and, as such, by the Microkernel Controller) during the node discovery process
  • scsi-3.0.21-tinycore.tcz – an extension that provides the tools, drivers, and kernel modules needed to access SCSI disks; without this extension any SCSI disks attached to the node are not visible to the Microkernel
  • lshw.tcz – an extension containing the lshw UNIX command; this command is used by the Microkernel Controller during the discovery process
  • firmware-bnx2.tcz – an extension that provides the firmware files necessary to access the network using a Broadcom NetXtreme II networking card during the system boot process; without this extension the network cannot be accessed using this type of NIC (which is fairly common on some newer servers).
  • openssh.tcz – an extension containing the OpenSSH daemon; this extension is only included in “development” Microkernel images.  On a “production” Microkernel image this package is not included (to prevent unauthorized access to the underlying systems via SSH).

These extensions (which we’ll refer to as the “built-in extensions”) are set up to automatically install during the boot process and, as such, are readily available during the Microkernel setup and initialization process.

In addition to these “built-in extensions”, the Razor Microkernel also downloads and installs a set of “additional extensions”.  These additional extensions are downloaded and installed from a TCE mirror (rather than from a local directory in the Microkernel filesystem) at the end of the Microkernel boot process (rather than during it).  In the current release of the Razor Microkernel, there is only one “additional extension” that might be installed during the system initialization process: an extension that installs the Open VM Tools package (this extension is only installed when the Microkernel is deployed to a VM running in a VMware-related environment).

Additional extensions can also be provided by an external TCE mirror (perhaps even by the Razor Server itself), and it is a simple configuration change (on the Razor Server) to point the Microkernel at a different TCE mirror containing additional extensions that it should install.  If additional extensions are installed from an external TCE mirror, they will be installed in addition to (not instead of) those that are installed from the internal TCE mirror after the boot process completes.
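As a rough illustration, pulling “additional extensions” from a TCE mirror boils down to invoking tce-load once per extension.  This is only a sketch of the idea, not the Microkernel’s actual code, and the extension list shown is an assumption:

```ruby
# Illustrative sketch (not the Microkernel's actual code): install a list of
# "additional extensions" once the boot process has completed.  tce-load
# pulls each extension (and its dependencies) from whatever TCE mirror the
# system is configured to use; the extension list here is assumed.
EXTENSIONS = %w[open_vm_tools.tcz]

def install_commands(extensions)
  # '-w' downloads from the mirror, '-i' installs into the running system
  extensions.map { |ext| ["tce-load", "-wi", ext] }
end

install_commands(EXTENSIONS).each do |cmd|
  # off a TCL system this is a harmless no-op; system returns nil/false
  ok = system(*cmd)
  warn "failed to install #{cmd.last}" unless ok
end
```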

As part of the Microkernel boot process, the RubyGems package is also installed (“from source”, using a gzipped tarfile that is bundled into the ISO itself).  Once this package is installed, several Ruby gems are then installed as part of this same system initialization process.  Currently, this list includes the following four gems:

  1. daemons – a gem that provides the capability to wrap existing Ruby classes/scripts as daemon processes (that can be started, stopped, restarted, etc.); this gem is used primarily to wrap the Razor Microkernel Controller as a daemon process.
  2. facter – provides us with access to Facter, a cross-platform Ruby library that is used by the Razor Microkernel to gather together many of the “facts” about the systems that it is deployed to (other “facts” are discovered using the lshw and lscpu UNIX commands).
  3. json_pure – provides the functionality needed to parse/construct JSON requests, which is critical when interacting with the Razor Server; the json_pure gem is used because it is purely Ruby based, so we don’t have to install any additional packages (like we would have to do if we were to use the more “performant”, but partly C-based, json gem instead).
  4. stomp – used by the MCollective daemon to provide external access to its agents via an ActiveMQ message queue

Which gems are actually installed is determined using a list that is “burned into the Microkernel ISO”.  The list itself is actually a part of the Razor-Microkernel project, and the gems are meant to be downloaded (from a local gem repository) during the process of building the ISO (although currently this list is used to bundle a fixed set of gems into the ISO during the Microkernel ISO build process).
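A minimal sketch of how such a burned-in list might drive gem installation during initialization; the file name and the use of a local gem cache are assumptions for illustration, not the project’s actual layout:

```ruby
# Sketch: read a gem list that was bundled into the ISO and install each gem
# from a local cache of .gem files.  'gem.list' and the '--local' install
# are assumed details, not the Razor-Microkernel project's actual names.
GEM_LIST = "gem.list"

def gems_to_install(path)
  return [] unless File.exist?(path)
  File.readlines(path).map(&:strip).reject(&:empty?)
end

gems_to_install(GEM_LIST).each do |gem_name|
  # install from a previously downloaded .gem file; no network access needed
  system("gem", "install", "--local", "#{gem_name}.gem")
end
```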

The final components that make up the Razor Microkernel are a set of key services that are started automatically during system initialization.  This set of services includes the following:

  1. The Microkernel Controller – a Ruby-based daemon process that interacts with the Razor Server via HTTP
  2. The Microkernel TCE Mirror – a WEBrick instance that provides a completely internal web-server that can be used to obtain TCL extensions that should be installed once the boot process has completed. As was mentioned previously, the only extension that is currently provided by this mirror is the Open VM Tools extension (and its dependencies).
  3. The Microkernel Web Server – a WEBrick instance that can be used to interact with the Microkernel Controller via HTTP; currently this server is only used by the Microkernel Controller itself to save configuration changes it might receive from the Razor Server as part of the “checkin response” (an action that actually triggers a restart of the Microkernel Controller by this web server instance).  In the future, we feel that this server is also the most likely interaction point between MCollective and the Microkernel Controller.
  4. The MCollective Daemon – as was mentioned previously, this process is not currently used, but it is available for future use
  5. The OpenSSH Daemon – only installed and running if we are in a “development” Microkernel; in a “production” Microkernel this daemon process is not started (in fact, the package containing this daemon process isn’t even installed, as was noted above).

Once the system is fully initialized, the components that are running (and the connections between them) look something like this:


Often, when we talk about the Razor Microkernel, we’re actually referring to the Microkernel Controller that is running within the Razor Microkernel (since that’s the component that interacts directly with the Razor Server) but, as is shown in this diagram, there are actually several services that all work together to provide the full functionality of the complete Razor Microkernel.
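To make the “save configuration, then restart” behavior of the Microkernel Web Server concrete, here is a small sketch; the file path, the JSON shape, and the key name are all assumptions for illustration, not the Microkernel’s actual values:

```ruby
require 'json'

# Sketch of the "save a new configuration received from the Razor Server"
# step handled by the Microkernel Web Server; the path and key name are
# assumed.  In the real Microkernel, a successful save triggers a restart
# of the Microkernel Controller so the new configuration takes effect.
CONFIG_PATH = "mk_conf.json"

def save_config(json_string, path = CONFIG_PATH)
  config = JSON.parse(json_string)          # validate before persisting
  File.write(path, JSON.pretty_generate(config))
  config
end

config = save_config('{"mk_checkin_interval": 60}')
puts config["mk_checkin_interval"]   # prints 60
```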

Interactions with the Razor Server

There are two basic operations that the Microkernel Controller performs when interacting with the Razor Server:

  1. Node Checkin – during this process, the Razor Microkernel “checks in” with the Razor Server to determine what, if anything, the Razor Server would like that Microkernel instance to do
  2. Node Registration – during this process, the Razor Microkernel reports the meta-data that it has gathered about the platform it is deployed on (its node) to the Razor Server; this meta-data is then used by the Razor Server to determine what should be done with that node.

The Node Checkin process is periodic (with the timing defined as part of the Razor Server configuration).  The Razor Microkernel simply sends a “checkin request” to the Razor Server every N seconds, and the Razor Server looks for that node in the list of nodes that it is managing.  Based on what the Razor Server finds, one of three things might happen:

  1. If it finds the node, and if the information for that node looks like it is up to date, then the Razor Server sends back an acknowledge command in its reply to this checkin request (an “acknowledge” command is basically a no-op).
  2. If the node cannot be found, or if the information for that node looks like it might be out of date, then the Razor Server sends a register command back to the Microkernel in the reply to that checkin request instead (and the Microkernel will start the process of node registration).
  3. Finally, if the node needs to be transitioned to a new state (to install a new OS onto the node, for example), the Razor Server can send a reboot command back to the Microkernel in the reply to the checkin request instead (and the Microkernel will reboot immediately).
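The three-way dispatch above can be sketched in Ruby roughly as follows; the JSON response shape is an assumption based on the commands described, not the Razor Server’s actual wire format:

```ruby
require 'json'

# Sketch of the Microkernel Controller's handling of a checkin response.
# The "command" field name is an assumed shape for illustration.
def handle_checkin_response(json_string)
  response = JSON.parse(json_string)
  case response["command"]
  when "acknowledge" then :noop           # node info is up to date
  when "register"    then :register_node  # (re)gather and report facts
  when "reboot"      then :reboot         # node transitions to a new state
  else :unknown
  end
end

handle_checkin_response('{"command": "acknowledge"}')  # => :noop
```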

If the node registration process is triggered (either because the Razor Server has sent back a register command in the response to a checkin request or because the Microkernel itself has detected that the current facts gathered during the node checkin process are different from the facts that it last reported to the Razor Server), then a new set of facts for that node is reported to the Razor Server in a Node Registration request.  This set of facts contains the latest information gathered by the Microkernel Controller (using Facter, combined with information gathered using the lshw and lscpu commands).
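A rough sketch of assembling such a registration payload; the field names, and deriving a node identifier from the MAC address, are illustrative assumptions (the real facts come from Facter plus parsed lshw and lscpu output):

```ruby
require 'json'

# Sketch only: combine Facter-style facts with facts parsed from lshw/lscpu
# into a single registration payload.  Field names are assumptions, not the
# Razor Server's actual registration format.
def registration_payload(facter_facts, extra_facts)
  { "uuid"  => facter_facts["macaddress"],   # assumed node identifier
    "facts" => facter_facts.merge(extra_facts) }
end

facter_facts = { "macaddress" => "00:11:22:33:44:55", "memorysize" => "96 GB" }
extra_facts  = { "mk_hw_lscpu_cpu_family" => "6" }

puts JSON.generate(registration_payload(facter_facts, extra_facts))
```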


In this posting we have described what the Razor Microkernel is, and we’ve shown how it is used by the Razor Server for node discovery.  We’ve shown how the Microkernel is constructed from the Tiny Core Linux “Core” distribution (the basis of the Razor Microkernel), and we’ve also broken down the Microkernel in a bit more detail to show how the services running within it are organized internally.

More Information

If you are looking for a more detailed view into the Razor Microkernel, we invite you to visit the Razor-Microkernel project page itself.  That project is now freely accessible through the Puppet Labs GitHub site, and can be found here.  This project contains the source code for the services that were described above, as well as the scripts that you need to build your own versions of the Razor Microkernel.  The project site also includes several Wiki pages that provide more detailed information about the Razor Microkernel than we can provide in a blog posting like this.  Of particular interest might be the following pair of pages:

  • An Overview of the Razor Microkernel – provides users with a high-level overview of the Razor Microkernel itself, including detailed discussions of the interactions between the Razor Microkernel and the Razor Server
  • Building a Microkernel ISO – describes the process of building your own Microkernel ISO (in detail) using the tools that are provided by the Razor-Microkernel project

Once again, we’d like to welcome you all to the new Razor-Microkernel project.  As always, comments and feedback are welcome.


Lex Parsimoniae : Cloud Provisioning with a Razor

Blogging has been very difficult for me over the last 4 months. My move to the Office of the CTO within EMC changed much of what I did and left me searching for content I could write about. Most of what I was dealing with on a daily basis was either too early to mention or too secret to reveal.

Today, this changes with the release of a project I have spent the majority of my days and nights working on this year. Without a long-winded wind-up, I am proud to announce the release of Razor, a cloud-provisioning tool that changes the way we look at provisioning hardware for cloud stacks.

Razor is a software application, combining Ruby (main logic) and Node.js (API, Image Service), that rapidly provisions operating systems and hypervisors for BOTH physical and virtual servers. It is designed to make standing up the base substrate underneath cloud deployments both simple and transactional.

Now at this point, many of you are thinking: “Great, another *cloud* provisioning tool.” And I don’t blame you at all. So what makes Razor different from many other tools out there like Cobbler, Dell’s Crowbar, or other deployment services? Just about everything.

The real answer to that question is related to the reason this project is named Razor. We based much of our design theory on Ockham’s razor. It is based on the belief that OS/hypervisor deployment should be simple, succinct, and incredibly flexible. Many products out there try to solve ALL the problems for every layer instead of focusing on their own layer correctly. They try to make their software layer the most important piece. Razor is designed to enable other tools rather than replace them.

As part of this, Razor was designed to be extremely simple to extend and manage, unlike other popular tools out there right now. Razor allows you to add support for an entirely new operating system with a single file, and it allows you to create multiple versions of an operating system model by changing a few lines. Our first release fully supports VMware’s ESXi 5, CentOS 6, openSUSE 12, Ubuntu Oneiric & Precise, and Debian Wheezy.

But the ability to extend Razor is just part of the magic. Another critical design decision is how Razor links to upper-level configuration. Early on, the team I belong to was researching and exploring different newer provisioning tools. We found that many had very limited support. Others were just glorified scripts. And some were even linked to DevOps tools (awesome) but chose a design where they wrapped a state machine around the DevOps state machine (not so awesome). And I won’t even start on the horrible installers we ran into. If it takes 60-120 minutes of manual work to set up a DevOps-integrated tool, then you are not getting the point behind DevOps.
We ended up at a point where we said to ourselves, “If I could have a new cloud provisioning tool, what would it look like?”. And we came up with some core ideas:

  1. Adding new OS or Hypervisor distributions should be simple – Mentioned this above, but many tools require major work to extend.
  2. Must be event-driven instead of user-driven – Many tools claim automation but require a user to have to select the 24 servers and push a button. We wanted Razor to enable users to create policy that automatically accomplishes what is needed when given physical or virtual hardware.
  3. Should have powerful vendor-agnostic discovery of physical or virtual compute resources – A powerful tool is useful whether it is a 5-year-old HP server, a KVM virtual machine, or a brand new Cisco UCS blade. It should be able to discover, understand, and provision to any and all of these based on a user’s needs.
  4. It should scale well – No monolithic structure. You should be able to run one instance or 50 instances without issue. This is a major reason why we chose Node.js for the API and Image Service layers. Event-driven and fast.
  5. It should directly integrate with DevOps toolsets to allow for cloud configuration – The biggest and most important requirement. And unlike tools that wrap and cripple a DevOps tool – Razor should integrate with them without affecting their ability to scale or manage resources.
  6. The control structure must support a REST interface out of the box – If you are going to build a system for automation, make sure it can be automated by another system.

With these requirements Razor was born. So, let me walk you through how Razor works and why this is so powerful.


Razor uses a powerful discovery mechanism that has a single purpose: find out what a compute node is made of. With Razor we designed what we call the MicroKernel (known as the MK) for Node discovery. When a server is booted, your DHCP server will point the server to a static PXE boot file provided by Razor. This PXE file will point the server at the Razor API and automatically pull down the MK and load it. The MK is tiny, around 20MB in size; it is an in-memory Linux kernel that will boot, inventory the node, contact the Razor server, and register the node with Razor. It will then sit idle and check in with a lightweight ping, waiting for Razor to tell it what to do. The control link between Razor and the MK on the nodes is via REST to Razor’s Node.js API.

The MK then uses Puppet Labs’ Facter to gather information on every piece of hardware, as well as what kind of server it is (virtual or physical) and even what kind of virtualization it is on (VMware, KVM, Xen). The MK sends this information back to Razor and even updates it should you change hardware on the fly. The end result is that Razor can automatically discover and know the makeup of hundreds of physical or virtual servers. It will know what CPU version, server vendor, how many NICs, how many physical disks, and much, much more. All of which becomes very important soon. This information is available for every node within Razor via the CLI or REST interface.


The next step is taking this inventory of nodes and classifying and carving it into something useful for deployment. This is where tagging comes in. In Razor you have a construct called a Tag Rule. A Tag Rule applies a Tag to a Node (a discovered compute node). A Tag Rule contains qualifying rules called Matchers. It may seem a little complex but is actually incredibly simple.

Let’s say you have 64 servers. 16 of them are brand new Cisco UCS blades. 18 of them are HP servers about 3 years old. And finally you have 24 old Dell servers that are quite dated. What you want to do is separate these servers by type so you can deploy and configure them differently. You want to put the Cisco UCS blades into a vSphere cluster for running Cloud Foundry. You want to take the HP servers and stand up an OpenStack test bed. And since the Dells are a bit dated, you want to provision them as development servers running Ubuntu for use by a development team.

Part of the beauty of Razor’s discovery mechanism is that it has already gathered all the information you need. Each Node contains attributes for both vendor and product name from the MK registration.

Tagging allows you to group these servers by applying a common tag. You create a Tag Rule called ‘CiscoUCS’. And then you add a single Matcher to that rule that says: if ‘vendor’ equals ‘Cisco’ then apply the Tag. Immediately, every Node Razor has that matches that rule will be tagged: ‘Cisco’. Likewise you can set up tag rules for the HP and Dell servers. You can also tag on things like how much memory or CPU version and create helpful tags like ‘big_server’, ‘medium_server’, or ‘small_server’. And Tags stack, too. So you can create multiple Tag Rules that apply to and classify complex servers. You can have a Node with [‘Cisco’,’big_server’,’4_nics’,’cluster_01’,’Dallas’] describing size, location, and grouping. Tag Rules also allow you to insert attributes into the Tag, so you can create rules that automatically produce names like ‘memory_96GB’ for one server and ‘memory_48GB’ for another.
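The Tag Rule / Matcher idea can be sketched in a few lines of Ruby; these classes are illustrative stand-ins, not Razor’s actual implementation:

```ruby
# Illustrative stand-in for Razor's Tag Rules: a rule applies its tag when
# every Matcher (attribute key => expected value) matches the node.
TagRule = Struct.new(:tag, :matchers) do
  def applies_to?(node_attributes)
    matchers.all? { |key, want| node_attributes[key] == want }
  end
end

cisco_ucs = TagRule.new("CiscoUCS", { "vendor" => "Cisco" })
big_box   = TagRule.new("big_server", { "memorysize" => "96 GB" })

node = { "vendor" => "Cisco", "productname" => "UCS B200 M2",
         "memorysize" => "96 GB" }

tags = [cisco_ucs, big_box].select { |r| r.applies_to?(node) }.map(&:tag)
# tags == ["CiscoUCS", "big_server"]
```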


So in our example use case we have now taken our 64 servers and applied a bunch of useful Tags to them. We now need to make those Tags useful and do something based on them. This is where Policy in Razor comes into play. A Policy in Razor is a rule that takes a Model (more on this in a second) and applies it to a Node (remember, a discovered server) based on matching against Tags. This is incredibly simple to set up. In our case we would have a Model called ‘ESXi5Standard’ which deploys VMware’s vSphere hypervisor to a Node. We would create a new Policy that says: if a Node matches the Tags ‘Cisco’, ‘big_server’, and ‘cluster_01’, then apply my ‘ESXi5Standard’ Model to this Node.

What makes this very cool is that it is completely automatic. As the Node checks in from the MK, the Razor Engine checks the Policies to see if there is a match. Policies work like a firewall rule list: evaluation starts at the top and moves down until it finds a match. When it finds that match it applies the Policy and binds the Model as the Active Model to the Node. This immediately starts applying whatever the Model specifies to the Node. In our case above, each one of our 16 UCS servers would quickly reboot and begin installing ESXi 5. What is more important to understand is that if only 1 Node was on when the rule was created, only 1 would be installed. But as long as the Policy is enabled, you could turn on the remaining 15 Nodes whenever you want and have them bind to the same Model and move on to become part of the vSphere cluster.
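The firewall-style, first-match policy evaluation described above might look like this in miniature (illustrative classes and policy data, not Razor’s actual code):

```ruby
# Illustrative sketch of Razor's policy matching: policies are checked in
# order, and the first one whose required tags are all present on the node
# wins; its Model is then bound to the node.
Policy = Struct.new(:name, :required_tags, :model) do
  def matches?(node_tags)
    required_tags.all? { |tag| node_tags.include?(tag) }
  end
end

POLICIES = [
  Policy.new("ucs_vsphere",  %w[Cisco big_server cluster_01], "ESXi5Standard"),
  Policy.new("hp_openstack", %w[HP], "RHEL6OpenStack"),
]

def bind_model(policies, node_tags)
  match = policies.find { |policy| policy.matches?(node_tags) }
  match && match.model
end

bind_model(POLICIES, %w[Cisco big_server cluster_01 Dallas])  # => "ESXi5Standard"
```

Nodes whose tags fit no policy simply stay idle, which mirrors the behavior described above.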

In our use case we would build 3 Policies: one for our UCS blades, one to install Red Hat on our HP servers for OpenStack, and one to install Ubuntu Precise on our Dell servers. Any servers not fitting the Policies remain idle. If you were handed 16 more UCS blades and wanted them to deploy to the same cluster, you would just have to turn them on and let Razor continue to enforce the appropriate Policy.


Let me take a second to describe how Models work. There are two components to the Model structure: the Model Template and the Model itself. The Model Template is one or many files that describe how to do something. They are actually very simple to write, and Razor comes with a bunch of Model Templates for installing a ton of common stuff. The Model is an instance of a Model Template plus some required metadata (like password, license key, hostname, etc.). You may want to install ESXi or Ubuntu differently depending on whether the destination is production or development, or based on factors like location. So you have the ability to use the Ubuntu Precise Model Template to create a Model for UbuntuProduction and a Model for UbuntuDevelopment with different settings for things like username, password, domain, etc. Or you can use one Model for all and let upper-level DevOps tools manage the configuration differences. I won’t go into how to create your own Model Template in this blog, but even the Model Template can be customized for the same OS but different needs.
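The template-versus-instance split might be sketched like this; the class and field names are illustrative assumptions, not Razor’s actual API:

```ruby
# Illustrative sketch: one Model Template, multiple Models that carry the
# deployment-specific metadata.  Names and fields are assumptions.
class ModelTemplate
  attr_reader :name

  def initialize(name)
    @name = name
  end

  # A Model is the template plus required metadata (hostname, domain, etc.)
  def instantiate(label, metadata)
    { "template" => name, "label" => label, "metadata" => metadata }
  end
end

precise = ModelTemplate.new("ubuntu_precise")

production = precise.instantiate("UbuntuProduction",
  "hostname_prefix" => "prod", "domain" => "prod.example.com")
development = precise.instantiate("UbuntuDevelopment",
  "hostname_prefix" => "dev", "domain" => "dev.example.com")
```

One template, two Models: the same install logic is reused while each Model carries its own settings, which is exactly the production/development split described above.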


So far I have covered how Razor allows for dynamic provisioning of OS/Hypervisor layers. But, if I stopped here then all I would have done is talked about a slightly better mousetrap. The creation of Razor was based on the principle of the simplest solution being the most elegant. And the point of deploying Ubuntu or vSphere is to host something at a higher level. That is where Brokers come into play and where the real magic of Razor is important.

If I want to deploy something like OpenStack I want to do it with a system that is designed to do it right. We looked at DevOps products like those from Puppet Labs and realized they are much better at managing configurations for cloud stacks. So by design, Razor is integrated to enable the handoff of a provisioned Node to a system that will manage it long-term.

To do this, Razor uses a Broker Plugin. A Broker is an external system, like a Puppet Master from Puppet Labs, that will properly configure a Node for its true purpose in life. Out of the box we have worked hand-in-hand with Puppet Labs to include a Broker Plugin for Puppet that enables both agent handoff (Linux variants) and proxy handoff (vSphere ESXi). There are a couple of really important things to point out here. First, we don’t wrap Puppet or attempt to control the Puppet Master from Razor (like the other guys do). Razor’s purpose in life is to get the Node attached to the right Model and get it to a state where it can give the Node to Puppet. It delivers all the metadata it gathered along the way, including tags. But once Puppet gives the thumbs up, Razor is done.

When a Broker like Puppet receives the Node, it can use the tags passed with it to make configuration decisions. You can link similarly tagged ESXi Nodes into the same cluster. You can set up one Node as a Swift proxy and the next five as Swift object servers based on tagging. The important thing here is that Puppet can consume the hardware details and classification and, in turn, convert provisioned Nodes into stacks of applications and services.
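As a rough illustration of that kind of tag-driven decision, here is an illustrative Ruby snippet (not actual Puppet classification code; the node names and roles are made up for the example):

```ruby
# Illustrative only: assigning roles to provisioned nodes based on the
# tags Razor passed along at handoff.

nodes = (1..6).map { |i| { name: "node#{i}", tags: ['swift'] } }

# One Swift proxy, the rest object servers -- the kind of simple rule a
# Puppet-side classifier could apply to a batch of identically tagged nodes.
roles = nodes.each_with_index.map do |node, i|
  role = if node[:tags].include?('swift')
           i.zero? ? 'swift_proxy' : 'swift_object'
         else
           'unclassified'
         end
  [node[:name], role]
end.to_h

puts roles['node1'] # => swift_proxy
puts roles['node6'] # => swift_object
```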

Which means, in the end, when everything is set up, Razor and Puppet can take groups of servers and turn them into deployed services automatically. They can scale them incrementally. And with the way binding works in Razor, you can even re-provision quickly.

This blog post is getting long enough as it is, so I will leave most of the configuration details for the videos below. I won’t cover how the entire control structure is available via a REST API (proper GET, POST, PUT, DELETE). I won’t mention the slick Image Service that allows you to load ISOs into Razor and choose which Models to attach them to. I will even skip the lightning-fast Node.js layer that dynamically serves the API and the Image Service to all Nodes. And I won’t mention the detailed logging you get with provisioning tasks, including the ability to define your own state machine within a Model.
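For the curious, talking to a REST API like Razor's from Ruby is straightforward. In this hedged sketch the host, port, and path are examples only, not Razor's documented routes — check the project docs for the real ones:

```ruby
require 'net/http'
require 'uri'

# Example endpoint -- hostname, port, and path are placeholders,
# not the documented Razor API routes.
uri = URI('http://razor.example.com:8026/razor/api/node')

req = Net::HTTP::Get.new(uri)
req['Accept'] = 'application/json'

puts req.method # => GET
puts uri.path   # => /razor/api/node

# Actually sending it would look like:
#   res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
#   puts res.body  # JSON list of nodes
```

The same pattern with `Net::HTTP::Post`, `Put`, and `Delete` covers the rest of the verbs the API supports.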

But I will mention the best part of the Razor announcement. EMC has decided to donate the Razor code for release to the Puppet Labs community under an open-source Apache license. We feel strongly that releasing Razor as open-source software, enabling the next generation of cloud computing integrated with proper DevOps toolsets, is critical to the community. We look forward to the cloud community helping us create something that benefits everyone.

At this point you may be asking how you can start using Razor. Unlike some other tools, Razor is incredibly simple to install and run. You can manually download the required pieces and clone the GitHub repo. But the slick cloud-magician way is to use Puppet itself to deploy Razor quickly and easily. Because what good is a DevOps-integrated tool if you cannot deploy it with a DevOps toolset?

Here is a quick video by Nan Liu of Puppet Labs showing how easy it is to install Razor via Puppet on Ubuntu Precise.

This video is a quick example of deployment with ESXi Models and handoff to Puppet Labs for automatic install of vSphere and configuration of clusters and virtual machines.

And finally here are some critical links to get started on working with Razor.

I will be following up this blog post with videos on how to use each component of Razor, examples of using the REST API, and much, much more.

Feel free to leave comments and questions here. But please use the resources Puppet Labs has setup for the community projects as well.

Thanks for reading this long post – and have fun cutting some servers up,


Manic Innovation Challenge, Scripts

UBERLinkedTwit : Chambers special

So my buddy posted the following tweet today:

And since I had some calls today I decided to kick off a project to do just that.

So here is a simple GitHub link for UBERLinkedTwit:

It is basically a command-line Ruby script that uses the LinkedIn gem to authenticate, pull your connections, and grab their Twitter IDs. I haven’t rigged it up to auto-follow using the Twitter gem yet, but that is pretty easy using the Twitter gem examples.

Hit the GitHub project, get LinkedIn developer keys, read the README, and have fun. Feel free to extend it, and I will pull your changes into the main project.


Cloud, Life/Work, Manic Innovation Challenge, Tools

Manic Innovation Challenge(Feb) : UBER Twitter Stats

UPDATE : As of last week, UBER Twitter Stats is offline. The account used for replying to requests was flagged as spam and blocked. I don’t plan on moving to a new account; instead, I am going to work on a replacement which should rock even more.

Continuing my pursuit of the Manic Innovation Challenge I am proud to release my newest *dumb* idea: UBER Twitter Stats!

NEW UPDATE – Based on some tips (thanks, Brian Katz @bmkatz), some of the details below have changed to make UBER Twitter Stats a little easier to use. The same old command style still works, but the newer one is much easier.

Written in 100% Ruby and running in the cloud, UBER Twitter Stats allows you to ask me (technically my cloud-like proxy @myubertwit) for interesting recent stats about your Twitter account. It is really quite simple.

Send a tweet to my app account (@myubertwit) with the text “<command>”, where the command is one of the following:

  1. My Word Count – Will reply with the top 20 words you used recently. This automatically strips out very common words. Shortcut: ‘mwc’
  2. Mention Word Count – This will reply with the same as above, but for tweets that mention or are addressed to you. Shortcut: ‘mmwc’
  3. Who I Mention – This will list the top 20 people you talk to or mention in your recent tweets. Shortcut: ‘wim’
  4. Who Mentions Me – This will reply back with the top 20 people who have mentioned you the most lately. Shortcut: ‘wmm’

If you get stuck just send @myubertwit ‘help’ to get the instructions back in a tweet.
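For the curious, here is a minimal Ruby sketch of what a command like “My Word Count” likely does under the hood: tally words across recent tweets, drop very common words, and keep the top entries. The stop-word list and sample tweets are made up for the example:

```ruby
# Minimal sketch of a "top words" tally, stripping out very common words.
# The stop-word list here is a tiny sample, not the one the app uses.
STOP_WORDS = %w[the a an and or to of in is it for on with my i you].freeze

def top_words(tweets, limit = 20)
  counts = Hash.new(0)
  tweets.each do |tweet|
    tweet.downcase.scan(/[a-z']+/) do |word|
      counts[word] += 1 unless STOP_WORDS.include?(word)
    end
  end
  counts.sort_by { |_, n| -n }.first(limit)
end

tweets = ['Razor makes provisioning fun',
          'provisioning with Razor and Puppet']
puts top_words(tweets, 3).inspect
# The top entries are "razor" and "provisioning", each counted twice.
```

The real service does the same idea at scale, paging through the Twitter API to gather as many recent tweets as the rate limits allow before counting.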

Depending on timing, a response may take up to a minute. Also, 90% of this was written while I was chilling in a cigar bar with some friends, so while I do have *some* error handling, it ain’t much. :) It should handle Twitter API limits and does a decent job of iterating to gather as many tweets as possible. For some high-volume users (like @Beaker) it gets pretty close to 1,000 tweets per request to analyze.

I WILL be releasing the source for this soon. I have to strip out some private stuff first, and this week I will be attending VMware Partner Exchange, Cloud Connect, and one other event along with a ton of meetings. Which means it might be a week or two before I have time to post it to GitHub for everyone.

Feedback is king – let me know if you like it or if you have any good ideas to add!