I've been working in the virtualisation and cloud space for a few years now, but for many of the experiments I want to do, having hardware that is local and visible to me makes it easier to think about what I'm exploring. This is especially true when I want to experiment with dynamic hardware situations.
The Circumference 25 is a cluster-in-a-box designed to host up to 8 Raspberry Pi 3 Model B+ boards fronted by a UDOO X86. A Crowd Supply campaign is in progress to build C25 and C100 boxes that provide everything but the computers for a data centre in a box you can put on a desk.
Courtesy of Ground Electronics, I have a beta build of the C25 to test its usefulness for OpenStack-related experiments. Mine came with the UDOO and 8 Raspberry Pis already in place. Looks like the one on the left:
The UDOO uses a 4-core Intel Celeron N3160 at 2.24 GHz and has 4GB of RAM, a 250 GB SSD, and three gigabit Ethernet ports.
The UDOO provides an Arduino-based interface to a backplane that allows software-driven (command line and Python library) control and observation of the 8 Pis, with commands like:
compute 3 off
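As a sketch of what that software control could look like from Python, here is a thin wrapper that shells out to the compute command shown above. The command and its arguments come from the example; the function name, the 1-8 slot range, and the injectable run parameter are my own assumptions, not the actual backplane library.

```python
import subprocess

def set_node_power(node, state, run=subprocess.run):
    """Drive the backplane's `compute` CLI to power a Pi on or off.

    `node` is the slot number (assumed 1-8) and `state` is "on" or "off".
    The `run` parameter lets a test inject a fake instead of actually
    executing the command.
    """
    if state not in ("on", "off"):
        raise ValueError("state must be 'on' or 'off'")
    if not 1 <= node <= 8:
        raise ValueError("node must be between 1 and 8")
    cmd = ["compute", str(node), state]
    run(cmd, check=True)
    return cmd
```

With a real backplane in place, set_node_power(3, "off") would issue the same compute 3 off seen above.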
Each Raspberry Pi is a 3 Model B+, connected to two Ethernet switches as well as the backplane.
The backplane provides access to a serial console, info about the individual devices and the "datacentre", and power control.
With the software control of power it is easy to model the addition and removal of nodes from the cluster.
This is a great way for me to experiment with real hardware failures and replacements in an OpenStack installation and to observe how those changes will impact the Placement service.
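One way to drive that kind of experiment is a loop that power-cycles each node in turn, pausing so services like Placement have time to notice each node disappearing and returning. This is only a sketch under my own assumptions: it shells out to the compute command mentioned earlier, and the function name and delay are invented.

```python
import subprocess
import time

def churn_nodes(nodes, wait=120, run=subprocess.run):
    """Power each Pi off and back on in sequence.

    `wait` is the pause in seconds between transitions, giving the
    cluster time to react. Returns the list of commands issued, which
    makes the loop easy to fake in a test.
    """
    issued = []
    for node in nodes:
        for state in ("off", "on"):
            cmd = ["compute", str(node), state]
            run(cmd, check=True)
            issued.append(cmd)
            time.sleep(wait)
    return issued
```

Between iterations you could record what the Placement service reports for each node and compare it against the power state you just set.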
Each Raspberry Pi can operate as a small nova-compute, with everything else running on the UDOO. My initial experiments have been promising but I ran into some issues with quickly creating a working compute node.
Initially I started by installing a limited devstack on the UDOO. This resulted in a working but very small OpenStack cloud, with the UDOO (named C25-03) as the sole hypervisor.
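For reference, a limited devstack is shaped by its local.conf; something along these lines trims the install down by disabling non-essential services. This is an illustrative sketch, not the configuration actually used, and the passwords are placeholders:

```ini
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# Skip the dashboard and the test suite to keep the cloud small.
disable_service horizon
disable_service tempest
```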
Next I tried adding one of the Raspberry Pis as an additional hypervisor. While I did eventually get this to work (cc1 is one of the eight Pis):
c25@C25-03:~/src/devstack$ openstack hypervisor list
+----+---------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP       | State |
+----+---------------------+-----------------+---------------+-------+
| 1  | C25-03              | QEMU            | 192.168.1.171 | up    |
| 2  | cc1                 | QEMU            | 10.1.1.1      | up    |
+----+---------------------+-----------------+---------------+-------+
there were issues that I will need to resolve for the next round:
nova-compute on the Pi itself is terrifically slow, and because of the way nova distributes itself, installation pulls in a lot of Python modules that aren't required for nova-compute itself. One way to improve this would be to create a single filesystem with nova-compute installed on it that each Pi can net boot. In addition, being able to describe solely the requirements of nova-compute, rather than all of nova, would be a helpful step towards this.
Once running, the default video and logging modes that nova uses when asking libvirt to build a VM did not work with the builds of qemu and libvirt available as Debian ARMv7 packages. I was able to hack around this by changing the nova code, but I would prefer a cleaner solution.
Getting networking to work effectively will require further work. For me, at least, networking always requires further work.
Discovering and resolving these kinds of issues is an excellent way to evolve OpenStack and makes the Circumference a useful tool, even before I'm able to do the "actual work" (whatever that may become).
When I return home from the OpenStack Summit I'm looking forward to working with the box more and will post additional reports on my experiments. What I hope to explore is controlling the backplane to add and remove compute nodes via code, faking network and power disruptions, and confirming or denying that the Placement service properly handles the changing resources. The Circumference provides a lot of hardware complexity (for example, multiple physical networks) and possibility in a small container, which will make the experiments more useful. These experiments can be modeled in tests, but experience has proven time and again that tests and real hardware are never quite the same, and new learnings abound.