One of my main concerns these days with OpenStack development, as a culture, is how infrequently some developers, notably me, actually do anything remotely real with OpenStack. This is a clear indicator (to me) of two issues we could work to resolve:
- the corporate sponsorship style of OpenStack development can lead to some developers who are over-tasked but under-experienced making design and development decisions that may be divorced from reality
- it can sometimes require too much effort to play with OpenStack in a concrete yet exploratory fashion
Or maybe that's just me.
In any case, I decided today was a day where I should do some multinode devstack as a bit of a refresher and should write what I did down for future reference.
Doing this is not to suggest that devstack counts as real OpenStack, but it does provide an opportunity to play and have visibility of many of the working parts.
I chose a blog post that Google revealed to me as my starting point: Devstack multinode with neutron and vagrant is from a few years ago, but provides a reasonable conceptual framework to get the ball rolling. The intent here is not to automate the process, but to observe it (of course that observation makes it clear why people want to automate it).
I'm working from a 16GB Mac Mini with VirtualBox and Vagrant. The following steps are not meant to be authoritative, they just happen to be what I used that managed to work. There may very well be better ways to do things. This is often the case. Please let me know if you have ideas.
Step One: Have Some VMs
Following the model in Ajaya's blog post, we build two VMs from a Vagrantfile. (There's an additional Vagrantfile which automates building the devstack parts described below, but not the network adjustments.) One is a controller + compute, the other a compute node only. The third host from that post is not required, at least the way my VirtualBox is set up.
$ vagrant up
$ vagrant ssh controller
Step Two: Get the Controller Hosting
Get devstack and stack with a reasonably basic local.conf. I disable horizon, cinder, tempest and dstat because we don't need those things for these experiments.
$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack
$ <copy local.conf here>
$ ./stack.sh
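The actual local.conf I used is in the repository linked at the end of this post. Purely as an illustrative sketch (every value below is an assumption of mine, not the contents of that file), a controller local.conf that disables those services might look something like:

```ini
[[local|localrc]]
# placeholder passwords, not from the real file
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# assumed: this VM's address on the management network
HOST_IP=192.168.100.10
# drop the services we don't need for these experiments
disable_service horizon
disable_service tempest
disable_service dstat
disable_service c-api c-sch c-vol
```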
One way to check that things are up in a sane state is to see if there are appropriate inventories reported by the resource tracker:
$ echo "select * from inventories;" | mysql -t nova_api
Step Two and a Half: Diddle the Networking
(This is the part that I somehow think should "just work", but manual intervention is required. Much of it is non-obvious if you aren't experienced in this area, or if you're in a hurry and don't want to concern yourself with the details. Unfortunately, skipping them often leaves you with stuff that doesn't work for no apparent reason.)
We created this host with two NICs, we need to get the second one involved as our external network. In these modern times the name of that interface isn't predictable so:
$ ip -4 address | grep -B 1 '200.10' # find the ethernet interface
$ sudo ovs-vsctl add-port br-ex enp0s9 # let ovs know about it
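Since the interface name isn't predictable, a small helper (my own convenience sketch, not from the original post) can pull it out of `ip -o -4 address` output instead of eyeballing it; the 200.10 pattern is the same one grepped for above:

```shell
# find_external_iface reads `ip -o -4 address` style output on stdin and
# prints the name of the interface carrying the 200.10 network; with -o
# each address is a single line and field 2 is the interface name.
find_external_iface() {
    awk '/200\.10/ {print $2; exit}'
}

# On the controller it would be used like (needs the VM, so commented out):
#   IFACE=$(ip -o -4 address | find_external_iface)
#   sudo ovs-vsctl add-port br-ex "$IFACE"
```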
Then we need to add some security rules, but they need to go in the default security group for the project we are currently in. What that is depends on how you've authenticated yourself to your devstack cloud. I always do `. openrc admin admin`, which may not be the most secure but gives me the most capabilities. It means my "project" is "admin". The other common option in devstack is "demo".
We need to find the id of the security group for the project:
$ SECURITY_GROUP=$(openstack security group list \
--project admin -f value -c ID)
and then add rules for ping and ssh:
$ openstack security group rule create --proto icmp \
--dst-port 0 $SECURITY_GROUP
$ openstack security group rule create --proto tcp \
--dst-port 22 $SECURITY_GROUP
Devstack created a private-subnet and a public-subnet. Let's boot a VM on the private-subnet to make sure that stuff is working:
$ IMAGE=$(openstack image list -f value -c Name) \
# there's only one image in default devstack these days
$ NET_ID=$(openstack subnet show private-subnet -f value -c network_id)
$ openstack server create --flavor c1 --image $IMAGE \
--nic net-id=$NET_ID x1
Check that it is running:
$ openstack server show x1 # remember the assigned internal IP address
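Rather than rerunning that show command by hand until the status settles, here's a sketch of a polling helper (my own convenience, not from the post). The status command is passed as arguments, so the real `openstack server show x1 -f value -c status` call is just one possible usage:

```shell
# wait_for_active polls a status-printing command until it reports ACTIVE,
# giving up after the given number of tries, or immediately on ERROR.
wait_for_active() {
    local tries=$1; shift
    local status i
    for i in $(seq "$tries"); do
        status=$("$@")
        [ "$status" = "ACTIVE" ] && return 0
        [ "$status" = "ERROR" ] && return 1
        sleep 1
    done
    return 1
}

# Usage against the server booted above:
#   wait_for_active 30 openstack server show x1 -f value -c status
```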
Add a floating IP:
$ openstack port list # retrieve the port id of that address
$ openstack floating ip create public \
# create a floating ip on the public net
$ openstack floating ip set \
--port 39c9b1fc-39e8-42e0-97aa-5a84bdfd550a 192.168.200.14
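Since this dance gets repeated later, here's a sketch wrapping the three commands into one helper. It assumes a reasonably recent openstackclient where `openstack port list --server` and the `-f value -c` output options exist; the server name is whatever you booted:

```shell
# assign_floating_ip creates a floating IP on the public network, attaches
# it to the given server's port, and prints the address.
assign_floating_ip() {
    local server=$1
    local port_id fip
    port_id=$(openstack port list --server "$server" -f value -c ID)
    fip=$(openstack floating ip create public -f value -c floating_ip_address)
    openstack floating ip set --port "$port_id" "$fip"
    echo "$fip"
}

# Usage (needs the devstack cloud, so commented out):
#   FIP=$(assign_floating_ip x1)
#   ssh "cirros@$FIP"
```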
The controller host currently has no route to that IP, but we should be able to see it from the physical host:
$ ssh cirros@192.168.200.14 # password is `cubswin:)`
That ought to work.
(From a placement standpoint you can do `echo "select * from allocations;" | mysql -t nova_api` and gaze upon the allocations that have been made for the instance you created. If you used the c1 flavor there will be no disk allocations.)
Step Three: Add the Compute Node
Now that we have a working mini-cloud, we can add an additional compute node. From the physical host:
$ vagrant ssh compute1
Here we're going to set up a simpler devstack which uses the controller as its service host. Thus we need a different local.conf:
$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack
$ <copy local.conf here>
$ ./stack.sh
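Again, the real file is in the linked repository; as a sketch only (all addresses and values here are assumptions), a compute-node local.conf typically points the standard service variables at the controller and enables only the compute-side services:

```ini
[[local|localrc]]
# placeholders: this VM's management IP and the controller's
HOST_IP=192.168.100.11
SERVICE_HOST=192.168.100.10
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# run only the compute node and its neutron agent
ENABLED_SERVICES=n-cpu,q-agt
```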
When this is done, there are a few ways to confirm it had some degree of success. From the controller node:
$ echo 'select * from resource_providers;' |mysql -t nova_api \
# will have two resource providers
$ nova service-list # should list two nova-compute
$ nova hypervisor-list # this lists one but it should be two!
Cells v2 is now the default in OpenStack, so we need an additional step (also on the controller node):
$ nova-manage cell_v2 discover_hosts
$ nova hypervisor-list # now it's two!
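If running discover_hosts by hand gets old, nova's scheduler can do the discovery periodically. Setting this in the controller's nova.conf avoids the manual step for future compute nodes (the option is real; the 300-second value is just an example I'm assuming):

```ini
[scheduler]
# look for newly added cell hosts every five minutes
discover_hosts_in_cells_interval = 300
```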
Now let's launch multiple new instances. Depending on how many you've already started, you may hit a quota problem here. If so, lower the numbers for `min` and `max`:
$ openstack server create --flavor c1 --image $IMAGE \
--nic net-id=$NET_ID --min 9 --max 9 m
At this point neutron will spank your controller host, especially if memory is tight. Once things have quieted down you can list those hosts that ended up on the new compute node:
$ openstack server list --host compute1
And we can repeat the floating ip dance from above to make one of those hosts remotely reachable:
$ openstack port list \
# retrieve the port id of one of the hosts
$ openstack floating ip create public \
# create a floating ip on the public net
$ openstack floating ip set \
--port d1daa7ab-3f2d-432d-977e-5bd8d2d4337b 192.168.200.3
From the physical host we can ssh to it:
$ ssh cirros@192.168.200.3
But there's still one thing that's not working. The VMs hosted by the devstack can't reach out to the rest of the internet. Were I on a Linux physical host this would be relatively straightforward to fix with an iptables rule that allows SNAT, but since I'm on OS X at the moment it's complicated enough that I'll leave it as an exercise for some other time (or some generous soul might explain in the comments).
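For the record, on a Linux physical host the fix would be roughly the following (untested here; the CIDR is the public net used above, and the helper deliberately only prints the commands so they can be reviewed before being applied with root):

```shell
# snat_commands prints (does not run) the commands that would enable
# forwarding and masquerade traffic from the devstack public network out
# through the physical host's default route.
snat_commands() {
    cat <<'EOF'
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.200.0/24 ! -d 192.168.200.0/24 -j MASQUERADE
EOF
}

snat_commands
# review the output, then: snat_commands | sudo sh
```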
You can find the files I used for this, including a Vagrantfile that automates the devstack build parts, on GitHub.