Update: If you want to experiment with the exact code that was used to write this blog posting, it is the code at this commit. The branch has since evolved to address some of the scaling issues described within. A post about that is forthcoming.
This is the fourth in a series of posts about experimenting with OpenStack's placement service in a container. In the previous episode, I was working on further limiting the modules being imported into the service. Today I report on two recent efforts:
- Running placement standalone with its own sqlite database within the container, and without keystone auth.
- Running the resulting container in kubernetes with a reasonable HTTP setup, as a learning exercise.
These two goals are somewhat at odds with one another. Having the database in the container is useful for throwaway testing, but it defeats the usefulness of kubernetes: with no shared storage, a deployment with four replicas is really four different services. So the outcome here is to run only one container, but enough was learned that if/when there is a Playground 5 it should be possible to scale to infinity and beyond.
For the time being this work is being done on a branch of placedock. I eventually intend to consolidate this back to an intermediate image from which different variants (uwsgi-protocol-based shared storage, for use with a running devstack; HTTP-based shared storage; HTTP-based throwaway storage) can be built. As a reminder: at no point should this Dockerfile or any of the associated code be considered correct. It's fodder for experimentation and learning.
Standalone Throwaway Storage
I intend, at some point, to write a somewhat tongue-in-cheek exploration of using the placement service to manage the care and feeding of cats: something that operates both as a tool for getting my mind right about what placement is, in its essence, and as a kind of tutorial for coming at it from a different angle. To make it useful I want it to be easy for the reader to play along. A container is one way to accomplish that, but by default placement wants to share a database with nova, use mysql, and engage with the keystone identity service. That's heavy.
Making a lighter container required some changes:
- Getting rid of the VOLUME that was sharing configuration settings into the container. Better to just copy them in.
- Setting the database connection string to sqlite and a file local to the container (a rough sketch of such a config follows this list).
- Figuring out how to create the database tables.
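To be concrete, the sort of thing being copied in looks roughly like the following. This is a sketch only: the file path, section, and option names here are assumptions for illustration, not a copy of what is in the container.

# bake the settings into the image instead of mounting a VOLUME;
# path, section, and option names are assumptions for illustration
cat > /etc/nova/nova.conf <<EOF
[DEFAULT]
debug = True

[placement_database]
connection = sqlite:////placement.db
EOF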
The third step proved time consuming. As one of the original goals was to limit the number of nova modules being imported, it was not simply a case of calling the same code used by the nova-manage api_db sync command. Instead, calling the migrate module directly was the way to go. Teasing out the correct arguments required some trial and error.
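If you want to see the shape of it from a shell, sqlalchemy-migrate also installs a migrate command that wraps the same API that sync.py calls from Python. Something roughly like this, where the sqlite file location is made up and the repository is nova's api database migration repository:

# an approximation of what sync.py ends up doing, but from the command line
# using the migrate script that sqlalchemy-migrate installs
REPO=$(python -c 'import nova.db.sqlalchemy.api_migrations.migrate_repo as r; print(r.__path__[0])')
migrate version_control sqlite:////placement.db $REPO
migrate upgrade sqlite:////placement.db $REPO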
Then it turned out that one of the migrations imports nova.objects.keypair to get at a static string constant. This results in all the nova objects being imported, setting off a cascade of dependencies.
Simply deleting the offending migration doesn't work because the migration system will not accept gaps in the numbered files. So I dynamically rewrite the file to have an empty upgrade method.
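The effect is roughly as if you had replaced the offending file with a no-op by hand before running the migrations; something like the sketch below, where the file name is a placeholder (sync.py does the equivalent rewrite dynamically in Python):

# blank out the problem migration so the version numbering keeps no gaps;
# "0NN_the_offending_migration.py" is a placeholder, not the real file name
cat > $REPO/versions/0NN_the_offending_migration.py <<'EOF'
def upgrade(migrate_engine):
    # intentionally empty: placement does not need what this migration creates
    pass
EOF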
Another, more complex-in-action but tidier-in-result, operation might be to copy and renumber the migration files of only those tables that are used by the placement service. I didn't try this because I was in a hurry and we're going to have to address placement database management as part of extraction. This will likely change much of how things are done, including the base schema.
The resulting database migration code is in sync.py and it works. The container it produces, fronted by either apache2 or nginx, behaves as desired. I ran many concurrent requests against it and discovered a bug with some database sync checks.
Now With Kubernetes!
Once I got that working, I wanted to make it work with kubernetes. minikube makes this pretty easy, and the instructions associated with minikube and kubernetes itself are good. If you're on Linux and using nested virtualization, you may run into difficulties. I had enough of them that I decided to put that off until some other time, once I've learned a lot more about virsh and esxi than I knew before today.
As explained above, the main reason for doing this is learning, because the scaling possibilities are lost with the throwaway database. It does, however, make for a pretty quick way to set up and explore, so I've written the steps here in case others would like to follow along.
Assuming you've got a running minikube and you've got a checkout of the self-contained-authless branch
git clone -b self-contained-authless \
git@github.com:cdent/placedock.git && \
cd placedock
you can build, deploy, and run the service with the following steps (in bootstrap.sh in the repo):
# use the right docker, the one in the minikube
eval `minikube docker-env`
# build the image via Dockerfile
docker build -t placedock:1.0 .
# create the deployment
kubectl apply -f deployment.yaml
# expose it
kubectl expose deployment placement-deployment --type=LoadBalancer
# get the URL
PLACEMENT=`minikube service placement-deployment --url`
# Get the version doc
curl -H 'x-auth-token: admin' $PLACEMENT
echo
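# if gabbi-run is available, run the gabbi tests against the service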
type gabbi-run && gabbi-run -v all $PLACEMENT -- gabbi.yaml
It even runs some gabbi tests if you've got gabbi installed.
The most important change in the container is the way in which the placement WSGI application is run. In previous iterations a uwsgi protocol listener was run and needed to be fronted by an HTTP server. Based on some reading, this isn't ideal in a kubernetes scenario, as a LoadBalancer service can take care of things for you. In that kind of setup, the uwsgi server in the container should listen on HTTP.
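For the curious, the difference amounts to how uwsgi is told to listen. Here is a sketch of the two modes, assuming nova's nova-placement-api WSGI script is on the path; the ports and process counts are made up:

# old style: speak the uwsgi protocol, expecting apache2 or nginx in front
uwsgi --socket :8080 --wsgi-file $(which nova-placement-api) --processes 2

# new style: speak HTTP directly and let the LoadBalancer service route to it
uwsgi --http :80 --wsgi-file $(which nova-placement-api) --processes 2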
If everything goes well (you get a version document and maybe some passing gabbi tests) you now have a running placement service on which you can create resource providers, set inventory, and so on.
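For example, creating a resource provider looks like this (the name is arbitrary; the token header is the same fake one used above):

# create a resource provider, then list providers to see it
curl -s -X POST -H 'x-auth-token: admin' \
     -H 'content-type: application/json' \
     -d '{"name": "cat-tree-1"}' $PLACEMENT/resource_providers
curl -s -H 'x-auth-token: admin' $PLACEMENT/resource_providers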
When you're done, the cleanup.sh script can tidy up the mess:
eval `minikube docker-env`
kubectl delete service placement-deployment
kubectl delete deployment placement-deployment
docker rmi -f placedock:1.0
Please leave some comments if you have questions or there's a better way to do some of the things I've done. Next time we'll do some autoscaling with a shared database.