
Placement Container Playground

I've been experimenting with the OpenStack placement service in a container recently. This is a part of exploratory research to see what will be involved in doing one or both of the following:

  • Making a lightweight, but working, install of the placement service that includes as few imports and as small a configuration as possible with minimal knobs.

  • Eventually extracting placement from the large and convoluted nova repo into its own thing.

What follows are some notes, mostly to myself but published here in case others find them useful, about playing with putting placement in a docker-managed container and peeling off some of the useless bits.

Note: The reason for doing this is not to create a working container for placement. If that were the case I'd be collaborating with Kolla and OpenStack Ansible people. No, the goal here is to do things the manual, crufty, hard way to expose issues and learn some stuff.

I tried to keep track of myself in a git repo, called placedock, where, when I remembered, I would commit some state. I tested as I went by having a running devstack where the placement service from devstack was replaced by the one managed by the container. In both cases the requests to placement are proxied through apache2 using mod_proxy_uwsgi, so all I had to do was change the target of the proxy. Testing was driven by the gabbi-tempest plugin. This worked very well.
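Switching between the two services amounts to a one-line change in the apache2 config. A sketch, assuming the usual devstack mod_proxy_uwsgi setup (paths and addresses here are illustrative, not copied from the repo):

```apache
# Original: proxy to the devstack-managed placement over its unix socket.
# ProxyPass "/placement" "unix:/var/run/uwsgi/placement.socket|uwsgi://uwsgi-placement/"

# Replacement: proxy to the container-managed placement instead,
# whichever socket or address the container happens to expose.
ProxyPass "/placement" "uwsgi://127.0.0.1:8080/"
```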

The first commit provides just a Dockerfile that uses an Alpine base and installs the binary requirements that allow a pip install -e of nova to succeed.
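The general shape of such a Dockerfile looks something like the following sketch. The package names, versions, and git URL are illustrative from memory, not taken from the repo:

```dockerfile
FROM alpine:3.7

# Binary and build dependencies so that pip can compile
# nova's C-extension requirements.
RUN apk add --no-cache python3 python3-dev git gcc musl-dev \
    libffi-dev openssl-dev linux-headers

# An editable install of nova brings in the placement service.
RUN git clone https://git.openstack.org/openstack/nova /nova \
    && pip3 install -e /nova
```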

The nova used there is master plus a long running change that allows the database connection for placement to be explicitly for placement (as [placement_database]/connection in the configuration file). Note that just because there's a different connection configuration setting doesn't mean it has to be a different database. In my tests placement data continues to be stored in the nova_api database.

The second commit results in a running placement service, but it uses a very large nova.conf with plenty of stuff that is not needed. Because of a placement bug (now fixed), extra steps of linking the conf file within the container are required. That was cleared up in a later commit.

The commit that trims the nova.conf is nice because it removes a lot of meaningless cruft and results in a reasonable conf file that can be the same across several instances of the placement service:

[DEFAULT]
debug = True

[placement_database]
connection = mysql+pymysql://root:secret@192.168.1.76/nova_api?charset=utf8

[keystone_authtoken]
memcached_servers = 192.168.1.76:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = secret
username = nova
auth_type = password
service_token_roles_required = True
auth_url = http://192.168.1.76/identity
www_authenticate_uri = http://192.168.1.76/identity

Even this is longer than it really should be. Ideally all of this could be communicated via environment variables passed to the container and no file would be needed at all. There are other options too.
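For now the conf file travels into the container as a mounted volume. A hypothetical invocation (image name and paths are made up):

```shell
docker run -d \
    --name placement1 \
    -v /etc/placedock/nova.conf:/etc/nova/nova.conf:ro \
    placedock
```

The environment-variable approach would replace the -v mount with a handful of -e flags, assuming the service learned to read its settings from the environment.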

The most fun commit is the one that trims the python modules. Nova's requirements.txt is 65 lines long and the cascade of transitive dependencies is huge. This change gets the explicit requirements down to less than 30, but the transitive dependencies remain huge because of the package structure in Nova. It's fairly common in nova for __init__.py to contain imports, which means any packages within suffer those imports too. This makes fine sense if there is only one monolithic entry-point, but in nova there are several different api services and command line tools, running in different processes, not all of which need all that stuff.
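The effect is easy to demonstrate outside of nova. This sketch (package and module names are made up) builds a tiny package whose __init__.py does an import, then imports only a submodule:

```python
import os
import sys
import tempfile

# Build a throwaway package whose __init__.py imports something,
# mimicking the nova layout described above.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "bigpkg")
os.makedirs(pkg_dir)

with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("import json  # stand-in for a heavy transitive dependency\n")

with open(os.path.join(pkg_dir, "light.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, root)
import bigpkg.light  # we only want the light module...

# ...but __init__.py ran anyway, dragging in its imports.
print(hasattr(sys.modules["bigpkg"], "json"))  # True
print(bigpkg.light.VALUE)  # 42
```

Every process that imports any corner of the package pays for everything __init__.py pulls in, which is exactly the cost a lightweight placement wants to avoid.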

There are some short term fixes linked from that bug, but the long term fix to get a lightweight placement is to either change the __init__.py files that impact it, or (much better) extract to its own repo. Extraction comes with its own challenges but there may be some tricks to make it less painful (more to be written about that soon).

The final commit in the repo (thus far) switches the uwsgi configuration to use a TCP socket instead of a unix socket. This makes it possible to distribute multiple placement containers across multiple hosts and address them from a reverse proxy. The commit message describes how things are hooked up to apache2 (the http server usually used with devstack).
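The uwsgi side of that change is small. A sketch, with illustrative values (the wsgi entry point and other options are whatever the deployment already uses):

```ini
[uwsgi]
; Listen on TCP so containers on other hosts are reachable,
; instead of: socket = /var/run/uwsgi/placement.socket
socket = 0.0.0.0:8080
; remaining options (processes, wsgi entry point, etc.) unchanged
```

On the apache2 side the reverse proxy then addresses each container as uwsgi://host:port.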

Things to do Next

  • Make it smaller. It ought to be possible to incrementally chip away at the imports, but it is quite a tangled mess.
  • Copy the config files into the containers rather than sharing them over a volume.
  • Go a step further and explore the work on etcd backends for oslo.config. This won't help the uwsgi config, but that ought to be fairly static. The dynamic configuration values are the database connection string and the authtoken settings.
  • Try it with more than two placement services (works fine with two).
  • Try it with multiple database servers (galera?).

© Chris Dent. Built using Pelican. Theme by Giulio Fidente on github.