This is part two of a series about the server scheduler in OpenStack Nova. See part one.
Warning: This document describes code that is (as of December 2015) in development; some of it is not yet released. Assuming everything goes well, the full RequestSpec object will be available as part of the Mitaka release.
The Nova scheduler runs as a separate service (nova-scheduler) within
the suite of Nova services. That service presents a narrow RPC
interface: a few different methods to update information used by the
scheduler to make decisions (to be considered in a later post) and a
select_destinations method which returns a list of candidate hosts
on which to "schedule" or "place" the requested instances.
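The narrow interface described above might look roughly like the following sketch. This is illustrative only, not the real Nova code: the class and method bodies here are hypothetical stand-ins for the scheduler's RPC surface, showing the shape of "a few update methods plus select_destinations".

```python
# Hypothetical sketch (not real Nova code) of the scheduler's narrow
# RPC surface: methods that update the information used to make
# decisions, plus select_destinations, which returns candidate hosts.

class SchedulerRPCSketch:
    def __init__(self):
        # host name -> dict of state the scheduler tracks for that host
        self.host_state = {}

    def update_aggregates(self, aggregates):
        """One example of an update method feeding the scheduler's view."""
        for agg in aggregates:
            for host in agg.get("hosts", []):
                self.host_state.setdefault(host, {})["aggregate"] = agg["name"]

    def select_destinations(self, request_spec):
        """Return candidate hosts on which to place the requested instances."""
        wanted = request_spec["num_instances"]
        candidates = [
            host for host in self.host_state
            if host not in request_spec.get("ignore_hosts", [])
        ]
        return candidates[:wanted]
```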
In the modern scheduler,
select_destinations is passed a RequestSpec object
which represents the number of instances a user agent requires
and the constraints which must be satisfied. Those constraints can
be simple scalar data but are often nested complex objects. The
current fields are described below. These are hand-wavey overviews
for the sake of brevity; for details follow the link above.
- A description of the requirements
of the image being used as the base for the requested instances.
These can be fairly mundane (e.g. min_disk required) or rather esoteric (specifications of hardware features provided by the host; see ImageMetaProps for more information).
- A description of the desired NUMA setup.
- A list of the required PCI devices.
- The project of which these instances will be a part.
- The target availability zone (different power supplies, racks and the like)
- An object representing the flavor to be used for the instances.
The object contains information such as the amount of RAM, number of
virtual CPUs and size of root disk required. In the future this
information will most likely be carried within the
RequestSpec as individual entries, to enhance the flexibility of the requirements and decouple flavors so that they remain part of just the external API.
- The number of instances required.
- Hosts to ignore; don't put instances on these hosts.
- Hosts or nodes where instances must be placed. The difference between a host and a node is subtle and will not be addressed here. When not using Ironic, nodes don't usually come into the picture.
- An object describing whether scheduling should be retried upon failure, the number of times fulfillment of the request has been attempted, and the hosts already tried. Once a configurable maximum number of attempts has been reached, the request fails. This information is also used while the instances are being built.
- A SchedulerLimits object. Haven't fully figured this out yet, but it has something to do with managing resource limits in relation to the extent to which resources can be overcommitted on a host. These limits are created during the scheduling process and then used when requesting that a specific host build an instance on a node.
- An object referencing a logical group of instances that share an
affinity or anti-affinity policy. This makes it possible to state that instances should or should not be on the same host.
- A set of key and value pairs which provide a general but perhaps overly fecund method of passing information (hints) to various filters.
- The identifier of the instance being used to set aspects of this request spec.
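Taken together, the fields above can be loosely sketched as a plain Python structure. The names and types here are illustrative only; the real RequestSpec is an oslo.versionedobjects class in Nova whose field names and types differ in places.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Illustrative sketch of the RequestSpec fields described above.
# Not the real Nova object; types are simplified to plain containers.

@dataclass
class RequestSpecSketch:
    image: Dict                       # image requirements (min_disk, hw props)
    numa_topology: Optional[Dict]     # desired NUMA setup
    pci_requests: List[Dict]          # required PCI devices
    project_id: str                   # project owning the instances
    availability_zone: Optional[str]  # target availability zone
    flavor: Dict                      # RAM, virtual CPUs, root disk size
    num_instances: int                # number of instances required
    ignore_hosts: List[str] = field(default_factory=list)
    force_hosts: List[str] = field(default_factory=list)
    force_nodes: List[str] = field(default_factory=list)
    retry: Optional[Dict] = None      # retry bookkeeping
    limits: Optional[Dict] = None     # SchedulerLimits-style overcommit limits
    instance_group: Optional[Dict] = None  # (anti-)affinity group
    scheduler_hints: Dict = field(default_factory=dict)
    instance_uuid: Optional[str] = None
```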
select_destinations on the Scheduler Manager calls
select_destinations on the configured driver (often the
filter_scheduler). The next step is to gather information about
the available hosts and filter them. That's in part three.
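The delegation just described can be sketched minimally as follows. The class names are illustrative stand-ins, not the real Nova classes, and the driver body only honors ignore_hosts as a trivial placeholder for the host gathering and filtering covered in part three.

```python
# Hypothetical sketch of the manager-to-driver delegation: the
# manager's select_destinations hands the request straight to
# whatever driver is configured (the filter scheduler, typically).

class FilterSchedulerDriverSketch:
    def select_destinations(self, request_spec, hosts):
        # Real filtering gathers host information and applies many
        # filters; here we only honor ignore_hosts as a placeholder.
        ignored = set(request_spec.get("ignore_hosts", []))
        return [h for h in hosts if h not in ignored]

class SchedulerManagerSketch:
    def __init__(self, driver):
        self.driver = driver  # chosen via configuration in real Nova

    def select_destinations(self, request_spec, hosts):
        return self.driver.select_destinations(request_spec, hosts)
```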