
Unpacking the Nova Scheduler (part 2 of ?)

This is part two of a series about the server scheduler in OpenStack Nova. See part one.

Warning: This document describes code that is (as of December 2015) in development. Some of it is not yet released. Assuming everything goes well, the full RequestSpec object will be available as part of the Mitaka release.

The Nova scheduler runs as a separate service (nova-scheduler) within the suite of Nova services. That service presents a narrow RPC interface: a few different methods to update information used by the scheduler to make decisions (to be considered in a later post) and a select_destinations method which returns a list of candidate hosts on which to "schedule" or "place" the requested instances.
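
To make the shape of that interface concrete, here is a minimal sketch in Python. It is not the actual Nova code: the class layout, the update_instance_info method, and the signatures are simplified assumptions; only the role of select_destinations mirrors the description above.

    # A simplified, hypothetical sketch of the scheduler's RPC-facing
    # surface; not the real nova.scheduler.manager code.
    class SchedulerManager(object):
        def __init__(self, driver):
            # The configured driver (often the filter scheduler) does
            # the actual selection work.
            self.driver = driver

        def update_instance_info(self, context, host_name, instance_info):
            # Stand-in for the narrow set of methods used to update the
            # information the scheduler uses to make decisions.
            pass

        def select_destinations(self, context, spec_obj):
            # Return a list of candidate hosts on which to place the
            # requested instances, by delegating to the driver.
            return self.driver.select_destinations(context, spec_obj)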

In the modern scheduler, select_destinations is passed a RequestSpec which represents the number of instances a user agent requires and the constraints which must be satisfied. Those constraints can be simple scalar data but are often complex nested objects. The current fields are described below. These are hand-wavey overviews for the sake of brevity. For details follow the link above or see the originating spec. A rough sketch of how the fields might hang together follows the list.

image
A description of the requirements of the image being used as the base for the requested instances. These can be fairly mundane (e.g. min_ram and min_disk required) or rather esoteric (specifications of hardware features provided by the host, see ImageMetaProps for more information).
numa_topology
A description of the desired NUMA setup.
pci_requests
A list of the required PCI devices.
project_id
The project of which these instances will be a part.
availability_zone
The target availability zone (different power supplies, racks and the like).
flavor
An object representing the flavor to be used for the instances. The object contains information such as the amount of RAM, number of virtual CPUs and size of root disk required. In the future this information will most likely be carried within the RequestSpec as individual entries, enhancing the flexibility of the requirements and decoupling flavors so that they are only a part of the external API.
num_instances
The number of instances required.
ignore_hosts
Hosts to ignore; don't put instances on these hosts.
force_hosts and force_nodes
Hosts or nodes where instances must be placed. The difference between a host and a node is subtle and will not be addressed here. When not using Ironic, nodes don't usually come into the picture.
retry
An object describing whether scheduling should be retried upon failure, the number of times fulfillment of the request has been attempted, and the hosts already tried. Once a configurable maximum number of attempts has been reached, the request fails. This information is also used while the instances are being built.
limits
A SchedulerLimits object. I haven't fully figured this out yet, but it has something to do with managing resource limits in relation to the extent to which resources can be overcommitted on a host. These limits are created during the scheduling process and then used when a specific host is asked to build an instance on a node.
instance_group
An object referencing a logical group of instances that share an affinity or anti-affinity policy. This makes it possible to state that instances should or should not be on the same host.
scheduler_hints
A set of key/value pairs which provides a general, but perhaps overly fecund, method of passing information (hints) to various filters.
instance_uuid
The identifier of the instance being used to set aspects of this request spec.
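
As a rough illustration of how these fields hang together, here is the sketch promised above. It uses a plain Python dict rather than the real RequestSpec (which is a Nova versioned object), and every value here is invented for illustration:

    # An invented, dict-shaped approximation of a RequestSpec; the real
    # object is built through Nova's versioned object machinery and these
    # values are made up.
    request_spec = {
        'image': {'min_ram': 512, 'min_disk': 1},   # base image requirements
        'numa_topology': None,                      # no particular NUMA demands
        'pci_requests': [],                         # no PCI devices required
        'project_id': 'c0ffee',                     # owning project
        'availability_zone': 'nova',                # target zone
        'flavor': {'memory_mb': 2048, 'vcpus': 2, 'root_gb': 20},
        'num_instances': 2,
        'ignore_hosts': ['host3'],                  # never place here
        'force_hosts': [],                          # no forced placement
        'force_nodes': [],
        'retry': {'num_attempts': 1, 'hosts': []},  # first attempt, nothing tried
        'limits': {},                               # filled in during scheduling
        'instance_group': None,                     # no (anti-)affinity policy
        'scheduler_hints': {'group': 'mygroup'},    # free-form hints for filters
        'instance_uuid': 'aaaa-bbbb-cccc-dddd',
    }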

select_destinations on the Scheduler Manager calls select_destinations on the configured driver (often the filter_scheduler). The next step is to gather information about the available hosts and filter them. That's in part three.
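
A minimal sketch of that hand-off, under the same simplified assumptions as the earlier manager block; the gathering and filtering bodies are placeholders for what part three will cover:

    # A simplified, hypothetical sketch of the driver side of the
    # hand-off; the real filter_scheduler is considerably more involved.
    class FilterScheduler(object):
        def select_destinations(self, context, spec_obj):
            hosts = self._get_all_host_states(context)   # gather available hosts
            return self._filter_hosts(hosts, spec_obj)   # filter them (part three)

        def _get_all_host_states(self, context):
            # Placeholder: fetch the current view of every candidate host.
            return []

        def _filter_hosts(self, hosts, spec_obj):
            # Placeholder: apply the configured filters against the spec.
            return list(hosts)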
