OpenStack Compute Services
OpenStack Compute & Its Services
To provide compute capacity to cloud users, OpenStack compute nodes should be installed on separate, dedicated servers. A compute node runs the nova-compute service together with a Neutron network agent, providing compute resources along with network services to end users.
OpenStack compute nodes are the servers on which virtual machines are created and run for users.
Compute Service Components:
Compute in OpenStack is made up of multiple services working together to take requests and to launch and manage virtual machines. The following compute services work together in an OpenStack deployment:
Nova-api: This service handles user API calls to manage compute instances. It communicates with the other compute services via the message bus.
Nova-scheduler: This service listens on the message bus for new instance requests and selects the best compute node for each virtual machine.
Nova-compute: This service starts and terminates virtual machines. It runs on the compute nodes and listens for these requests on the message bus.
Nova-conductor: When an instance requires database access, the request goes through nova-conductor, which performs database calls on behalf of the compute nodes.
On an OpenStack compute node, any supported hypervisor can be used to host virtual machines. Nova supports VMware ESXi, QEMU, UML, Xen, Hyper-V, LXC, bare metal, and lately Docker.
Most nova-compute nodes run KVM as the hypervisor, as it is best suited for workloads using libvirt. KVM is the default hypervisor for OpenStack; you can verify this in /etc/nova/nova.conf.
For proper, error-free hypervisor operation, check whether the KVM kernel modules are loaded on the compute node.
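As a quick check (a minimal sketch; the exact module name depends on the CPU vendor), the loaded KVM modules can be listed on the compute node:

```shell
# List loaded KVM kernel modules; expect kvm plus kvm_intel or kvm_amd
lsmod | grep kvm

# If the vendor module is missing, load it (Intel shown here as an example)
sudo modprobe kvm_intel
```

These commands are run on the compute node itself, not through the OpenStack API.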
A Docker container provides an isolated user space to host an application. Containers use the kernel of the host operating system. They provide an encapsulation mechanism that captures the user-space configuration and dependencies of an application. This encapsulated application runtime environment can be packaged into portable images, so an application can be delivered along with its dependencies and configuration as a self-contained image.
With the help of Docker, an enterprise can host or deploy applications in self-sufficient containers, and a single machine can manage far more containers than virtual machines. Docker containers are not a replacement for virtual machines, but they are very fast to start and very helpful for deploying applications.
Docker also saves the state of a container as an image that can be shared through a central image registry, and therefore shared across different cloud environments.
Compute Cloud Segregation:
OpenStack Nova services also help to segregate cloud resources; the available methods are discussed below:
The concept of Availability Zones is used to group servers/compute resources by rack or location so that faults are contained. With Availability Zones configured, end users can choose the Availability Zone in which to launch an instance.
An Availability Zone is configured by editing /etc/nova/nova.conf and updating the default_availability_zone value. Once it is updated, the nova-compute service must be restarted.
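A minimal sketch of the relevant fragment of /etc/nova/nova.conf (the zone name nova-az1 is only an example; pick one meaningful to your deployment):

```ini
[DEFAULT]
# Example availability zone name for this compute node's host aggregate
default_availability_zone = nova-az1
```

After saving the file, restart the nova-compute service (for example, `systemctl restart openstack-nova-compute` on Red Hat-based distributions; the service name varies by distribution).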
When several compute nodes that provide similar features and services are grouped together, we call this a host aggregate. A host aggregate is created by attaching a set of metadata to compute nodes.
For example, if we need VMs with high CPU and RAM, we can group the compute nodes that have high RAM and CPU, and end users can then create VMs on that host aggregate.
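A hypothetical walk-through with the openstack client (the names high-mem-agg, node1, and m1.highspec are placeholders; matching flavor extra specs to aggregate metadata also assumes the AggregateInstanceExtraSpecsFilter is enabled in the scheduler):

```shell
# Create a host aggregate tagged with metadata
openstack aggregate create --property high_spec=true high-mem-agg

# Add a compute node with high RAM/CPU to the aggregate
openstack aggregate add host high-mem-agg node1

# Create a flavor whose extra spec matches the aggregate metadata
openstack flavor create --ram 65536 --vcpus 16 --disk 100 \
    --property aggregate_instance_extra_specs:high_spec=true m1.highspec
```

End users who boot with the m1.highspec flavor then land only on hosts in the high-mem-agg aggregate.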
Each Nova region has its own compute nodes and its own Nova API endpoints. In an OpenStack cloud, the different Nova regions share the same Keystone service for authentication and for advertising the Nova API endpoints. Whenever end users want the Nova service, they have to select the region in which their virtual machines are to be installed.
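For example (the region name RegionTwo and the flavor, image, and network names are deployment-specific placeholders), the target region can be selected with the openstack client:

```shell
# List the regions advertised by the shared Keystone catalog
openstack region list

# Launch an instance against a specific region's Nova endpoints
openstack --os-region-name RegionTwo server create --flavor m1.small \
    --image cirros --network private vm-in-region-two
```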
Segregation of Workloads:
We can understand workload segregation with the following example:
Suppose you want to launch more than one virtual machine and place them on the same compute node, or you want to run your application VMs in a highly available fashion on different nodes. In these cases you can use workload segregation.
To use workload segregation, the Nova filter scheduler must be configured with Affinity filters. Add ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter to the list of scheduler filters:
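A minimal nova.conf fragment for this (the filters surrounding the two ServerGroup filters are illustrative; older releases use scheduler_default_filters in the [DEFAULT] section, while newer releases use enabled_filters under [filter_scheduler]):

```ini
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter
```

Restart the nova-scheduler service after changing the filter list.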
Use the Nova client to create server groups. The server group can be created with an affinity or anti-affinity-based policy as shown here:
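For example (the group names svr-grp1 and svr-grp2 are placeholders), using the legacy nova client:

```shell
# Create a server group with the affinity policy
nova server-group-create svr-grp1 affinity

# Or create one with the anti-affinity policy
nova server-group-create svr-grp2 anti-affinity
```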
The affinity policy places the virtual machines on the same compute node while the anti-affinity policy forces the virtual machines onto different compute nodes.
To start the virtual machines associated with a server group, use the --hint group=svr-grp1-uuid command with the Nova client:
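A sketch of the boot commands (flavor and image names are placeholders; substitute the real group UUID reported by the list command):

```shell
# Look up the server group's UUID
nova server-group-list

# Boot two VMs into the same group; the scheduler honors the group's policy
nova boot --flavor m1.small --image cirros --hint group=<svr-grp1-uuid> vm1
nova boot --flavor m1.small --image cirros --hint group=<svr-grp1-uuid> vm2
```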
This will make sure that the virtual machines, vm1 and vm2, are placed on the same compute node.
Other Storage Options:
There are other storage options available for compute nodes:
External shared File Storage:
When instance disks are kept on external storage rather than locally on the compute nodes, we call this external storage.
It has advantages as well as drawbacks:
- Instances can be recovered if a compute node fails
- The shared external storage can be used for other installation purposes
- Heavy disk I/O from one instance affects the other VMs sharing the storage
- Network latency can cause performance issues
Internal non-shared File Storage
When the disk of each instance is allocated from storage local to the compute node:
- Heavy I/O on one node does not cause performance issues for VMs on other nodes
- Performance improves thanks to fast local disk I/O
- Storage capacity cannot be scaled when additional storage is needed
- Migrating a VM from one compute node to another becomes problematic
Nova Scheduling Process:
Nova scheduling is one of the critical steps in the process of launching the virtual machine. It involves the process of selecting the best candidate compute node to host a virtual machine. The default scheduler used for placing the virtual machine is the filter scheduler that uses a scheme of filtering and weighting to find the right compute node for the virtual machine. The scheduling process consists of going through the following steps:
- The virtual machine flavor describes the kind of resources that must be provided by the hosting compute node.
- All the candidates must pass through a filtering process to make sure they provide adequate physical resources to host the new virtual machine. Any compute node not meeting the resource requirements is filtered out.
- Once the compute nodes pass the filtering process, they go through a process of weighting that ranks the compute nodes according to the resource availability.
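The filter-and-weigh flow above can be sketched in Python (a simplified illustration, not Nova's actual code; the host data, flavor shape, and the free-RAM weigher are assumptions loosely modeled on Nova's default RamFilter and RAMWeigher):

```python
# Simplified sketch of Nova's filter scheduler: filter first, then weigh.

def filter_hosts(hosts, flavor):
    """Keep only hosts with enough free vCPUs and RAM for the flavor."""
    return [h for h in hosts
            if h["free_vcpus"] >= flavor["vcpus"]
            and h["free_ram_mb"] >= flavor["ram_mb"]]

def weigh_hosts(hosts):
    """Rank surviving candidates; here, more free RAM ranks higher."""
    return sorted(hosts, key=lambda h: h["free_ram_mb"], reverse=True)

def schedule(hosts, flavor):
    """Return the name of the best host, or fail like Nova's 'No valid host'."""
    candidates = filter_hosts(hosts, flavor)
    if not candidates:
        raise RuntimeError("No valid host was found")
    return weigh_hosts(candidates)[0]["name"]

hosts = [
    {"name": "node1", "free_vcpus": 4, "free_ram_mb": 8192},
    {"name": "node2", "free_vcpus": 16, "free_ram_mb": 32768},
    {"name": "node3", "free_vcpus": 2, "free_ram_mb": 2048},
]
flavor = {"vcpus": 4, "ram_mb": 4096}
print(schedule(hosts, flavor))  # node2: passes the filters with the most free RAM
```

Here node3 is filtered out (too few vCPUs), and of the remaining hosts node2 wins the weighing because it has the most free RAM.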
The filter scheduler uses a pluggable list of filters and weights to select the best compute node to host an instance. Changing the list of filters or weights changes the scheduler's behavior; this is done by setting the value of scheduler_default_filters.
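For instance (the values shown are illustrative defaults, and option names vary across releases), both the filter list and the RAM weigher can be tuned in nova.conf:

```ini
[DEFAULT]
# Filters applied in order to every candidate compute node
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter

# Positive values spread instances across hosts; negative values stack them
ram_weight_multiplier = 1.0
```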