OpenStack Architecture & Design Considerations

In this module we will learn about the OpenStack architecture and design considerations, so that we can see what the OpenStack services are and how they are designed, and then create our own cloud environment.

OpenStack is an orchestration solution, mostly used in private cloud deployments for medium to large enterprise infrastructure. Today's cloud computing environments provide services such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and with the help of OpenStack we can move from the traditional datacenter model to a next-generation cloud computing model, where administrators and operators can deliver fully automated infrastructure within minutes.

With this OpenStack training labs course you will learn to build a programmable, scalable, and multi-tenant next-generation datacenter, and will be able to provide SaaS, PaaS, and IaaS features to clients.

Course Pedagogy

This OpenStack training labs course will help you learn the following OpenStack concepts:

  • Learn OpenStack clustering and its services
  • Learn OpenStack compute and its services
  • What the OpenStack storage services are
  • Learn about OpenStack networking services
  • How to use OpenStack virtual networks with routers
  • Learn about OpenStack monitoring
  • Create subnets and instances via the GUI and CLI on OpenStack
  • Configure VLAN-based networks for Open vSwitch
  • Configure flat and external networks via OpenStack
  • Configure VXLAN overlay networks via OpenStack
  • Configure an instance with a specific IP address
  • Configure routing on OpenStack
  • Configure security policies and firewall services on OpenStack
  • Configure multiple IP addresses on an instance

OpenStack Architecture:

OpenStack is a combination of various services which work closely together to provide cloud computing infrastructure to the end user. OpenStack has been released in various versions over time, such as Folsom, Grizzly, Juno, Kilo, Mitaka, etc.

We will look one by one at the services OpenStack provides that make it ideal for a private cloud computing environment.

Keystone (Identity Management):

This service provides authentication and authorization to tenants in OpenStack. When different OpenStack services want to communicate with each other, they must be authorized by Keystone, which ensures that the right user or service is using a particular service.

Keystone supports various authentication methods, such as username/password or token-based authentication.

Keystone can also integrate with third-party authentication and authorization systems such as LDAP or PAM (Pluggable Authentication Modules).
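
As a sketch, CLI authentication against Keystone usually means exporting openrc-style credentials and then requesting a token. Every value below (controller hostname, project, user, password) is a placeholder for your own environment:

$ export OS_AUTH_URL=http://controller:5000/v3
$ export OS_IDENTITY_API_VERSION=3
$ export OS_PROJECT_NAME=demo
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_USERNAME=demo
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PASSWORD=secret
$ openstack token issue

If the credentials are valid, `openstack token issue` prints the token Keystone issued, which the CLI then reuses for calls to the other services.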

Swift – Object Storage:

OpenStack users can use the Swift service for storage. It provides object-based storage, and data in the object store can be accessed through a REST API.

The object store splits data into smaller chunks and stores them in separate containers; these containers and their copies are spread across a cluster of storage nodes, which provides HA, auto-recovery, and scalability.
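
As an illustration, a typical Swift interaction from the unified CLI might look like this (the container and file names are made up, and an authenticated environment is assumed):

$ openstack container create my-backups
$ openstack object create my-backups notes.txt
$ openstack object list my-backups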

Cinder – Block Storage:

The Cinder service provides persistent block storage, which is widely used to provide storage to virtual machines. This storage appears to the VM as a hard disk. Cinder provides the following features to OpenStack users:

  • Creating or deleting storage volumes
  • Attaching or detaching volumes to/from VMs
  • Creating or deleting snapshots of volumes
  • Cloning volumes
  • Creating volumes from snapshots
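
The volume lifecycle above maps to CLI calls such as the following (the names and the size are illustrative, and an authenticated environment is assumed):

$ openstack volume create --size 10 data-vol
$ openstack server add volume my-server data-vol
$ openstack volume snapshot create --volume data-vol data-snap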

Manila – File Share:

OpenStack also provides file share features to its users via Manila. It provides storage as remote file systems, much like NFS or Samba (commonly used on Linux machines). It supports multi-access, in which multiple VMs can access the same file system to store data.

Glance – Image Registry:

Whenever a virtual machine is to be launched, it requires an image (Windows, Ubuntu, another Linux distribution, etc.). The Glance service in OpenStack provides a registry of images and the metadata that is used to launch the VM.

Various image formats are supported depending on your hypervisor, for example images for KVM/QEMU, VMware, Xen, etc.

Metadata is information about a virtual machine image, such as the kernel, disk image, and disk format. This information is available to users via the REST API.
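
Registering an image with Glance can be sketched as follows (the file and image names are placeholders, and an authenticated environment is assumed):

$ openstack image create --disk-format qcow2 --container-format bare --file my-image.qcow2 my-image
$ openstack image list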

Nova – Compute Service:

OpenStack provides its compute service via Nova, which manages the virtual machines. End users in OpenStack communicate with nova-api to create instances via the OpenStack API or the EC2 API.

nova-compute is the worker daemon which creates and terminates VM instances via different hypervisors such as XenServer, VMware, etc.

nova-network accepts networking tasks from the queue and implements the networking components (Neutron has since replaced the nova-network service).

nova-scheduler: takes VM instance requests from the queue and schedules them onto hosts based on the workload.
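
To see which Nova daemons are running and on which hosts, you can use the following command (an authenticated admin environment is assumed):

$ openstack compute service list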

Neutron – Networking & Architecture:

Neutron provides network connectivity between OpenStack services. It allows users to create their own networks and connect server interfaces to those networks.

Neutron has three main components:

  • Neutron server: accepts API requests and routes them to the appropriate Neutron plugin for action.
  • Neutron plugins: these do the actual work of orchestrating the backend devices, such as creating/deleting networks, subnetting, IP addressing, etc.
  • Neutron agents: these run on compute or network nodes. They receive commands from the Neutron plugins on the Neutron server and, based on those commands, make network changes on the individual compute or network nodes. There are different types of agents: Layer 2 agents provide Layer 2 connectivity to the nodes, while Layer 3 agents provide routing and NAT services and run only on network nodes.

Neutron provides the following core resources for network connectivity:

  • Ports: virtual ports on a virtual switch where hosts/instances or network services connect to the network.
  • Networks: similar to L2 segments; they can be seen as virtual switches, implemented by Open vSwitch, other virtual switch software, or Linux bridging.
  • Subnets: blocks of IP addresses associated with a network.
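
These three resources map directly to CLI objects; for example (the names and the CIDR are illustrative, and an authenticated environment is assumed):

$ openstack network create demo-net
$ openstack subnet create --network demo-net --subnet-range 192.168.10.0/24 demo-subnet
$ openstack port create --network demo-net demo-port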

Neutron also provides services such as private IP subnet design, external networks, floating IPs for NAT, Load Balancing as a Service (LBaaS), Firewall as a Service (FWaaS), and Virtual Private Network as a Service (VPNaaS).

Ceilometer – Telemetry:

Ceilometer is the metering service in OpenStack; it is used to determine how many resources have been utilized by a user and for how long, and these reports are used to generate bills for customers.

Ceilometer collects data from OpenStack resources such as VMs, disks, routers, and networks for its resource utilization reports.

Horizon Dashboard:

The Horizon dashboard is the web interface provided by OpenStack, integrating all the different services a user can access. When any service is used from Horizon, it sends a request to that service via an API call and displays the result returned by that request.

Message Queue:

The message queue provides a central hub to pass messages between the different components of a service that handle a user request. The message queue buffers requests and provides the communication channel between services.

How OpenStack Works:

Let's understand, in a very simple manner, how OpenStack performs a piece of work from start to end:

Step 1: Authentication is performed by the Keystone service using the user's credentials.

Step 2: Once the user is authenticated, Keystone provides a service catalog, which contains information about the OpenStack services and their endpoints.

To see the OpenStack catalog, use the following command:

$ openstack catalog list

Step 3: Once authenticated, the user can talk to an API endpoint such as the OpenStack API or the EC2 API.

fig 0.1

Step 4: Once the API is called, the instance scheduler comes into the picture; it manages the launching of instances on individual nodes and keeps track of the resources available.

When any VM is launched, different services start working together. The following is a brief workflow, based on API calls, in OpenStack:

  • Call the Identity service for authentication
  • Generate tokens which will be used for subsequent API calls
  • Contact the Image service to retrieve the base image
  • Request compute resources via the Compute API
  • The Compute service determines security groups and keys
  • The Network service determines the available networks
  • The compute scheduler chooses a hypervisor node
  • The Block Storage service API allocates a volume to the instance
  • The Network service API allocates network resources to the instance
  • The instance is launched and its services start running
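
All of the API calls above are triggered by a single boot request; for example (the flavor, image, and network names are placeholders, and an authenticated environment is assumed):

$ openstack server create --flavor m1.small --image my-image --network demo-net demo-server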

OpenStack deployment design method:

There are three phases, followed one after another, which help us deploy OpenStack in any environment.

Conceptual Model Design:

This model determines which of the services OpenStack can provide will be required for the cloud or enterprise:

Image, storage, compute, network, telemetry, identity, dashboard, orchestration.

Logical Model Design:

In this model we determine how many nodes are required to run the OpenStack services.

In a mid-size or large enterprise, we use three types of nodes: OpenStack controller nodes, network nodes, and compute nodes; their size and number can vary to provide HA.

In this model we also determine which networks are required for the OpenStack services, including tunneled networks such as GRE, VXLAN, etc.

We also decide which OpenStack service will provide network services: Nova or Neutron?

The network types below are also determined for a proper network design:

  • Tenant Network Design
  • Management and API network Design
  • Storage Network Design
  • External Network Design

Estimating hardware for OpenStack deployment:

For any OpenStack deployment, we now calculate the size of the physical resources required.

For this estimation exercise we make the following assumptions:

CPU calculation:

  • 100 virtual machines
  • No CPU oversubscription
  • GHz per physical core = 2.6 GHz
  • Physical core hyper-threading support = use factor 2
  • GHz per VM (average compute units) = 2 GHz
  • GHz per VM (max compute units) = 16 GHz
  • Cores per Intel Xeon E5-2648L v2 CPU = 10
  • CPU sockets per server = 2

The formula for calculating the total number of CPU cores is as follows:

(number of VMs x number of GHz per VM) / number of GHz per core: (100 * 2) / 2.6 = 76.92

Rounding up, we need 77 CPU cores for 100 VMs.

The formula for calculating the number of CPU sockets is as follows:

Total number of cores / number of cores per socket: 77 / 10 = 7.7

Rounding up, we will need 8 sockets.

The formula for calculating the number of servers is as follows:

Total number of sockets / number of sockets per server: 8 / 2 = 4

You will need 4 dual-socket servers.

The number of virtual machines per server with 4 dual-socket servers is calculated as follows:

Number of virtual machines / number of servers: 100 / 4 = 25

We can deploy 25 virtual machines per server.
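
The four calculations above can be scripted as a quick sanity check. This is just a sketch of the arithmetic, using the assumptions listed in this section:

```shell
# CPU sizing assumptions from this section
VMS=100
GHZ_PER_VM=2
GHZ_PER_CORE=2.6
CORES_PER_SOCKET=10
SOCKETS_PER_SERVER=2

# Total cores = (VMs * GHz per VM) / GHz per core, rounded up
CORES=$(awk -v v="$VMS" -v g="$GHZ_PER_VM" -v c="$GHZ_PER_CORE" \
    'BEGIN { n = (v * g) / c; r = int(n); if (n > r) r++; print r }')

# Sockets = cores / cores per socket, rounded up
SOCKETS=$(( (CORES + CORES_PER_SOCKET - 1) / CORES_PER_SOCKET ))

# Servers = sockets / sockets per server, rounded up
SERVERS=$(( (SOCKETS + SOCKETS_PER_SERVER - 1) / SOCKETS_PER_SERVER ))

# VMs hosted on each server
VMS_PER_SERVER=$(( VMS / SERVERS ))

echo "cores=$CORES sockets=$SOCKETS servers=$SERVERS vms_per_server=$VMS_PER_SERVER"
```

Running it prints cores=77 sockets=8 servers=4 vms_per_server=25, matching the figures above.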

Memory calculations:

Based on the previous example, 25 VMs can be deployed per compute node. Memory sizing is also important to avoid making unreasonable resource allocations.

Let's make an assumption list (keep in mind that it always depends on your budget and needs):

  • 2 GB RAM per VM
  • 8 GB RAM maximum dynamic allocations per VM
  • Compute nodes supporting slots of: 2, 4, 8, and 16 GB sticks

RAM needed per compute node: 8 * 25 = 200 GB

Considering the stick sizes supported by your server, you will need around 256 GB installed. The total number of RAM sticks installed can then be calculated in the following way:

Total available RAM / maximum RAM stick size: 256 / 16 = 16
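
The same memory arithmetic as a sketch, under the assumptions above:

```shell
# Memory sizing assumptions from this section
MAX_GB_PER_VM=8        # maximum dynamic allocation per VM
VMS_PER_NODE=25
STICK_GB=16            # largest supported RAM stick
INSTALLED_GB=256       # rounded up from the 200 GB requirement

RAM_NEEDED=$(( MAX_GB_PER_VM * VMS_PER_NODE ))
STICKS=$(( INSTALLED_GB / STICK_GB ))

echo "ram_needed=${RAM_NEEDED}GB installed=${INSTALLED_GB}GB sticks=${STICKS}"
```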

Network calculations:

To fulfill the plans that were drawn for reference, let's have a look at our assumptions:

  • 200 Mbit/s needed per VM
  • Minimal network latency

To meet this, we can serve our VMs using a 10 Gbit link on each server, which gives:

10,000 Mbit/s / 25 VMs = 400 Mbit/s per VM

Storage Calculation:

Considering the previous example, you need to plan for an initial storage capacity per server that will serve 25 VMs each.

A simple calculation, assuming 100 GB ephemeral storage per VM, will require a space of 25*100 = 2.5 TB of local storage on each compute node.

You can assign 250 GB of persistent storage per VM to have 25 * 250 = 6250 GB (about 6.25 TB) of persistent storage per compute node.
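
Finally, the network and storage figures, again as a sketch of the arithmetic in this section:

```shell
# Network and storage sizing assumptions from this section
LINK_MBITS=10000       # one 10 Gbit link per server
VMS_PER_NODE=25
EPHEMERAL_GB=100       # ephemeral storage per VM
PERSISTENT_GB=250      # persistent storage per VM

MBITS_PER_VM=$(( LINK_MBITS / VMS_PER_NODE ))
EPHEMERAL_TOTAL_GB=$(( EPHEMERAL_GB * VMS_PER_NODE ))
PERSISTENT_TOTAL_GB=$(( PERSISTENT_GB * VMS_PER_NODE ))

echo "per_vm=${MBITS_PER_VM}Mbit/s ephemeral=${EPHEMERAL_TOTAL_GB}GB persistent=${PERSISTENT_TOTAL_GB}GB"
```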

Note (refer before purchase):

  • We don't offer any hands-on labs for practice in this course.
  • The labs discussed here contain different scenarios, tasks, and their recorded solutions.
  • The content of each page is 30-40% visible, so customers can verify the content.
  • Before any purchase, verify the content and then proceed. VLT is in progress. No refund policy.
  • For more detail, see the Mail, FAQ & TC pages.

