OpenStack Storage Services

Posted on Jan 13, 2020

OpenStack Block, Object, and File Share Storage

Storage Options:

There are two types of storage options generally available to provide storage to compute nodes and, further, to instances:

  • Ephemeral Storage
  • Persistent Storage

Ephemeral Storage:

This is non-persistent storage: as soon as a VM is terminated, its associated disk is lost. When a VM is booted, the Glance image is downloaded to the compute node and used as the first disk of the Nova instance, which provides the ephemeral storage. As the name suggests, anything stored on it is lost as soon as the VM is terminated.
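As a minimal sketch, booting an instance from a Glance image and then deleting it discards the ephemeral disk along with it. The image and flavor names below are only placeholders:

# openstack server create --image cirros --flavor m1.tiny ephemeral-vm
# openstack server delete ephemeral-vm

After the delete, the disk created from the image on the compute node is gone, together with any data written to it.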

Persistent Storage:

With this type of storage, the data remains available even when the VM is powered off. Persistent storage is divided into three options:

  • Object Storage
  • File Storage
  • Block Storage

What is object storage?

In object storage, data is stored in the form of objects accessed via a RESTful HTTP API. With this type of storage, data is not lost even when nodes fail, and the storage can be scaled out almost without limit.

  • In object storage, data is stored as binary large objects (blobs) with multiple replicas on the object storage servers.
  • Object storage is accessed using an HTTP-based API such as REST or SOAP (see the sketch after this list); the data cannot be accessed through file protocols such as NFS, SMB, or CIFS.
  • This type of storage is not suitable for high-performance requirements or for systems, such as databases, that change data frequently.
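As a rough sketch of this REST-style access, the following curl calls create a container, upload an object, and download it again. The endpoint URL, the AUTH_test account, and the $TOKEN variable are assumptions; in a real deployment they come from the Keystone catalog and an issued token:

# curl -i -X PUT -H "X-Auth-Token: $TOKEN" https://swift.example.com/v1/AUTH_test/container1
# curl -i -H "X-Auth-Token: $TOKEN" -T report.pdf https://swift.example.com/v1/AUTH_test/container1/report.pdf
# curl -s -H "X-Auth-Token: $TOKEN" -o report.pdf https://swift.example.com/v1/AUTH_test/container1/report.pdf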

Swift: Object Storage

Swift is the object storage service provided by OpenStack. OpenStack itself began as a joint effort of NASA and Rackspace, and Swift originated from Rackspace's Cloud Files platform.

The following are the benefits of using Swift as an object storage solution:

  • Highly Scalable
  • On-Demand storage solution
  • Elastic in nature as storage can be increased or decreased

Swift Architecture:

The Swift architecture is distributed in nature, which prevents any single point of failure and allows it to scale horizontally. The main Swift components are discussed below:

  • Swift proxy server: This server accepts incoming requests, either via plain HTTP or via the OpenStack Object Storage API, including requests for object uploads, metadata updates, and container creation. The proxy server is usually deployed alongside memcached, which caches lookups to improve performance.
  • Account server: This server manages the accounts defined in the object storage service. It also maintains the list of containers associated with each account.
  • Container server: A container is a user-defined storage area within a Swift account. The container server maintains the list of objects stored in a particular container; containers are similar to folders on Windows systems.
  • Object server: This server manages the objects within containers. It defines where and how the actual data and its metadata are stored; every object belongs to a container.

In the Swift object storage solution, metadata is used whenever we need to search, retrieve, and index data on an object storage device. This metadata is stored with the object itself as key/value pairs, as shown in the example below.
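For example, custom metadata can be attached to an object and read back with the python-swiftclient command line; the container, object, and key/value names here are purely illustrative:

# swift post --meta Department:Finance container1 report.pdf
# swift stat container1 report.pdf

The stat output then lists the custom entry (Meta Department) alongside the object's standard metadata.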

How Swift is physically designed:

By default, a Swift storage cluster is designed with a replica count of three. Once data is written, it is spread across two other redundant replicas.

Swift isolates failures; to achieve this, it defines a hierarchy that abstracts the logical organization of data from the physical one.

Region: Swift can be deployed as a geographically distributed environment, so data can be sent to nodes placed in different regions. To support reads and writes across regions, Swift favors data that is closer to the client for reads, while writes are committed locally first and then transferred to the remaining regions.

Zones: Under regions come zones, which define the availability levels that Swift provides. A zone is created by grouping storage resources, such as a rack or a storage node.

Storage nodes: Storage servers are referred to as storage nodes; a group of storage nodes forms the cluster that runs the Swift processes and stores the account, container, and object data along with the associated metadata.

Storage device: This is the smallest unit of storage in the Swift data stack. It can be a disk internal to a storage node or a collection of disks in a drive enclosure connected over an external link.
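The region and zone of each device are recorded when the device is added to a ring. A hypothetical example, with made-up IP addresses, port, and device names, looks like this:

# swift-ring-builder object.builder add r1z1-172.16.10.11:6200/sdb1 100
# swift-ring-builder object.builder add r2z1-172.16.20.11:6200/sdb1 100

Here r1 and r2 are regions, z1 is the zone within each region, and the trailing 100 is the device weight.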

Swift Ring:

In Swift, the logical layout of object data is mapped to a path based on the account, container, and object hierarchy. In the context of OpenStack, the account maps to the tenant. Each tenant can have multiple containers, which are like folders in a filesystem, and an object belongs to a container just as a file belongs to a folder in conventional filesystem-based storage.

The Swift ring maps this logical layout of data (account, container, and object) to a physical location on the cluster. Swift maintains one ring per storage construct; that is, separate rings are maintained for accounts, containers, and objects. The Swift proxy consults the appropriate ring to determine the location of a storage construct; for example, to determine the location of a container, the Swift proxy looks up the container ring.

The rings are built using an external tool called swift-ring-builder. The ring builder tool takes an inventory of the Swift storage devices and divides them into slots called partitions.

A frequently asked question is how many partitions the ring should have. It is recommended that each storage drive be divided into 100 partitions. For example, using 100 disks for the object storage cluster requires 10,000 partitions.

The following is the generic format of the ring-builder command:

# swift-ring-builder <builder_file> create <part_power> <replicas> <min_part_hours>

The <builder_file> can be one of account.builder, container.builder, or object.builder. The number of partitions is approximated to the closest power of 2 to get the part power of the cluster. If we have, for example, 50 disks with 100 partitions each, we approximate the part power to be 13, which gives 8192 partitions. It is recommended that the approximation be rounded to the higher side.
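A quick way to approximate the part power on the command line is the following one-liner, which rounds up in line with the recommendation above (for the 50-disk example it prints 13 and 8192):

# awk 'BEGIN { n = 50 * 100; p = int(log(n)/log(2)) + 1; print p, 2^p }'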

It is recommended to have 3 replicas of each partition in the cluster. The <min_part_hours> value determines the minimum time, in hours, during which only one replica of a partition can be moved. The following is an example of building the account ring:

# swift-ring-builder account.builder create 13 3 1

Once the ring is created, the storage devices must be added to it and a rebalance initiated using the following Swift command-line tools:

# swift-ring-builder <builder_file> add z<zone>-<ip>:<port>/<device> <weight>
# swift-ring-builder <builder_file> rebalance
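A concrete, hypothetical build of the account ring across three zones could look like the following; the IP addresses, port, and device names are placeholders:

# swift-ring-builder account.builder add r1z1-172.16.10.11:6202/sdb1 100
# swift-ring-builder account.builder add r1z2-172.16.10.12:6202/sdb1 100
# swift-ring-builder account.builder add r1z3-172.16.10.13:6202/sdb1 100
# swift-ring-builder account.builder rebalance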

The data stored in Swift is mapped into these partitions. The full path of the data itself determines the partition to which the data belongs. This is done by determining the MD5 hash of the object path as follows:

md5("/account/container/object")

Only a certain part of this hash is used as an index to place the object into a partition. Swift maintains replicas of each partition and disperses them into different zones.
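The basic idea can be reproduced with md5sum, although a real Swift cluster also mixes a cluster-wide hash path prefix/suffix from swift.conf into the hash, so the value below is only illustrative:

# echo -n "/AUTH_test/container1/report.pdf" | md5sum

Swift then uses the top part-power bits of the resulting hash as the partition index.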

How to design Swift Hardware:

To design the Swift hardware, we take the following sample requirements as a baseline:

  • 100 TB of object storage
  • A cluster replica count of 3
  • XFS as the Swift filesystem
  • Hard drives of 5 TB each
  • 50 hard drive slots per chassis

Assuming a cluster of 3 replicas, the total storage capacity can be calculated in the following way:

100 * 3 replicas = 300 TB

Accounting for filesystem metadata overhead (a factor of roughly 1.0526), the total raw storage capacity can be calculated as follows:

300 * 1.0526 = 315.78, rounded to about 315 TB

To determine the number of hard drives that are required:

[315 / 5]  =  63 drives

The total number of storage nodes required is given by the following calculation:

63 / 50 = 1.26, rounded up to 2 nodes

Swift Network: Each Swift cluster has a network design associated with it, which is typically split into the following networks:

Front cluster network: This network provides communication between the proxy servers and the external clients generating requests, and carries the traffic for external API access to the cluster.

Storage cluster network: This network allows communication between the storage nodes and the proxy servers. It also carries inter-node communication across several racks in the same region.

Replication Network: This network is used for replication of data between storage nodes.

Block Storage Service: Cinder

Cinder provides persistent storage to virtual machines in OpenStack. A block of storage is exposed to an instance, which can then store data on it permanently. To the instance it looks just like a local C: or D: drive does on a Windows machine. A block device attached to an instance should be partitioned, formatted with a suitable filesystem, and then mounted inside the VM.

Cinder supports the following storage protocols: iSCSI, NFS, and FC. Cinder can also enforce quotas on each tenant's usage. You can see the default quotas with the following command:

# cinder quota-defaults <tenant-name>

You can also adjust the quotas for a tenant with the following commands:

# cinder quota-update --volumes 50 Tenant-1
# cinder quota-update --gigabytes 1000 Tenant-1
# cinder quota-update --snapshots 50 Tenant-1
# cinder quota-show Tenant-1

The Cinder service uses the following components:

  • Cinder API Server
  • Cinder Scheduler
  • Cinder Volume Server

The Cinder API server communicates with the outside world via a REST interface and receives requests for managing volumes. Cinder volume servers are the nodes that host the volumes.

The scheduler chooses the volume server that will host a new volume when it receives a request from the API server on behalf of an end user.

Attaching a Cinder volume to a Nova instance can be done from the command line, as shown in the following example.

Create a Cinder volume by specifying the volume name and its size:

# cinder create --display_name volume1 1

  • The default volume driver in Cinder is LVM over iSCSI. The volume create command creates a logical volume (LV) in the volume group (VG) cinder-volumes.
  • Next, use the volume-attach command to attach the Cinder volume to a Nova instance. The volume-attach command must be provided with the Nova instance id, the Cinder volume id, and the device name that will be presented inside the virtual machine:

# nova volume-attach Server_ID Volume_ID Device_Name

This command creates an iSCSI Qualified Name (IQN) to present the LV created in the last step to the compute node running the Nova instance.

  • The last step is to make the volume available to the Nova instance itself. This is achieved through the libvirt library, which presents the iSCSI drive as an extra block device to the virtual machine; a guest-side example follows below.
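Inside the guest, the attached volume shows up as an extra block device that can be formatted and mounted. The device name (/dev/vdb here) depends on what was passed to volume-attach and on the hypervisor:

# lsblk
# mkfs.ext4 /dev/vdb
# mount /dev/vdb /mnt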

Share Storage Service: Manila

File share storage is provided by the Manila project in OpenStack. This type of service also provides persistent storage for OpenStack tenants. With this storage, multiple users can access the same file share at the same time. It can be seen as a NAS solution that presents file shares to client systems. File sharing protocols such as NFS and CIFS are supported by the Manila project.

The following are the components of Manila:

  • Manila API server: Exposes a REST interface and is responsible for handling client requests for creating and managing file shares.
  • Manila scheduler: Selects the right server to host a new file share.
  • Manila data service: Takes care of share migration and backup.
  • Manila share server: Hosts the new storage shares requested by any OpenStack tenant.

Getting started with Manila requires, in the first place, the creation of a default share type:

# manila type-create default_share_type True

When the driver is configured with driver_handles_share_servers, the network ID and subnet ID of the Neutron network that will be attached to the share server must be provided. These can be found using the neutron net-show command:

# manila share-network-create \
  --name storage_net1 \
  --neutron-net-id <net-id> \
  --neutron-subnet-id <subnet-id>

Next, create a share using the following command:

# manila create NFS 1 --name share1 --share-network storage_net1
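Before granting access, it is worth confirming that the share has reached the available status:

# manila show share1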

Finally, create access rules for the share:

# manila access-allow share1 ip 0.0.0.0/0 --access-level rw

Now the file share can be accessed from the virtual machine over NFS. To get the NFS mount source, run the following command:

# manila share-export-location-list share1
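This prints the NFS export path, which can then be mounted from inside the guest; the mount point and export location below are placeholders:

# mkdir -p /mnt/share1
# mount -t nfs <export_location> /mnt/share1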

 
