Which types of storage does OpenStack provide?

Introduction

There are two types of storage:

Ephemeral Storage - By default, users do not have access to any form of persistent storage. The disks associated with VMs are ephemeral, disappearing when the VM is terminated.

Persistent Storage - persistent means that the storage resource is always available, regardless of the state of any running instance.

OpenStack supports three types of persistent storage:

  1. Object Storage (Swift project)
  2. Block Storage (Cinder project)
  3. File Storage (Manila project)

Object Storage

Object storage is implemented in OpenStack by the Object Storage service (Swift). Users access binary objects through a REST API. If your intended users need to archive or manage large datasets, you should provide them with the Object Storage service.
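Under the hood, every object is addressed by a fixed URL scheme. A minimal sketch of how such a request is formed (the endpoint, project, container, and object names below are hypothetical):

```shell
# Hypothetical values -- substitute your own Swift endpoint, project and names.
SWIFT_ENDPOINT="https://swift.example.org:8080/v1/AUTH_myproject"
CONTAINER="backups"
OBJECT="archive-2018.tar.gz"

# Swift addresses every object as <endpoint>/<container>/<object>.
OBJECT_URL="${SWIFT_ENDPOINT}/${CONTAINER}/${OBJECT}"
echo "$OBJECT_URL"

# Upload and download are then plain HTTP requests, e.g. with curl:
#   curl -X PUT -T archive-2018.tar.gz -H "X-Auth-Token: $TOKEN" "$OBJECT_URL"
#   curl -o archive-2018.tar.gz -H "X-Auth-Token: $TOKEN" "$OBJECT_URL"
```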

Block Storage

The Block Storage service supports multiple back ends in the form of drivers. Most drivers allow the instance direct access to the underlying storage hardware's block device, which helps increase overall read/write I/O.

Because these volumes are persistent, they can be detached from one instance and re-attached to another instance and the data remains intact.

Create a Volume (persistent storage)

Howto

$ openstack volume create --help

Ex:

$ openstack volume create --size 100 \
  --description "Test 100Gb" my_100_vol

$ openstack volume show my_100_vol -c id

+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | ce56c6ba-f87a-4cfa-9b1d-b77b782022dd |
+-------+--------------------------------------+

Attaching the volume to a running instance

  1. Select a volume ID with status "available"
  2. Select a running instance
  3. Attach the volume
$ openstack volume list -c Name -c Status -c Size | grep available
| test_               | available |   100 |
| vol_10              | available |     8 |
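When scripting the attach step, the ASCII table is awkward to parse; the `openstack` client's `-f value` formatter emits bare columns instead. A sketch of picking the first available volume (the IDs below are illustrative; in practice, capture the real command's output):

```shell
# Sample of: openstack volume list -f value -c ID -c Name -c Status
volumes="ce56c6ba-f87a-4cfa-9b1d-b77b782022dd test_ available
eec10c75-689c-433c-8762-d67208184d55 vol_10 in-use"

# Keep the ID of the first volume whose status is "available"
vol_id=$(printf '%s\n' "$volumes" | awk '$3 == "available" {print $1; exit}')
echo "$vol_id"
```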


$ openstack server list -c Name -c Status -c Networks --user xxxxxx
+--------------+--------+---------------------------------------+
| Name         | Status | Networks                              |
+--------------+--------+---------------------------------------+
| my_test2     | ACTIVE | demo-net=192.168.1.3                  |
+--------------+--------+---------------------------------------+
$ openstack server add volume  --device /dev/vdb \
7a5d8a54-150d-4b3e-a815-7405e9c76d59 eec10c75-689c-433c-8762-d67208184d55

$ openstack server show my_test2 -c 'volumes_attached'

+------------------+-------------------------------------------+
| Field            | Value                                     |
+------------------+-------------------------------------------+
| volumes_attached | id='eec10c75-689c-433c-8762-d67208184d55' |
+------------------+-------------------------------------------+

Create an instance with an attached volume (persistent storage)

Get a volume ID

$ openstack volume list -c ID -c Name | grep my_100_vol

| ce56c6ba-f87a-4cfa-9b1d-b77b782022dd | my_100_vol |
$ openstack server create --flavor m1.medium \
        --image Fedora-27 --nic net-id=$OS_NET \
        --block-device-mapping vdb=ce56c6ba-f87a-4cfa-9b1d-b77b782022dd \
        --security-group default --key-name jhondoe-key \
        test_my_100

+-----------------------------+-----------------------+
| Field                       | Value                 |
+-----------------------------+-----------------------+
| OS-DCF:diskConfig           | MANUAL                | 
| OS-EXT-AZ:availability_zone |                       |
| OS-EXT-STS:power_state      | NOSTATE               |
| OS-EXT-STS:task_state       | scheduling            |
| OS-EXT-STS:vm_state         | building              |
| OS-SRV-USG:launched_at      | None                  |
| OS-SRV-USG:terminated_at    | None                  |
| accessIPv4                  |                       |
| accessIPv6                  |                       |
| addresses                   |                       |
| adminPass                   | vJh9X4PKtam4          |
| config_drive                |                       |
| created                     | 2018-04-07T17:29:17Z  |
| flavor                      | m1.medium (3)         |
| hostId                      |                       |
| name                        | test_my_100           |
| progress                    | 0                     |
| volumes_attached            |                       |
+-----------------------------+-----------------------+
$ openstack volume show my_100_vol -c status
+--------+--------+
| Field  | Value  |
+--------+--------+
| status | in-use |
+--------+--------+

Creating a file system on a volume

Inside the newly created VM (not on your workstation!):

    $ sudo su -
    root> fdisk /dev/vdb
    ...
    ...
    ...
    ...
    Created a new partition 1 of type 'Linux' and of size 100 GiB.
    ...
    /dev/vdb1        2048 209715199 209713152  100G 83 Linux
    ...
    Writing superblocks and filesystem accounting information: done

Note: For a very large volume you must use "parted" instead of "fdisk".
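The reason is that fdisk's default MBR (msdos) label stores partition sizes as 32-bit sector counts, which with 512-byte sectors caps a partition at 2 TiB; parted can create a GPT label that goes far beyond that. A quick arithmetic check of the limit:

```shell
# MBR stores partition offsets and sizes as 32-bit sector counts.
sectors=$(( 2**32 ))               # maximum addressable sectors
bytes=$(( sectors * 512 ))         # with 512-byte sectors
echo "$(( bytes / 1024**4 )) TiB"  # the MBR partition size limit
```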

Create a file system

$ mkfs.xfs /dev/vdb1

Mount the new file system

$ mkdir /mnt/vol1

$ mount /dev/vdb1 /mnt/vol1
Ex:
    $ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    devtmpfs        2.0G     0  2.0G   0% /dev
    tmpfs           2.0G     0  2.0G   0% /dev/shm
    tmpfs           2.0G  496K  2.0G   1% /run
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    /dev/vda1        40G  791M   37G   3% /
    tmpfs           396M     0  396M   0% /run/user/1000
    /dev/vdb1        98G   61M   93G   1% /mnt/vol1

If you recreate the VM, you only need to:

  1. create the mount directory
  2. mount the volume
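Alternatively, an /etc/fstab entry inside the VM makes the mount persist across reboots. A sketch, assuming the UUID shown is hypothetical (read the real one with blkid):

```shell
# Find the file system's UUID (hypothetical value shown)
$ sudo blkid /dev/vdb1
/dev/vdb1: UUID="0a3b8e71-1111-2222-3333-444455556666" TYPE="xfs"

# /etc/fstab entry ("nofail" keeps boot from hanging if the volume is absent)
UUID=0a3b8e71-1111-2222-3333-444455556666  /mnt/vol1  xfs  defaults,nofail  0  2
```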

Resize a file system

To resize a file system you must, in order:

  1. Resize the Cinder volume
  2. Resize the partition containing the file system
  3. Resize the file system

Resize the Cinder Volume

Unmount the file system

$ sudo umount /mnt
$ exit

Detach the volume

$ openstack server remove volume 83232561-0be8-4b33-8a14-3b5bd834ee87 b811b277-8918-4e66-8e7c-1ee976d49a63

$ openstack volume list

+--------------------------------------+------------+-----------+------+
| ID                                   | Name       | Status    | Size |
+--------------------------------------+------------+-----------+------+
| 39f400b5-e224-427e-b86a-64ea1d29c21f | to_delete3 | available |   20 |
| c1395e36-ccb0-4e5b-ae50-877ac1a7c90c | to_delete2 | available |  500 |
| 2f111391-8352-4799-aa6b-ffe96f8dc02b | to_delete  | available |  500 |
| 6d6e65b7-3148-42f7-94ae-48051632641f | vol_jcc_03 | available |  500 |
| 60b46b5c-668a-42fd-9ba4-29ee13d2d20f | vol_jcc_02 | available |  500 |
| a4c79fc5-8042-4517-b9d4-3fc1e92c22f5 | vol_jcc_00 | available |  500 |
| b811b277-8918-4e66-8e7c-1ee976d49a63 | 50_vol_jcc | available |   50 |
| cacb3aa3-90fe-4c34-b5ce-98d1e89da449 | 10_vol_jcc | available |   10 |
+--------------------------------------+------------+-----------+------+

Resize the volume

$ openstack volume set b811b277-8918-4e66-8e7c-1ee976d49a63 --size 100

$ openstack volume list

+--------------------------------------+------------+-----------+------+
| ID                                   | Name       | Status    | Size |
+--------------------------------------+------------+-----------+------+
| 39f400b5-e224-427e-b86a-64ea1d29c21f | to_delete3 | available |   20 |
| c1395e36-ccb0-4e5b-ae50-877ac1a7c90c | to_delete2 | available |  500 |
| 2f111391-8352-4799-aa6b-ffe96f8dc02b | to_delete  | available |  500 |
| 6d6e65b7-3148-42f7-94ae-48051632641f | vol_jcc_03 | available |  500 |
| 60b46b5c-668a-42fd-9ba4-29ee13d2d20f | vol_jcc_02 | available |  500 |
| a4c79fc5-8042-4517-b9d4-3fc1e92c22f5 | vol_jcc_00 | available |  500 |
| b811b277-8918-4e66-8e7c-1ee976d49a63 | 50_vol_jcc | available |  100 |
| cacb3aa3-90fe-4c34-b5ce-98d1e89da449 | 10_vol_jcc | available |   10 |
+--------------------------------------+------------+-----------+------+

Attach the volume

$ openstack server add volume  --device /dev/vdb \
83232561-0be8-4b33-8a14-3b5bd834ee87 b811b277-8918-4e66-8e7c-1ee976d49a63

$ ssh -i ~/.ssh/os_jhondoe.pem centos@134.158.21.40

$ sudo su -

$ mount /dev/vdb1 /mnt

Resize the partition containing the file system

$ df -h | grep mnt
/dev/vdb1    50G   11G  40G  22% /mnt

$ umount /mnt

$ parted /dev/vdb

GNU Parted 3.1
Using /dev/vdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Error: The backup GPT table is not at the end of the disk, as it should be.  This might mean that another operating system believes
the disk is smaller.  Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? Fix                                                    
Warning: Not all of the space available to /dev/vdb appears to be used, you can fix the GPT to use all of the space (an extra
104857600 blocks) or continue with the current setting? 
Fix/Ignore? Fix                                                           
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
1      1049kB  53.7GB  53.7GB  xfs          primary

(parted) p                                                                
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
1      1049kB  53.7GB  53.7GB  xfs          primary

(parted) resizepart 1 100%
(parted) p                                                                
  Model: Virtio Block Device (virtblk)
  Disk /dev/vdb: 107GB
  Sector size (logical/physical): 512B/512B
  Partition Table: gpt
  Disk Flags: 

Number  Start   End    Size   File system  Name     Flags
1      1049kB  107GB  107GB  xfs          primary

(parted) q

Resize the file system

$ sudo mount /dev/vdb1 /mnt
$ df -h | grep mnt
/dev/vdb1        50G   11G   40G  22% /mnt
$ sudo xfs_growfs /dev/vdb1
    ...
    ...
    data blocks changed from 13106688 to 26214139

$ df -h | grep mnt 
/dev/vdb1       100G   11G   90G  11% /mnt

File-based storage

The Shared File Systems service (Manila) provides persistent storage that can be mounted by any number of client machines. A share can also be detached from one instance and attached to another without data loss; the data remain safe unless the share itself is changed or removed.

A share is an allocation of a persistent, readable, and writable file system that compute instances, or clients outside OpenStack, can access.

Using shared file storage

You request a share and provide one or more IP addresses that are allowed to mount it.

$ manila create NFS 4 --name test5_jcc

Allow read-only access to the share for a given IP address or range.

$ manila access-allow test5_jcc ip 134.158.21.33 --access-level ro

This gives you the shared mount point (export path).

$ manila show test5_jcc | grep path
path = 134.158.20.112:/var/lib/manila/mnt/share-120a99ae-d7cb-4e35-b232-63aafc7f1821

After that you can mount it on one or more VMs.

$ sudo mkdir -p /mnt

$ sudo mount -vt nfs4 134.158.20.112:/var/lib/manila/mnt/share-120a99ae-d7cb-4e35-b232-63aafc7f1821 /mnt
mount.nfs4: timeout set for Tue Jun 30 09:34:59 2020
mount.nfs4: trying text-based options 'vers=4.1,addr=134.158.20.112,clientaddr=192.168.2.24'

$ mount | grep mnt
134.158.20.112:/var/lib/manila/mnt/share-120a99ae-d7cb-4e35-b232-63aafc7f1821 on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.24,local_lock=none,addr=134.158.20.112)

Check your share (it was exported read-only):

$ sudo ls > /mnt/toto
-bash: /mnt/toto: Read-only file system

$ sudo umount  /mnt

It is possible to mount the share on an external host. The OpenStack administrator must be notified so they can prepare the right ACL set giving that host access to the shared zone.

More storage concepts

See the storage design concepts section of the OpenStack documentation.