Let’s play with Cinder and RBD (part 2)

The idea is to help you get familiar with what Cinder can do, how RBD makes it happen, and what it looks like on the backend.

  1. Create a volume
  2. Create a snapshot
  3. Create a volume from a snapshot
  4. Create a volume from another volume

Create volume

Let’s create a logical volume:

$ cinder create <size> --name volume1 --description cinder-volume

+--------------------------------+------------------------------------------------+
| Property | Value |
+--------------------------------+------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-07-29T00:21:49.000000 |
| description | None |
| encrypted | False |
| id | 1b681f9f-81f6-4965-ad89-28ffb10c1ede |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | volume1 |
| os-vol-host-attr:host | vagrant-ubuntu-trusty-64.localdomain@ceph#ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 80463e7d9d8847169acd70b156ac3b61 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2016-07-29T00:21:52.000000 |
| user_id | 4180c9a6469b480cbbf0c5e79dc478fb |
| volume_type | ceph |
+--------------------------------+------------------------------------------------+

Positional arguments:

  • <size> : Size of volume, in GiBs. (Required unless snapshot-id/source-volid is specified).
  • Check cinder help create for more options; a concrete example follows below.
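
For example, to create the 1 GiB volume shown above and then check its status (standard cinder CLI commands; the exact output depends on your deployment):

$ cinder create 1 --name volume1 --description cinder-volume
$ cinder show volume1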

Let’s verify with sudo rbd ls volumes to see what rbd has:

vagrant@vagrant-ubuntu-trusty-64:~/devstack$ sudo rbd ls volumes
bar
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede

bar is an image created directly with rbd; you can see that all the Cinder volumes start with ‘volume-<uuid>‘.
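
If you want more detail about a single image (size, format, features, and for clones the parent), rbd info works on the same volume-<uuid> name; a quick check, using the ID from the listing above:

$ sudo rbd info volumes/volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede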

Create snapshot

In rbd everything is thinly provisioned, and snapshots and clones use copy-on-write.

A copy-on-write (COW) snapshot or clone has a dependency on the parent it was created from. The only unique blocks in the snapshot or clone are the blocks that have been modified (and thus copied). That makes snapshot creation very fast (you only need to update metadata), and data doesn’t actually move or get copied anywhere up front; instead it is copied on demand, or on write!

This dependency means that you cannot, for example, delete a volume that has snapshots, because that would make those snapshots unusable, like pulling the rug out from under them.
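
You can see this dependency from the Ceph side too: rbd children lists every clone that still depends on a given snapshot, and rbd will refuse to remove or unprotect a snapshot that has children. Using the same <…> placeholder convention as above:

$ sudo rbd children volumes/<volume image>@<snapshot>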

$ cinder snapshot-create <volume ID or name> --name snap1
$ cinder snapshot-create volume1 --name snap1
$ sudo rbd ls -l volumes
NAME SIZE PARENT FMT PROT LOCK
bar 1024M 1
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede 1024M 2
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@snapshot-6e9..d 1024M 2 yes

You can see that the snapshot uses the same volume name, with @snapshot-<snapshot ID> appended.
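
To look at just the snapshots of that one image, rather than listing the whole pool, rbd snap ls is handy (same image name as above):

$ sudo rbd snap ls volumes/volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede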

Create volume from snapshot

Get the snapshot ID from cinder snapshot-list:

$ cinder create --snapshot-id 6e93e928-2558-4f12-a9ab-12d25cd72dbd --name v-from-s

+--------------------------------+------------------------------------------------+
| Property | Value |
+--------------------------------+------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-07-29T00:40:00.000000 |
| description | None |
| encrypted | False |
| id | 18966249-f68b-4ed3-901e-7447a25dad03 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | v-from-s |
| os-vol-host-attr:host | vagrant-ubuntu-trusty-64.localdomain@ceph#ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 80463e7d9d8847169acd70b156ac3b61 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | 6e93e928-2558-4f12-a9ab-12d25cd72dbd |
| source_volid | None |
| status | creating |
| updated_at | 2016-07-29T00:40:00.000000 |
| user_id | 4180c9a6469b480cbbf0c5e79dc478fb |
| volume_type | ceph |
+--------------------------------+------------------------------------------------+
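
Because the new volume is a copy-on-write clone of the snapshot, rbd info on the new image should show a parent field pointing back at volume-1b681f9f-…@snapshot-6e93e928-…, the same relationship that appears in the PARENT column of rbd ls -l further below:

$ sudo rbd info volumes/volume-18966249-f68b-4ed3-901e-7447a25dad03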

Create a volume from another volume

RBD can only clone from a snapshot, so when we clone from a volume rather than from an existing snapshot, the driver first creates an internal snapshot of the source volume (the .clone_snap entry in the listing below) and clones from that.

$ cinder create --source-volid 1b681f9f-81f6-4965-ad89-28ffb10c1ede --name v-from-v

+--------------------------------+------------------------------------------------+
| Property | Value |
+--------------------------------+------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-07-29T00:44:47.000000 |
| description | None |
| encrypted | False |
| id | 9f79be73-4df6-4ab1-ab70-02c91df96439 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | v-from-v |
| os-vol-host-attr:host | vagrant-ubuntu-trusty-64.localdomain@ceph#ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 80463e7d9d8847169acd70b156ac3b61 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | 1b681f9f-81f6-4965-ad89-28ffb10c1ede |
| status | creating |
| updated_at | 2016-07-29T00:44:47.000000 |
| user_id | 4180c9a6469b480cbbf0c5e79dc478fb |
| volume_type | ceph |
+--------------------------------+------------------------------------------------+

$ sudo rbd ls -l volumes
NAME SIZE PARENT FMT PROT LOCK 
bar 1024M 1 
volume-18966249-f68b-4ed3-901e-7447a25dad03 1024M volumes/volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@snapshot-6e93e928-2558-4f12-a9ab-12d25cd72dbd 2 
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede 1024M 2 
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@snapshot-6e93e928-2558-4f12-a9ab-12d25cd72dbd 1024M 2 yes 
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@volume-9f79be73-4df6-4ab1-ab70-02c91df96439.clone_snap 1024M 2 yes 
volume-9f79be73-4df6-4ab1-ab70-02c91df96439 1024M volumes/volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@volume-9f79be73-4df6-4ab1-ab70-02c91df96439.clone_snap 2 

In this case the cloned volume’s parent is a snapshot named volume-<source ID>@volume-<clone ID>.clone_snap on the source image.
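
If you want to clean up the lab, the dependencies described above dictate the order: delete the clones first, then the snapshot, then the original volume (a sketch using the names from this post):

$ cinder delete v-from-v
$ cinder delete v-from-s
$ cinder snapshot-delete snap1
$ cinder delete volume1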

