A look at RBD objects and what they consist of

Author: seamus Category: ceph Published: 2018-05-05 00:21

To create a format 2 image, add the `--image-format=2` parameter. Let's look directly at which objects an RBD format 2 image consists of:

[root@node1 ~]# rbd create image --size 100M --image-format=2
[root@node1 ~]# rados ls -p rbd
rbd_directory
rbd_id.image
rbd_header.3289416b8b4567

**rbd_id.{image name}**

The on-disk filename of the rbd_id object has the format: rbd\uid.{image name}__head_{hash}__{pool id}

[root@node1 ~]# ceph osd map rbd rbd_id.image
osdmap e188 pool 'rbd' (0) object 'rbd_id.image' -> pg 0.1e6f8db0 (0.30) -> up ([2,0], p2) acting ([2,0], p2)
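As a side note, the pg id 0.30 can be derived from the object-name hash (1e6f8db0). A minimal sketch, assuming pg_num is a power of two; the value 128 below is inferred from the `ceph osd map` outputs in this post and is not shown directly anywhere:

```python
# Hypothetical pg_num: inferred from the outputs in this post, not shown directly.
pg_num = 128
obj_hash = 0x1E6F8DB0  # object-name hash from the `ceph osd map` output above

# For a power-of-two pg_num, "hash mod pg_num" reduces to a bitmask.
pg = obj_hash & (pg_num - 1)
print("0.%x" % pg)  # → 0.30
```

The same mask reproduces every pg id shown later in this post (0.1c, 0.46, 0.b), which is what suggests pg_num = 128 here.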

Go to osd.2 and inspect the object under its current directory:

[root@node2 ~]# cd /var/lib/ceph/osd/ceph-2/current/0.30_head/
[root@node2 0.30_head]# ll
total 4
-rw-r--r-- 1 ceph ceph  0 Jan 31 15:32 __head_00000030__0
-rw-r--r-- 1 ceph ceph 18 Mar 26 14:58 rbd\uid.image__head_1E6F8DB0__0

[root@node2 0.30_head]# cat rbd\\uid.image__head_1E6F8DB0__0
3289416b8b4567
cat shows that the value stored in this object is 3289416b8b4567, i.e. the id of the image block, matching the output of the command below. The trailing _0 in the filename is the pool id (the number of the rbd pool).
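The 18-byte file contents can also be decoded by hand. A small sketch, assuming Ceph's standard string encoding (a 4-byte little-endian length prefix followed by the raw bytes), which is consistent with the 18-byte file size and the omap hexdumps shown below:

```python
import struct

# The 18 bytes of rbd\uid.image__head_1E6F8DB0__0 shown above:
raw = bytes.fromhex("0e000000") + b"3289416b8b4567"

(length,) = struct.unpack_from("<I", raw, 0)  # 4-byte LE length prefix
image_id = raw[4:4 + length].decode()
print(length, image_id)  # → 14 3289416b8b4567
```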

[root@node2 0.30_head]# rbd info image
rbd image 'image':
size 102400 kB in 25 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.3289416b8b4567
format: 2
features: layering
flags:
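The numbers in this output fit together: with order 22 each object is 1<<22 bytes (4096 kB), so a 102400 kB image spans 25 objects. A quick check:

```python
import math

size_bytes = 102400 * 1024     # "size 102400 kB"
order = 22                     # "order 22"
object_size = 1 << order       # 4194304 bytes = 4096 kB per object
num_objects = math.ceil(size_bytes / object_size)
print(object_size // 1024, num_objects)  # → 4096 25
```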

**rbd_directory**

The on-disk filename of the rbd_directory object has the format: rbd\udirectory__head_{hash}__{pool id}

[root@node1 0.46_head]# ceph osd map rbd rbd_directory
osdmap e188 pool 'rbd' (0) object 'rbd_directory' -> pg 0.30a98c1c (0.1c) -> up ([1,2], p1) acting ([1,2], p1)

[root@node1 0.46_head]# pwd
/var/lib/ceph/osd/ceph-1/current/0.46_head
[root@node1 0.46_head]# cd ../0.1c_head/
[root@node1 0.1c_head]# ll
total 0
-rw-r--r-- 1 ceph ceph 0 Jan 31 15:32 __head_0000001C__0
-rw-r--r-- 1 ceph ceph 0 Mar 26 14:58 rbd\udirectory__head_30A98C1C__0

This object holds a bidirectional mapping between the names and ids of all images in the pool:

[root@node1 0.1c_head]# rados -p rbd  listomapvals  rbd_directory
id_3289416b8b4567
value (9 bytes) :
00000000  05 00 00 00 69 6d 61 67  65                       |....image|
00000009

name_image
value (18 bytes) :
00000000  0e 00 00 00 33 32 38 39  34 31 36 62 38 62 34 35  |....3289416b8b45|
00000010  36 37                                             |67|
00000012
You can see the block name (image) that the id maps to, and the id that name_image (the block name) maps to.
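This directory structure can be reproduced in a few lines. A sketch, assuming the encoding seen in the hexdumps above (values are 4-byte little-endian length-prefixed strings; the `id_` and `name_` key prefixes come straight from the listomapvals output):

```python
import struct

def encode_str(s):
    # Ceph-style string encoding: 4-byte LE length prefix + raw bytes
    return struct.pack("<I", len(s)) + s.encode()

name, image_id = "image", "3289416b8b4567"
omap = {
    "id_" + image_id: encode_str(name),      # id   -> name
    "name_" + name:   encode_str(image_id),  # name -> id
}
print(omap["id_3289416b8b4567"].hex())  # → 05000000696d616765
```

The two entries are exactly the 9- and 18-byte values dumped above.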

**rbd_header.{image id}**

The on-disk filename of the rbd_header object has the format: rbd\uheader.{image id}__head_{hash}__{pool id}

[root@node1 ~]# ceph osd map rbd rbd_header.3289416b8b4567
osdmap e188 pool 'rbd' (0) object 'rbd_header.3289416b8b4567' -> pg 0.d49b9246 (0.46) -> up ([0,1], p0) acting ([0,1], p0)

[root@node1 ~]# cd /var/lib/ceph/osd/ceph-1/current/0.46_head/
[root@node1 0.46_head]# ll
total 0
-rw-r--r-- 1 ceph ceph 0 Jan 31 15:33 __head_00000046__0
-rw-r--r-- 1 ceph ceph 0 Mar 26 14:58 rbd\uheader.3289416b8b4567__head_D49B9246__0

This object records the rbd image's metadata, including size, order, object_prefix, snap_seq, parent (only present for cloned images), and snapshot_{snap id} (information about each snapshot).

Use listomapvals to view the object's attribute values (k/v):

[root@node1 0.46_head]# rados -p rbd listomapvals rbd_header.3289416b8b4567
features
value (8 bytes) :
00000000  01 00 00 00 00 00 00 00                           |........|
00000008

object_prefix
value (27 bytes) :
00000000  17 00 00 00 72 62 64 5f  64 61 74 61 2e 33 32 38  |....rbd_data.328|
00000010  39 34 31 36 62 38 62 34  35 36 37                 |9416b8b4567|
0000001b

order
value (1 bytes) :
00000000  16                                                |.|
00000001

size
value (8 bytes) :
00000000  00 00 40 06 00 00 00 00                           |..@.....|
00000008

snap_seq
value (8 bytes) :
00000000  00 00 00 00 00 00 00 00                           |........|
00000008

Use listomapkeys to view the object's keys:

[root@node1 0.b_head]# rados -p rbd listomapkeys rbd_header.3289416b8b4567
features
object_prefix
order
size
snap_seq

**Image without snapshots**

- object_prefix: the name prefix of the image's data objects

- order: used to compute the object size; e.g. order 22 gives 1<<22 = 4MB objects

- size: the size of the image

- snap_seq: snapshot sequence number; 0 when the image has no snapshots
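The order and size hexdumps above decode accordingly (little-endian, like the other values):

```python
import struct

order = bytes.fromhex("16")[0]  # single byte: 0x16 = 22
(size,) = struct.unpack("<Q", bytes.fromhex("0000400600000000"))  # 8-byte LE
print(order, size)  # → 22 104857600  (= 102400 kB, i.e. 100 MiB)
```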

**Image with snapshots**

The following attribute is added:

- snapshot_{snap id}: records the information of the corresponding snapshot

Take a snapshot of the image:

[root@node1 0.46_head]# rbd snap create rbd/image@snap_image
[root@node1 0.46_head]# rados -p rbd listomapvals rbd_header.3289416b8b4567
...
snap_seq
value (8 bytes) :
00000000  04 00 00 00 00 00 00 00                           |........|
00000008

snapshot_0000000000000004
value (87 bytes) :
00000000  04 01 51 00 00 00 04 00  00 00 00 00 00 00 0a 00  |..Q.............|
00000010  00 00 73 6e 61 70 5f 69  6d 61 67 65 00 00 40 06  |..snap_image..@.|
00000020  00 00 00 00 01 00 00 00  00 00 00 00 01 01 1c 00  |................|
00000030  00 00 ff ff ff ff ff ff  ff ff 00 00 00 00 fe ff  |................|
00000040  ff ff ff ff ff ff 00 00  00 00 00 00 00 00 00 00  |................|
00000050  00 00 00 00 00 00 00                              |.......|
00000057

**rbd_data.{image id}.{offset}**

The on-disk filename of an rbd_data object has the format: rbd\udata.{image id}.{object number}__head_{hash}__{pool id} (for a snapshot object, the snapshot id takes the place of "head")

These are the data objects of the rbd image; they hold the actual data contents.

List the objects:

[root@node1 0.b_head]# rados ls -p rbd | grep 3289
rbd_data.3289416b8b4567.0000000000000006
rbd_data.3289416b8b4567.0000000000000012
rbd_data.3289416b8b4567.0000000000000017
rbd_data.3289416b8b4567.0000000000000001
rbd_data.3289416b8b4567.0000000000000018
rbd_data.3289416b8b4567.000000000000000c
rbd_header.3289416b8b4567
rbd_data.3289416b8b4567.0000000000000000
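Note that the list is sparse: only objects that have actually been written exist. The trailing 16 hex digits are the object number, i.e. the byte offset within the image shifted right by the order. A small sketch of that mapping (the helper name is made up for illustration):

```python
def data_object_name(image_id, offset, order=22):
    # Object number = offset / object size, formatted as 16 hex digits.
    return "rbd_data.%s.%016x" % (image_id, offset >> order)

# The object covering byte offset 24 MiB of the image above:
print(data_object_name("3289416b8b4567", 24 * 1024 * 1024))
# → rbd_data.3289416b8b4567.0000000000000006
```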

Look up the OSD storage path of the first object:

[root@node1 mnt]# ceph  osd  map rbd  rbd_data.3289416b8b4567.0000000000000006
osdmap e189 pool 'rbd' (0) object 'rbd_data.3289416b8b4567.0000000000000006' -> pg 0.2b7fe90b (0.b) -> up ([0,1], p0) acting ([0,1], p0)

[root@node1 mnt]# cd /var/lib/ceph/osd/ceph-1/current/0.b_head/
[root@node1 0.b_head]# ll
total 16
-rw-r--r-- 1 ceph ceph     0 Jan 31 15:32 __head_0000000B__0
-rw-r--r-- 1 ceph ceph 16384 Mar 26 18:05 rbd\udata.3289416b8b4567.0000000000000006__head_2B7FE90B__0

The full on-disk object filename format is: rbd\udata.{image id}.{object number}__head_{hash}__{pool id}
