Ceph: Simple Ceph Pool Commands for Beginners


Ceph is a very well documented technology; just check out the documentation for Ceph at ceph.com. Pretty much everything you want to know about Ceph is documented there. However, this also means that you sometimes need to dig around just to remember a few simple commands.

Because of this, I have decided to put a few notes below on creating, deleting, and working with Ceph pools.

List Pools

You can list your existing pools with the command below. In this example I only have one pool, called rbd, with a pool number of 13.

[root@host ~]# ceph osd lspools
13 rbd,
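Depending on your Ceph release, you may also have the `ceph osd pool ls` variant available, which can show each pool's settings in one shot (a sketch; the exact output fields vary by version):

```shell
# Pool names only, one per line.
ceph osd pool ls

# Pools with their settings (replica size, pg_num, crush rule, etc.).
ceph osd pool ls detail
```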

Create a Pool

OK, so now let's create a test pool, aptly named test-pool. I will create it with 128 PGs (placement groups).
[root@host ~]# ceph osd pool create test-pool 128
pool 'test-pool' created
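Note that you can also pass pgp_num (the number of PGs actually used for placement) as an extra argument; when omitted, it defaults to pg_num:

```shell
# Equivalent to the command above, with pgp_num spelled out explicitly.
ceph osd pool create test-pool 128 128
```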

Placement Groups

Note, however, that when I run my script, pg_per_osd.bash, my new pool (number 14) appears to account for 384 PGs. What?
ceph pg dump | awk '
 /^pg_stat/ { col=1; while($col!="up") {col++}; col++ }
 /^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;
 up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
 for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
 }
 END {
 printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
 for (i in poollist) printf("--------"); printf("----------------\n");
 for (i in osdlist) { printf("osd.%i\t", i); sum=0;
 for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; poollist[j]+=array[i,j] }; printf("| %i\n",sum) }
 for (i in poollist) printf("--------"); printf("----------------\n");
 printf("SUM :\t"); for (i in poollist) printf("%s\t",poollist[i]); printf("|\n");
 }'
[root@host ~]# ./pg_per_osd.bash
dumped all in format plain
pool :  13      14      | SUM
osd.26  48      36      | 84
osd.27  30      43      | 73
osd.19  41      38      | 79
osd.20  45      37      | 82
osd.21  42      53      | 95
osd.22  43      39      | 82
osd.23  50      38      | 88
osd.24  35      51      | 86
osd.25  50      49      | 99
SUM :   384     384     |
What has occurred is that my new pool (test-pool) was created with 128 PGs; however, each PG is replicated based on the global setting (osd_pool_default_size = 3) in /etc/ceph/ceph.conf, so 384 PG copies end up distributed across the OSDs.
We can verify that this is the case by taking a closer look at our new pool, test-pool.
Here we can confirm that the pool was indeed created with 128 PGs.
[root@host ~]# ceph osd pool get test-pool pg_num
pg_num: 128
We can also check the pool’s setting to see how many replicas will be created. Note that the number is 3. Multiply 128 PGs by 3 replicas and you get 384.
[root@host ~]# ceph osd pool get test-pool size
size: 3
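The arithmetic behind the 384 figure from the script output is simply pg_num times the replica size:

```shell
# Total PG copies across all OSDs = pg_num * size.
pg_num=128
size=3
echo $(( pg_num * size ))   # → 384
```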
You can also take a sneak peek at the minimum number of replicas that a pool can have before running in a degraded state.
[root@host ~]# ceph osd pool get test-pool min_size
min_size: 2
Other get/set commands are listed in the Ceph pool operations documentation.
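Each value you can `get` can generally also be changed with `set`. For example, a sketch of tuning the replica counts on the test pool (think twice before doing this on a production cluster):

```shell
# Drop the pool to 2 replicas instead of 3.
ceph osd pool set test-pool size 2

# Allow I/O to continue with only 1 surviving replica.
ceph osd pool set test-pool min_size 1
```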

Deleting A Pool

So this is a fun one, as you have to be very specific to delete a pool, going as far as typing out the pool name twice and including an option that would be very hard to accidentally type (however, be careful with the up arrow).
[root@host ~]# ceph osd pool delete test-pool test-pool --yes-i-really-really-mean-it
pool 'test-pool' removed
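One caveat: on newer Ceph releases (Luminous and later), pool deletion is disabled by default at the monitor level, and the delete command above will fail with an EPERM error until you allow it. A sketch of enabling and then re-disabling it:

```shell
# Temporarily allow pool deletion on the monitors.
ceph config set mon mon_allow_pool_delete true

ceph osd pool delete test-pool test-pool --yes-i-really-really-mean-it

# Turn the safety back on afterwards.
ceph config set mon mon_allow_pool_delete false
```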


