10 Commands Every Ceph Administrator Should Know
1. Check or watch cluster health: ceph status || ceph -w
If you want to quickly verify that your cluster is operating normally, use ceph status to get a bird's-eye view of cluster status (hint: typically, you want your cluster to be active+clean). You can also watch cluster activity in real time with ceph -w; you'll typically use this when you add or remove OSDs and want to see the placement groups adjust.
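A quick sketch of both checks, run from any node with an admin keyring:

```shell
# One-shot overview; look for HEALTH_OK and all PGs active+clean.
ceph status        # "ceph -s" is the short form

# Stream cluster events as they happen (Ctrl-C to stop); useful while
# adding or removing OSDs to watch placement groups rebalance.
ceph -w
```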
2. Check cluster usage stats: ceph df
To check a cluster's data usage and data distribution among pools, use ceph df. This provides information on available and used storage space, plus a list of pools and how much storage each pool consumes. Use this often to check that your cluster is not running out of space.
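For example:

```shell
# Cluster-wide usage plus a per-pool breakdown.
ceph df

# More per-pool detail (object counts, read/write statistics).
ceph df detail
```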
3. Check placement group stats: ceph pg dump
When you need statistics for the placement groups in your cluster, use ceph pg dump. You can also get the data in JSON, which is convenient for automated report generation.
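Both forms look like this:

```shell
# Human-readable PG statistics.
ceph pg dump

# The same data as JSON, ready for scripted report generation.
ceph pg dump --format=json
```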
4. View the CRUSH map: ceph osd tree
Need to identify the physical data center, room, row, and rack of a failed OSD quickly? Use ceph osd tree, which prints an ASCII tree of the CRUSH map showing each host, its OSDs, whether they are up or down, and their weights.
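A sketch of the command and the kind of tree it prints (the exact columns vary by Ceph release; the hosts and IDs below are illustrative, not from the article):

```shell
ceph osd tree
# Illustrative output:
# ID  CLASS  WEIGHT   TYPE NAME       STATUS  REWEIGHT  PRI-AFF
# -1         3.63869  root default
# -3         1.81940      host node1
#  0    hdd  0.90970          osd.0       up   1.00000  1.00000
#  1    hdd  0.90970          osd.1     down         0  1.00000
```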
5. Create or remove OSDs: ceph osd create || ceph osd rm
Use ceph osd create to add a new OSD to the cluster. If no UUID is given, it will be set automatically when the OSD starts up. When you need to remove an OSD from the cluster, use ceph osd rm with the OSD's ID.
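A sketch of both directions; osd.0 is a placeholder ID, and the removal steps shown are a common cleanup sequence rather than something this article spells out:

```shell
# Allocate a new OSD ID; the UUID is generated when the daemon first starts.
ceph osd create

# A common removal sequence for a dead OSD (osd.0 is a placeholder):
ceph osd out 0               # stop mapping new data to it
ceph osd crush remove osd.0  # drop it from the CRUSH map
ceph auth del osd.0          # delete its authentication key
ceph osd rm 0                # remove the OSD from the cluster
```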
6. Create or delete a storage pool: ceph osd pool create || ceph osd pool delete
Create a new storage pool with a name and number of placement groups with
ceph osd pool create. Remove it (and wave bye-bye to all the data in it) with ceph
osd pool delete.
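A sketch using a placeholder pool name. The PG-count guideline used here (roughly 100 PGs per OSD divided by the replica count, rounded down to a power of two) is a common rule of thumb, not something the article specifies:

```shell
# Rough PG-count guideline (assumption): (OSDs * 100) / replicas,
# rounded down to a power of two.
osds=12 replicas=3
target=$(( osds * 100 / replicas ))
pgs=1; while [ $(( pgs * 2 )) -le "$target" ]; do pgs=$(( pgs * 2 )); done
echo "$pgs"    # 256 for 12 OSDs and 3 replicas

# Then (mypool is a placeholder name):
#   ceph osd pool create mypool 256
# Deleting requires typing the name twice plus a safety flag:
#   ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
```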
7. Repair an OSD: ceph osd repair
If scrubbing turns up inconsistencies on an OSD, you can instruct it to repair them with ceph osd repair and the OSD's ID.
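A minimal sketch, assuming the inconsistent data lives on osd.2 (a placeholder ID):

```shell
# Ask osd.2 to repair inconsistencies found during scrubbing.
ceph osd repair 2
```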
8. Benchmark an OSD: ceph tell
Added an awesome new storage device to your cluster? Use ceph tell to see how well it performs by running a simple write benchmark against it. By default, the test writes a total of 1 GB in 4-MB increments.
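For example, against a placeholder osd.0:

```shell
# Default benchmark: write 1 GB total in 4 MB chunks to osd.0.
ceph tell osd.0 bench

# Optionally pass total bytes and block size, e.g. 100 MB in 4 MB blocks:
ceph tell osd.0 bench 104857600 4194304
```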
9. Adjust an OSD's CRUSH weight: ceph osd crush reweight
Ideally, you want all your OSDs to be the same in terms of throughput and capacity...but this isn't always possible. When your OSDs differ in their key attributes, use ceph osd crush reweight to modify their weights in the CRUSH map so that the cluster is properly balanced and OSDs of different types receive an appropriately adjusted number of I/O requests and data.
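By convention, CRUSH weights track raw capacity in TiB. A sketch of deriving a weight for a 2 TB (decimal) drive; the drive size and the osd.3 ID are placeholders:

```shell
# Express a 2 TB (decimal) drive as TiB, rounded to two decimals.
bytes=2000000000000
weight=$(awk -v b="$bytes" 'BEGIN { printf "%.2f", b / (2 ^ 40) }')
echo "$weight"    # 1.82

# Then apply it:
#   ceph osd crush reweight osd.3 "$weight"
```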
10. List cluster keys: ceph auth list
Ceph uses keyrings to store one or more Ceph authentication keys and capability specifications. The ceph auth list command provides an easy way to keep track of keys and capabilities.
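For example:

```shell
# Show every key and its capabilities ("ceph auth ls" in newer releases).
ceph auth list

# Inspect a single entity, e.g. the admin key:
ceph auth get client.admin
```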
Brian Chang