# Ceph Commands
## Monitors
### Check monitor config
ceph daemon /var/run/ceph/ceph-mon.bvcephtest03.asok config show
## Placement Groups
### Debugging Placement groups
Dump a view of placement groups (or just the stuck ones):
ceph pg dump
ceph pg dump_stuck
Show locations
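The command for this step is not included in the source; a common way to show which OSDs host a placement group, assuming you already know the PG id, is:

```shell
# Show the OSDs (up and acting sets) for a given placement group id
ceph pg map <pg-id>
```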
Remove it
Delete any entries in ceph.conf like the following, then restart the managers and monitors:
[client.rgw.ixcephgw03.rgw0]
host = ixcephgw03
keyring = /var/lib/ceph/radosgw/ceph-rgw.ixcephgw03.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-ixcephgw03.rgw0.log
rgw frontends = beast endpoint=10.64.119.10:8080
rgw thread pool size = 512
[global]
cluster network = 192.168.20.0/24
fsid = 898f6ff5-d1b9-4e95-ad0b-77dac5a943e3
mon host = [v2:192.168.20.11:3300,v1:192.168.20.11:6789],[v2:192.168.20.13:3300,v1:192.168.20.13:6789],[v2:192.168.20.21:3300,v1:192.168.20.21:6789]
mon initial members = ixcephgw01,ixcephgw03,ixcephosd01
osd pool default crush rule = -1
public network = 192.168.20.0/24
## Pools
ceph osd pool create cold_storage 2048 2048 erasure maven-ec-profile autoscale-mode=on
Pools can have a CRUSH rule attached to them to dictate where objects are stored.
In our case, we want our replicated pool to use only SSDs. The following command
creates a standard replicated rule, changing the default root from 'all' to 'ssd'.
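The command itself is missing from the source; a typical way to create a replicated rule restricted to the `ssd` device class looks like the following (the rule name `replicated_ssd` and the `host` failure domain are assumptions):

```shell
# Create a replicated CRUSH rule that only selects OSDs carrying the "ssd"
# device class (rule name and failure domain here are assumptions)
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Attach the rule to the pool
ceph osd pool set <pool-name> crush_rule replicated_ssd
```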
2. Delete the filesystem; this needs to be done before removing the pools due to the application association.
4. Remove auth
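The commands for these steps are not included in the source; for a filesystem named `cephfs`, the delete and auth cleanup might look like the following (the filesystem and client names are assumptions):

```shell
# Mark the filesystem down, then delete it
# (filesystem name "cephfs" is an assumption)
ceph fs fail cephfs
ceph fs rm cephfs --yes-i-really-mean-it

# Remove the client key that was used to mount it
# (client name is an assumption)
ceph auth del client.cephfs
```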
## Crush Rules
View crush rules
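The rule listing commands are:

```shell
# List CRUSH rule names
ceph osd crush rule ls

# Show the full rule definitions as JSON
ceph osd crush rule dump
```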