# Ceph Commands


----

## Monitors
### Check monitor config
ceph daemon /var/run/ceph/ceph-mon.bvcephtest03.asok config show
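
A single option can also be read from the admin socket instead of dumping the whole config; the socket path is the one above and the option name here is just an example.

ceph daemon /var/run/ceph/ceph-mon.bvcephtest03.asok config get mon_allow_pool_delete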

## Placement Groups
### Debugging Placement groups
Dump a view of placement groups (or just the stuck ones)

ceph pg dump
ceph pg dump_stuck

Show the location (acting OSDs) of a placement group

ceph pg map 14.3

Show placement groups per pool

ceph pg ls-by-pool device_health_metrics
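
For a closer look at a single placement group (for example one of the stuck ones), it can be queried directly; 14.3 is just the example PG id used above.

ceph pg 14.3 query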


## OSDs
### To remove an OSD
Stop it

systemctl stop ceph-osd@0.service

Mark it out and down

[13/11 10:06:48] root@ixcephosd01 ~ # ceph osd out 0
marked out osd.0.
[13/11 10:07:47] root@ixcephosd01 ~ # ceph osd down 0
marked down osd.0.

Remove it

[13/11 10:08:08] root@ixcephosd01 ~ # ceph osd rm 0
removed osd.0

Remove its auth entry

[13/11 10:08:58] root@ixcephosd01 ~ # ceph auth del osd.0
updated

Clean up any volume groups as required
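
A sketch of that cleanup, assuming the OSD was deployed with ceph-volume/LVM and that /dev/sdb is the now-unused device; zap removes the LVM volume group and wipes the disk so it can be redeployed.

ceph-volume lvm zap /dev/sdb --destroy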


## Radosgw
### Remove an instance of radosgw

These steps were used when attempting a re-install. To remove the radosgw instances:

Stop the services and remove the packages

systemctl stop ceph-radosgw@rgw.ixcephgw01.rgw0.service


yum list installed | grep ceph
yum remove ceph-radosgw.x86_64

Delete any entries in ceph.conf, like below, and restart the managers and monitors

[17/11 12:38:54] root@ixcephgw01 ~ # cat /etc/ceph/ceph.conf
[client.rgw.ixcephgw01.rgw0]
host = ixcephgw01
keyring = /var/lib/ceph/radosgw/ceph-rgw.ixcephgw01.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-ixcephgw01.rgw0.log
rgw frontends = beast endpoint=10.64.119.10:8080
rgw thread pool size = 512
debug ms = 1
debug rgw = 20

[client.rgw.ixcephgw03.rgw0]
host = ixcephgw03
keyring = /var/lib/ceph/radosgw/ceph-rgw.ixcephgw03.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-ixcephgw03.rgw0.log
rgw frontends = beast endpoint=10.64.119.10:8080
rgw thread pool size = 512

[global]
cluster network = 192.168.20.0/24
fsid = 898f6ff5-d1b9-4e95-ad0b-77dac5a943e3
mon host = [v2:192.168.20.11:3300,v1:192.168.20.11:6789],[v2:192.168.20.13:3300,v1:192.168.20.13:6789],[v2:192.168.20.21:3300,v1:192.168.20.21:6789]
mon initial members = ixcephgw01,ixcephgw03,ixcephosd01
osd pool default crush rule = -1
public network = 192.168.20.0/24
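
To restart the monitors and managers after editing ceph.conf, the systemd targets can be used on each mon/mgr host; this assumes the standard package unit names.

systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target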

Delete the corresponding ceph auth entries

[17/11 12:58:39] root@ixcephgw01 ~ # ceph auth del client.rgw.ixcephgw01.rgw0
updated
[17/11 12:58:52] root@ixcephgw01 ~ # ceph auth del client.rgw.ixcephgw03.rgw0
updated

Clear the contents of /var/lib/ceph/radosgw

[17/11 13:00:15] root@ixcephgw01 ~ # rm -rf /var/lib/ceph/radosgw/

## Pools

### Find Pool stats and Config


rados df
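
The per-pool configuration (size, crush rule, autoscale mode and so on) can be listed as well; these are standard commands, with cold_storage being the pool created below.

ceph osd pool ls detail
ceph osd pool get cold_storage all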

### Create a Pool


The key attributes of the pools (for our use) are the replication rules, crush rulesets and erasure coding profiles.

ceph osd pool create <pool_name> <pg_num> <pgp_num> <replicated or erasure> <ec-profile (opt)> <crush rule (opt)> autoscale-mode=on

ceph osd pool create cold_storage 2048 2048 erasure maven-ec-profile autoscale-mode=on

Pools can have a crush rule attached to them to dictate where objects are stored. In our case, we want our replicated pool to use only SSDs. The following command creates a standard replicated rule, changing the default root from 'all' to 'ssd'.

ceph osd crush rule create-replicated ssd_replicated_rule default host ssd
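
A replicated pool can then be created against that rule; the pool name and PG counts below are only illustrative.

ceph osd pool create hot_storage 128 128 replicated ssd_replicated_rule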


As of Ceph 15.0, erasure profiles can include a crush device class and failure domain, so they do not require a dedicated crush rule as well. The example below sets the data and parity blocks (k and m), limits placement to hdd and ensures the data can tolerate a host failure (each block will be written to a different host).

ceph osd erasure-code-profile set maven-ec-profile k=5 m=3 crush-device-class=hdd crush-failure-domain=host
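
The resulting profile can be checked before it is used; maven-ec-profile is the profile created above.

ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get maven-ec-profile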

### To Delete a Pool


The monitors must allow it

ceph tell mon.\* injectargs --mon-allow-pool-delete=true

Delete the pool

ceph osd pool delete poolname poolname --yes-i-really-really-mean-it
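
It is probably worth switching the flag back off once the pool has been removed, using the same injectargs mechanism as above.

ceph tell mon.\* injectargs --mon-allow-pool-delete=false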


## Placement Targets
### Delete an entire placement target
radosgw-admin zone placement rm --rgw-zone default --placement-id pcap-placement
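
To confirm which placement targets exist before or after the removal, the zone's placement targets can be listed; this assumes the same default zone as above.

radosgw-admin zone placement list --rgw-zone default
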
### Delete an attribute of the placement target
To delete the GLACIER storage class.

radosgw-admin zone placement rm --rgw-zone default --placement-id dev-placement --data-pool cold_store --storage-class GLACIER --compression lz4

## MDS & Cephfs
### To Delete MDS and Filesystem
1. Stop the services and remove packages

systemctl stop ceph-mds@ixcephgw04.service


yum remove ceph-mds.x86_64

2. Delete the filesystem; this needs to be done before removing the pools due to the application association.

[19/11 08:40:10] root@ixcephosd02 ~ # ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
Error EBUSY: pool 'cephfs_data' is in use by CephFS
[19/11 08:41:59] root@ixcephosd02 ~ # ceph fs rm cephfs --yes-i-really-mean-it

3. Remove data and metadata pools

[19/11 08:43:44] root@ixcephosd02 ~ # ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
pool 'cephfs_data' removed
[19/11 08:43:54] root@ixcephosd02 ~ # ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
pool 'cephfs_metadata' removed

4. Remove auth

ceph auth del mds.ixcephgw02


ceph auth del mds.ixcephgw04
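
A quick check that the filesystem and MDS daemons are gone; both are standard status commands.

ceph fs ls
ceph mds stat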

## Crush Rules
### View crush rules

[20/11 12:40:06] root@ixcephosd02 ~ # ceph osd crush rule ls
replicated_rule
fast

### Dump a crush rule


[20/11 12:40:39] root@ixcephosd02 ~ # ceph osd crush rule dump replicated_rule
{
    "rule_id": 0,
    "rule_name": "replicated_rule",
    "ruleset": 0,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "default"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}

Ceph can distinguish based on device class as of Ceph Luminous. Previously, to create a class-specific storage rule within a cluster, the map would need to be manually edited to contain two roots; each host would then appear multiple times, as hosta-hdd, hosta-ssd, etc. However, starting from Ceph Luminous, the OSD can identify the device class and we can create crush rules using a single device class with just one line:
https://ceph.io/community/new-luminous-crush-device-classes/

ceph osd crush rule create-replicated <rule-name> <root> <failure domain> <device class>

ceph osd crush rule create-replicated fast default host ssd

We can view classes and OSDs per class...

[20/11 12:45:13] root@ixcephgw01 ~ # ceph osd crush class ls
[
    "ssd",
    "hdd"
]
[20/11 12:46:36] root@ixcephgw01 ~ # ceph osd crush class ls-osd ssd
0
1
2
3
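
An existing pool can then be switched onto the class-specific rule; 'fast' is the rule created above and the pool name is only illustrative.

ceph osd pool set hot_storage crush_rule fast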
