Installation/decommissioning
Installing a new ceph monitor
...
- Run "ceph-disk list" to see which disks are recognized by ceph
- "chown" all the ceph disks to be owned by the "ceph" user and the "ceph" group
- Restart all the OSDs by running "ceph-disk activate <path-to-disk>" for each disk that "ceph-disk" lists as a "ceph data" disk.
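The chown/activate steps above can be combined into one loop. This is a sketch, not a tested procedure: it assumes the "ceph-disk list" output format where the device path is the first field on each "ceph data" line, so verify against the actual output on your host before running it.

```shell
# Sketch: re-own and re-activate every partition that "ceph-disk list"
# reports as a "ceph data" partition. The awk filter assumes the device
# path is the first field on such lines -- check your own output first.
ceph-disk list 2>/dev/null | awk '/ceph data/ {print $1}' | while read -r part; do
    chown ceph:ceph "$part"
    ceph-disk activate "$part"
done
```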
Storage management
HDDs vs SSDs in hybrid clusters
One of our clusters (ceph@stack.it) is a hybrid cluster, where some of the OSDs are HDDs and some are SSDs. In this cluster we tune CRUSH to ensure that some pools are placed on SSDs while others are placed on HDDs. The tuning is done in three distinct steps:
- Ensure that OSDs are classified correctly
- Create a CRUSH rule for each device class
- Set the desired CRUSH rule on a given pool
Classify OSDs
Ceph tries to classify each OSD as SSD or HDD automatically. The assigned classes can be seen with the following command:
Code Block
root@cephmon1:~# ceph osd tree
ID CLASS WEIGHT  TYPE NAME STATUS REWEIGHT PRI-AFF
....
 3   hdd 0.90959      osd.3     up  1.00000 1.00000
....
Unfortunately it is not always able to classify them correctly, and in that case a manual change is needed. To change the class of an OSD, remove the old class and set a new one:
Code Block
root@cephmon1:~# ceph osd crush rm-device-class osd.3
root@cephmon1:~# ceph osd crush set-device-class ssd osd.3
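If several OSDs were misclassified, the two commands can be wrapped in a loop. The OSD ids and the target class below are examples only; DRYRUN=1 makes the loop print the commands instead of running them, so set DRYRUN=0 on a real cluster once the output looks right.

```shell
# Sketch: batch reclassification. OSD_IDS and the "ssd" target class are
# placeholders -- replace them with your misclassified OSDs.
DRYRUN=1
OSD_IDS="3 4 5"
# With DRYRUN=1 this echoes the command; with DRYRUN=0 it executes it.
run() { if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi; }
for id in $OSD_IDS; do
    run ceph osd crush rm-device-class "osd.${id}"
    run ceph osd crush set-device-class ssd "osd.${id}"
done
```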
Create new CRUSH rules
Ceph provides sensible helpers for creating CRUSH rules that restrict a pool to a certain device class. Two rules, one for HDD and one for SSD, can be created like so:
Code Block
root@cephmon1:~# ceph osd crush rule create-replicated hdd-only default host hdd
root@cephmon1:~# ceph osd crush rule create-replicated ssd-only default host ssd
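For reference, the general form of the command in the upstream ceph CLI is shown below; the argument names are descriptive placeholders, not literal values.

```
# <rule-name>: name of the new rule
# <root>: CRUSH root to place data under (usually "default")
# <failure-domain>: bucket type to spread replicas across (e.g. "host")
# <device-class>: "hdd" or "ssd"
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <device-class>
```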
Assign CRUSH rules to pools
To see which CRUSH rules are available, use the following command:
Code Block
root@cephmon1:~# ceph osd crush rule ls
replicated_ruleset
hdd-only
ssd-only
To display which pools are assigned to which CRUSH rules, the following command can be helpful:
Code Block
root@cephmon1:~# for p in $(ceph osd pool ls); do echo -n ${p}-; ceph osd pool get $p crush_rule; done
rbd-crush_rule: replicated_ruleset
images-crush_rule: replicated_ruleset
volumes-crush_rule: replicated_ruleset
.rgw.root-crush_rule: replicated_ruleset
default.rgw.control-crush_rule: replicated_ruleset
default.rgw.meta-crush_rule: replicated_ruleset
default.rgw.log-crush_rule: replicated_ruleset
default.rgw.buckets.index-crush_rule: replicated_ruleset
default.rgw.buckets.data-crush_rule: hdd-only
default.rgw.buckets.non-ec-crush_rule: replicated_ruleset
To assign a new CRUSH rule to a pool, use the following command:
Code Block
root@cephmon1:~# ceph osd pool set <POOL> crush_rule <CRUSH-RULE>