Running ceph -s shows that the cluster health is not OK; the details are as follows:
[root@node1 ceph]# ceph -s
  cluster:
    id:     b697e78a-2687-4291-93bf-42739e967bec
    health: HEALTH_WARN
            too few PGs per OSD (16 < min 30)

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node2(active), standbys: node3, node1
    osd: 6 osds: 6 up, 6 in
    rgw: 3 daemons active

  data:
    pools:   4 pools, 32 pgs
    objects: 173 objects, 1.50KiB
    usage:   6.05GiB used, 23.9GiB / 30.0GiB avail
    pgs:     32 active+clean
Since this is a freshly deployed cluster, only the four default RGW pools exist:
[root@node1 ceph]# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED    %RAW USED
    30.0GiB     23.9GiB     6.08GiB     20.29
POOLS:
    NAME                    ID    USED     %USED    MAX AVAIL    OBJECTS
    .rgw.root               1     1003B    0        7.47GiB      3
    default.rgw.control     2     0B       0        7.47GiB      7
    default.rgw.meta        3     366B     0        7.47GiB      2
    default.rgw.log         4     0B       0        7.47GiB      159
Check the pg_num of each of these pools:
[root@node1 ceph]# ceph osd pool get .rgw.root pg_num
pg_num: 8
[root@node1 ceph]# ceph osd pool get default.rgw.control pg_num
pg_num: 8
[root@node1 ceph]# ceph osd pool get default.rgw.meta pg_num
pg_num: 8
[root@node1 ceph]# ceph osd pool get default.rgw.log pg_num
pg_num: 8
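The same information for all pools can also be read in one shot with ceph osd pool ls detail, which prints one line per pool including its pg_num and pgp_num:

ceph osd pool ls detail    # each pool line includes "... pg_num 8 pgp_num 8 ..."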
The total is 4 × 8 = 32 PGs. Ceph counts every PG replica when it computes PGs per OSD, and since these pools were created with the default replica size of 3, the 6 OSDs each carry roughly 32 × 3 / 6 = 16 PG copies, which matches the warning above and falls below the configured minimum of 30.
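The same arithmetic can be reproduced from the cluster itself; a minimal sketch that sums pg_num × size over all pools and divides by the number of OSDs:

total=0
for pool in $(ceph osd pool ls); do
    pg=$(ceph osd pool get "$pool" pg_num | awk '{print $2}')
    size=$(ceph osd pool get "$pool" size | awk '{print $2}')
    total=$((total + pg * size))
done
osds=$(ceph osd ls | wc -l)
echo "average PG copies per OSD: $((total / osds))"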
Fix: increase the pg_num of these default pools:
[root@node1 ceph]# ceph osd pool set .rgw.root pg_num 64
set pool 1 pg_num to 64
[root@node1 ceph]# ceph osd pool set default.rgw.control pg_num 64
set pool 2 pg_num to 64
[root@node1 ceph]# ceph osd pool set default.rgw.meta pg_num 64
set pool 3 pg_num to 64
[root@node1 ceph]# ceph osd pool set default.rgw.log pg_num 64
set pool 4 pg_num to 64
[root@node1 ceph]# ceph -s
  cluster:
    id:     b697e78a-2687-4291-93bf-42739e967bec
    health: HEALTH_WARN
            4 pools have pg_num > pgp_num

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node2(active), standbys: node3, node1
    osd: 6 osds: 6 up, 6 in
    rgw: 3 daemons active

  data:
    pools:   4 pools, 256 pgs
    objects: 171 objects, 1.05KiB
    usage:   6.05GiB used, 23.9GiB / 30.0GiB avail
    pgs:     256 active+clean
The new warning shows that pgp_num has to be raised along with pg_num (the two should normally be kept equal), so set pgp_num to 64 on each pool as well:
[root@node1 ceph]# ceph osd pool set default.rgw.log pgp_num 64
set pool 4 pgp_num to 64
[root@node1 ceph]# ceph osd pool set default.rgw.meta pgp_num 64
set pool 3 pgp_num to 64
[root@node1 ceph]# ceph osd pool set default.rgw.control pgp_num 64
set pool 2 pgp_num to 64
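For reference, the same fix can be applied to all four pools in a single loop instead of pool by pool (64 is simply the value chosen in this walkthrough, not a universal recommendation):

for pool in .rgw.root default.rgw.control default.rgw.meta default.rgw.log; do
    ceph osd pool set "$pool" pg_num 64
    ceph osd pool set "$pool" pgp_num 64
done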
[root@node1 ceph]# ceph health detail
HEALTH_OK
[root@node1 ceph]# ceph -s
  cluster:
    id:     b697e78a-2687-4291-93bf-42739e967bec
    health: HEALTH_WARN
            6/516 objects misplaced (1.163%)

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node2(active), standbys: node3, node1
    osd: 6 osds: 6 up, 6 in; 15 remapped pgs
    rgw: 3 daemons active

  data:
    pools:   4 pools, 256 pgs
    objects: 172 objects, 1.47KiB
    usage:   6.07GiB used, 23.9GiB / 30.0GiB avail
    pgs:     1.953% pgs not active
             6/516 objects misplaced (1.163%)
             213 active+clean
             18  active+remapped+backfill_wait
             10  active+remapped+backfilling
             9   active+clean+remapped
             5   peering
             1   active+clean+scrubbing

  io:
    recovery: 0B/s, 1 objects/s
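While the rebalance is running, progress can be watched until every PG is back to active+clean, for example:

ceph pg stat          # one-line summary such as "256 pgs: 256 active+clean; ..."
watch -n 5 ceph -s    # or refresh the full status every 5 seconds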
Raising pgp_num triggers data rebalancing, which is why ceph -s temporarily reports misplaced objects and backfilling PGs. Once the backfill completes, the cluster status returns to OK and the issue is resolved:
[root@node1 ceph]# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED    %RAW USED
    30.0GiB     23.9GiB     6.08GiB     20.29
POOLS:
    NAME                    ID    USED     %USED    MAX AVAIL    OBJECTS
    .rgw.root               1     1003B    0        7.47GiB      3
    default.rgw.control     2     0B       0        7.47GiB      7
    default.rgw.meta        3     366B     0        7.47GiB      2
    default.rgw.log         4     0B       0        7.47GiB      159
[root@node1 ceph]# ceph -s
  cluster:
    id:     b697e78a-2687-4291-93bf-42739e967bec
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node2(active), standbys: node3, node1
    osd: 6 osds: 6 up, 6 in
    rgw: 3 daemons active

  data:
    pools:   4 pools, 256 pgs
    objects: 172 objects, 1.60KiB
    usage:   6.08GiB used, 23.9GiB / 30.0GiB avail
    pgs:     256 active+clean
[root@node1 ceph]# ceph health detail
HEALTH_OK
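As a final sanity check, confirm that pg_num and pgp_num now agree on every pool; a minimal loop reusing the query commands from above:

for pool in $(ceph osd pool ls); do
    echo "== $pool =="
    ceph osd pool get "$pool" pg_num
    ceph osd pool get "$pool" pgp_num
done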