Distributed storage is widely used in cloud computing, and Ceph, a popular open-source distributed storage system, has become the preferred backend storage for OpenStack. This article briefly introduces Ceph's features and components, then walks through installing and deploying a Ceph cluster.
1. Ceph Overview
Ceph is an open-source distributed storage system. With the growth of OpenStack in the cloud computing space, Ceph has become OpenStack's preferred backend storage.
1.1 Ceph Features
- Unified interfaces:
  - Supports three storage interfaces: block storage, file storage, and object storage.
  - Supports custom interfaces and client drivers in multiple languages.
- High performance: with multiple replicas, reads and writes can be highly parallelized.
  - Discards the traditional centralized metadata-lookup scheme in favor of the CRUSH algorithm, giving an even data distribution and high parallelism.
  - Ceph clients read and write data by talking directly to the OSD storage devices; block storage and object storage require no metadata server.
  - Scales to thousands of storage nodes and supports data volumes from terabytes to petabytes.
- High availability: supports multiple strongly consistent replicas, which can be placed across hosts, racks, rooms, and data centers.
  - The replica count is flexible and can be set by the administrator.
  - The CRUSH algorithm controls the physical placement of replicas to separate failure domains, while preserving strong data consistency.
  - Automatically repairs and self-heals in a wide range of failure scenarios.
  - No single point of failure; the cluster largely manages itself.
- High scalability: covers both cluster size and storage capacity, with data-access bandwidth growing linearly as nodes are added.
  - Decentralized: CRUSH and hash-ring techniques remove any central bottleneck.
  - Ceph has no master/controller node, so it can be extended flexibly.
  - Performance grows linearly as nodes are added.
1.2 Ceph Use Cases
- Object storage (RADOSGW): provides a RESTful interface plus bindings for several programming languages; compatible with S3 and Swift.
- Block storage (RBD): provided by RBD; can be mounted directly as a disk and has built-in disaster-recovery mechanisms.
- File system (CephFS): provides CephFS, a POSIX-compatible network file system focused on high performance and large-capacity storage.
2. Ceph Components
Ceph provides components such as RADOS, OSD, MON, librados, RBD, RGW, and CephFS, but underneath, all of the higher layers are backed by RADOS storage.
2.1 Core Components
Ceph includes several key components: Ceph OSD, Ceph Monitor, and Ceph MDS:
- Monitors: Ceph monitors maintain the health state of the entire cluster along with the maps that describe it, such as the OSD map, monitor map, PG map, and CRUSH map. They also store the current version and latest change information; the monitor map can be viewed with "ceph mon dump".
- MDS (Metadata Server): stores the metadata of the Ceph file system. Note: Ceph block storage and Ceph object storage do not need an MDS.
- OSD (Object Storage Daemon): the object storage daemon, responsible for storing data and handling replication, recovery, backfilling, and rebalancing; it also reports monitoring information to the Ceph monitors by checking the heartbeats of other OSD daemons. When the storage cluster is configured for 2 replicas, at least 2 OSD daemons are needed for the cluster to reach the active+clean state. When building Ceph OSDs, SSDs formatted with the xfs file system are recommended; as a rule, one disk maps to one OSD.
- Client: handles access through the storage protocols and load balancing across nodes.
2.2 Ceph Functional Layers
- RADOS (Reliable Autonomic Distributed Object Store): the foundation of a Ceph storage cluster. In Ceph, all data is stored as objects; whatever the data type, the RADOS object store is responsible for holding those objects, and the RADOS layer keeps the data consistent.
- librados: a library that gives applications access to RADOS, and also provides the native interface underlying block storage, object storage, and the file system.
- RADOSGW: the gateway interface, providing the object storage service. Built on librgw and librados, it lets applications connect to the Ceph object store and exposes RESTful APIs compatible with S3 and (OpenStack) Swift.
- RBD: the block device layer; thin-provisioned, resizable, and striped across multiple OSDs. librbd provides the distributed block-device interface.
- CephFS: the Ceph file system, a POSIX-compatible file system built on the native librados interface, with the MDS providing the POSIX-compatible metadata service.
3. Deploying Ceph
3.1 Basic Information
The test environment consists of four hosts: tango-01 and the three Ceph nodes tango-centos01/02/03 (IP addresses are listed in the next section).
3.2 Pre-installation Preparation
3.2.1 Add hosts entries
Edit /etc/hosts on each node so that the cluster hosts can resolve one another by name, adding the IP-to-hostname mappings for all four nodes:
192.168.112.10 tango-01
192.168.112.101 tango-centos01
192.168.112.102 tango-centos02
192.168.112.103 tango-centos03
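The four entries above can be generated once and appended identically on every node; a minimal sketch (the `gen_hosts` helper name is illustrative, not from the original):

```shell
# Emit the hosts entries once so the same block can be appended verbatim
# on every node. Hostnames and IPs are taken from the table above.
gen_hosts() {
  printf '%s\n' \
    '192.168.112.10 tango-01' \
    '192.168.112.101 tango-centos01' \
    '192.168.112.102 tango-centos02' \
    '192.168.112.103 tango-centos03'
}

# On each node, run as root:
#   gen_hosts >> /etc/hosts
gen_hosts
```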
3.2.2 Set up passwordless login
Set up passwordless SSH for the root user on the Ceph nodes:
[root@tango-centos01 ~]# ssh-keygen
[root@tango-centos01 ~]# ssh-copy-id tango-centos01
[root@tango-centos01 ~]# ssh-copy-id tango-centos02
[root@tango-centos01 ~]# ssh-copy-id tango-centos03
3.2.3 Disable the firewall
Verify that the firewall is already disabled:
[root@tango-centos01 ~]# service firewalld status
Redirecting to /bin/systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
3.2.4 Mount disks on the data nodes
1) Each Ceph node gets an extra disk, /dev/sdb, for testing. Create the directories and mount the disk at /usr/local/ceph/osd{0,1,2}:
[root@tango-centos01 ~]# mkfs.xfs /dev/sdb
[root@tango-centos01 ~]# mkdir -p /usr/local/ceph/osd0
[root@tango-centos01 ~]# mount /dev/sdb /usr/local/ceph/osd0/
[root@tango-centos01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 10G 33M 10G 1% /usr/local/ceph/osd0
[root@tango-centos02 ~]# mkfs.xfs /dev/sdb
[root@tango-centos02 ~]# mkdir -p /usr/local/ceph/osd1
[root@tango-centos02 ~]# mount /dev/sdb /usr/local/ceph/osd1/
[root@tango-centos02 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 10G 33M 10G 1% /usr/local/ceph/osd1
[root@tango-centos03 ~]# mkfs.xfs /dev/sdb
[root@tango-centos03 ~]# mkdir -p /usr/local/ceph/osd2
[root@tango-centos03 ~]# mount /dev/sdb /usr/local/ceph/osd2/
[root@tango-centos03 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 10G 33M 10G 1% /usr/local/ceph/osd2
2) Update fstab on each node to add its mount entry (note that each node mounts its own directory: osd0, osd1, osd2):
[root@tango-centos01 ~]# vi /etc/fstab
/dev/sdb /usr/local/ceph/osd0 xfs defaults 0 0
[root@tango-centos02 ~]# vi /etc/fstab
/dev/sdb /usr/local/ceph/osd1 xfs defaults 0 0
[root@tango-centos03 ~]# vi /etc/fstab
/dev/sdb /usr/local/ceph/osd2 xfs defaults 0 0
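Since each node mounts a different directory (osd0, osd1, osd2), it is less error-prone to build the fstab line from the device and mount point than to copy-paste it; a small sketch (the `fstab_line` helper is hypothetical, not part of the original steps):

```shell
# Build one fstab entry from a device and its mount point, matching the
# "<device> <mountpoint> xfs defaults 0 0" format used above.
fstab_line() {  # usage: fstab_line <device> <mountpoint>
  printf '%s %s xfs defaults 0 0\n' "$1" "$2"
}

# Example for tango-centos02 (run as root):
#   fstab_line /dev/sdb /usr/local/ceph/osd1 >> /etc/fstab
fstab_line /dev/sdb /usr/local/ceph/osd1
```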
3) Grant permissions on the mount directories:
[root@tango-centos01 local]# chmod -R 777 /usr/local/ceph/osd0
[root@tango-centos02 local]# chmod -R 777 /usr/local/ceph/osd1
[root@tango-centos03 local]# chmod -R 777 /usr/local/ceph/osd2
Without this step, initialization will fail with an error because the permissions on the created directories are insufficient, so every node's directory must be granted permissions first.
3.3 Deploying the Ceph Cluster
3.3.1 Install the ceph-deploy tool on the admin node
1) Add the yum repo configuration on every node:
[root@tango-centos01 ~]# vi /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
2) Refresh the yum repo cache:
[root@tango-centos01 ~]# yum clean all && yum list
3) Install ceph-deploy on the admin node:
[root@tango-centos01 ~]# yum -y install ceph-deploy
3.3.2 Create the Ceph cluster
On the admin node, use ceph-deploy to create the Ceph cluster, setting tango-centos01 as the mon node:
[root@tango-centos01 ~]# cd /usr/local/ceph
[root@tango-centos01 ceph]# ceph-deploy new tango-centos01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy new tango-centos01
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0xe86a28>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0xea2cb0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['tango-centos01']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[tango-centos01][DEBUG ] connected to host: tango-centos01
[tango-centos01][DEBUG ] detect platform information from remote host
[tango-centos01][DEBUG ] detect machine type
[tango-centos01][DEBUG ] find the location of an executable
[tango-centos01][INFO ] Running command: /usr/sbin/ip link show
[tango-centos01][INFO ] Running command: /usr/sbin/ip addr show
[tango-centos01][DEBUG ] IP addresses found: [u'192.168.112.143', u'192.168.112.101', u'172.17.0.1', u'172.18.0.1']
[ceph_deploy.new][DEBUG ] Resolving host tango-centos01
[ceph_deploy.new][DEBUG ] Monitor tango-centos01 at 192.168.112.101
[ceph_deploy.new][DEBUG ] Monitor initial members are ['tango-centos01']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.112.101']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@tango-centos01 ceph]# ll
total 12
-rw-r--r-- 1 root root 207 Jan 23 09:34 ceph.conf
-rw-r--r-- 1 root root 3075 Jan 23 09:34 ceph-deploy-ceph.log
-rw------- 1 root root 73 Jan 23 09:34 ceph.mon.keyring
3.3.3 Adjust the replica count
Change the default replica count in the configuration file from 3 to 2, so that the cluster can reach the active+clean state with only two OSDs. Add the osd_pool_default_size line to the [global] section of ceph.conf:
[global]
fsid = a4dc4584-863a-4766-b725-d902d6f54f27
mon_initial_members = tango-centos01
mon_host = 192.168.112.101
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
3.3.4 Install Ceph
Run the following command on the admin node to install Ceph on all Ceph nodes:
[root@tango-centos01 ceph]$ ceph-deploy install tango-centos01 tango-centos02 tango-centos03
After it completes, check the Ceph version on each node:
[root@tango-centos01 ceph]$ ceph --version
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[root@tango-centos02 ~]$ ceph --version
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[root@tango-centos03 ~]$ ceph --version
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
3.3.5 Create and initialize the monitor node
Create the monitor node with the following command, then check its status via the admin socket:
[root@tango-centos01 ceph]$ ceph-deploy mon create tango-centos01
[root@tango-centos01 ceph]# ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.tango-centos01.asok mon_status
{
"name": "tango-centos01",
"rank": 0,
"state": "leader",
"election_epoch": 3,
"quorum": [
0
],
"outside_quorum": [],
"extra_probe_peers": [],
"sync_provider": [],
"monmap": {
"epoch": 1,
"fsid": "daff0d7d-d63d-48d7-ae8b-b70493240ad8",
"modified": "2022-01-23 10:00:54.091720",
"created": "2022-01-23 10:00:54.091720",
"mons": [
{
"rank": 0,
"name": "tango-centos01",
"addr": "192.168.112.101:6789/0"
}
]
}
}
3.3.6 Gather the keyring files
Collect the nodes' keyring files with the following command:
[ceph@tango-centos01 ceph]$ ceph-deploy gatherkeys tango-centos01
[root@tango-centos01 ceph]# ll
total 156
-rw------- 1 root root 113 Jan 23 10:01 ceph.bootstrap-mds.keyring
-rw------- 1 root root 71 Jan 23 10:01 ceph.bootstrap-mgr.keyring
-rw------- 1 root root 113 Jan 23 10:01 ceph.bootstrap-osd.keyring
-rw------- 1 root root 113 Jan 23 10:01 ceph.bootstrap-rgw.keyring
-rw------- 1 root root 129 Jan 23 10:01 ceph.client.admin.keyring
3.3.7 Create and activate the OSD services
1) Prepare the OSDs:
[root@tango-centos01 ceph]$ ceph-deploy osd prepare tango-centos01:/usr/local/ceph/osd0 tango-centos02:/usr/local/ceph/osd1 tango-centos03:/usr/local/ceph/osd2
2) Activate the OSDs, then inspect an OSD data directory:
[root@tango-centos01 ceph]$ ceph-deploy osd activate tango-centos01:/usr/local/ceph/osd0 tango-centos02:/usr/local/ceph/osd1 tango-centos03:/usr/local/ceph/osd2
[root@tango-centos01 osd0]# ll
total 5242924
-rw-r--r-- 1 root root 200 Jan 23 10:03 activate.monmap
-rw-r--r-- 1 ceph ceph 3 Jan 23 10:03 active
-rw-r--r-- 1 ceph ceph 37 Jan 23 10:01 ceph_fsid
drwxr-xr-x 4 ceph ceph 65 Jan 23 10:03 current
-rw-r--r-- 1 ceph ceph 37 Jan 23 10:01 fsid
-rw-r--r-- 1 ceph ceph 5368709120 Jan 23 10:32 journal
-rw------- 1 ceph ceph 56 Jan 23 10:03 keyring
-rw-r--r-- 1 ceph ceph 21 Jan 23 10:01 magic
-rw-r--r-- 1 ceph ceph 6 Jan 23 10:03 ready
-rw-r--r-- 1 ceph ceph 4 Jan 23 10:03 store_version
-rw-r--r-- 1 ceph ceph 53 Jan 23 10:03 superblock
-rw-r--r-- 1 ceph ceph 0 Jan 23 10:21 systemd
-rw-r--r-- 1 ceph ceph 10 Jan 23 10:03 type
-rw-r--r-- 1 ceph ceph 2 Jan 23 10:03 whoami
3.3.8 Push the configuration to all nodes
Use ceph-deploy to copy the configuration file and admin key to all nodes, so that running ceph commands no longer requires specifying the monitor address and ceph.client.admin.keyring:
[ceph@tango-centos01 ceph]$ ceph-deploy admin tango-centos01 tango-centos02 tango-centos03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy admin tango-centos01 tango-centos02 tango-centos03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x277ab90>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['tango-centos01', 'tango-centos02', 'tango-centos03']
[ceph_deploy.cli][INFO ] func : <function admin at 0x26bade8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to tango-centos01
[tango-centos01][DEBUG ] connection detected need for sudo
[tango-centos01][DEBUG ] connected to host: tango-centos01
[tango-centos01][DEBUG ] detect platform information from remote host
[tango-centos01][DEBUG ] detect machine type
[tango-centos01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to tango-centos02
[tango-centos02][DEBUG ] connection detected need for sudo
[tango-centos02][DEBUG ] connected to host: tango-centos02
[tango-centos02][DEBUG ] detect platform information from remote host
[tango-centos02][DEBUG ] detect machine type
[tango-centos02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to tango-centos03
[tango-centos03][DEBUG ] connection detected need for sudo
[tango-centos03][DEBUG ] connected to host: tango-centos03
[tango-centos03][DEBUG ] detect platform information from remote host
[tango-centos03][DEBUG ] detect machine type
[tango-centos03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
3.3.9 Check OSD status
1) Use osd list to view the status of each OSD:
[ceph@tango-centos01 ceph]$ ceph-deploy osd list tango-centos01 tango-centos02 tango-centos03
[tango-centos03][INFO ] Running command: sudo /usr/sbin/ceph-disk list
[tango-centos03][INFO ] ----------------------------------------
[tango-centos03][INFO ] ceph-2
[tango-centos03][INFO ] ----------------------------------------
[tango-centos03][INFO ] Path /var/lib/ceph/osd/ceph-2
[tango-centos03][INFO ] ID 2
[tango-centos03][INFO ] Name osd.2
[tango-centos03][INFO ] Status up
[tango-centos03][INFO ] Reweight 1.0
[tango-centos03][INFO ] Active ok
[tango-centos03][INFO ] Magic ceph osd volume v026
[tango-centos03][INFO ] Whoami 2
[tango-centos03][INFO ] Journal path /usr/local/ceph/osd2/journal
[tango-centos03][INFO ] ----------------------------------------
3.3.10 Deploy the MDS service
Deploy the MDS service with the following command:
[ceph@tango-centos01 ceph]$ ceph-deploy mds create tango-centos01
Check the MDS status:
[root@tango-centos01 ceph]# ceph mds stat
e2:, 1 up:standby
3.3.11 Check the Ceph cluster status
Use ceph -s to view the cluster status:
[root@tango-centos01 ceph]# ceph -s
cluster daff0d7d-d63d-48d7-ae8b-b70493240ad8
health HEALTH_WARN
64 pgs degraded
64 pgs stuck unclean
64 pgs undersized
monmap e1: 1 mons at {tango-centos01=192.168.112.101:6789/0}
election epoch 3, quorum 0 tango-centos01
osdmap e11: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v18: 64 pgs, 1 pools, 0 bytes data, 0 objects
10454 MB used, 10006 MB / 20460 MB avail
64 active+undersized+degraded
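For scripting around this status output, the health field can be pulled out of captured `ceph -s` text; a minimal sketch (the `health_of` helper is an illustration, shown here against a saved copy of the output above rather than a live cluster):

```shell
# Extract the value of the "health" line from `ceph -s` output.
health_of() { awk '$1 == "health" { print $2; exit }'; }

# On a live cluster: ceph -s | health_of
# Here, against the status text captured above:
sample='    cluster daff0d7d-d63d-48d7-ae8b-b70493240ad8
     health HEALTH_WARN
            64 pgs degraded'
printf '%s\n' "$sample" | health_of   # prints HEALTH_WARN
```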
At this point, the Ceph cluster deployment is complete. Note that ceph -s still reports HEALTH_WARN here because the PGs are undersized and degraded; the warning clears once enough OSDs are up and in for every PG to reach active+clean.
This article was first published on the WeChat official account 牧羊人的方向, under the title 分布式系列之分布式存储ceph初识.