I. Ceph Overview:
Ceph is a next-generation free-software distributed file system designed and developed out of the doctoral dissertation of Sage Weil at the University of California, Santa Cruz. Its design goals are good scalability (to the petabyte level and beyond), high performance, and high reliability. The name comes from the mascot of UCSC (Ceph's birthplace): "Sammy", a banana-colored slug, a shell-less mollusk. The many-tentacled cephalopod that the name "Ceph" evokes is a vivid metaphor for the high degree of parallelism in a distributed file system.
The design follows three principles: separation of data from metadata, dynamic and distributed metadata management, and a reliable, unified distributed object storage mechanism.

II. Basic Architecture:
1. Ceph is a highly available, easy-to-manage, open-source distributed storage system that can provide object storage, block storage, and file storage services within a single system. It consists of RADOS, the core of the Ceph storage system, plus the block storage, object storage, and file system storage interfaces built on top of it.
2. Storage types:
?Block storage: among conventional storage, DAS and SAN also provide block storage, as does OpenStack's Cinder service, for example over iSCSI. The block interface usually takes the form of a QEMU driver or a kernel module, implementing either the Linux block device interface or the block driver interface provided by QEMU; examples include Sheepdog, AWS EBS, QingCloud volumes, Alibaba Cloud's Pangu system, and Ceph's RBD (RBD is Ceph's block storage interface).
?Object storage: the object storage concept appeared relatively late. The storage standards organization SNIA defined it as early as 2004, but early on it appeared mostly in hyperscale systems, so it never became widely known and the related products stayed lukewarm. Only when cloud computing and big data were pushed into the mainstream did it gradually enter public view. Block storage and file storage are still, by and large, used inside dedicated local networks, whereas object storage's strength is the internet and public networks: it targets massive data volumes and massive concurrent access. Internet-facing applications are its natural fit (the same condition applies to cloud computing in general; internet-based applications migrate to the cloud most easily, because before the word "cloud" existed they were already running there). Essentially every mature public cloud, domestic or foreign, offers an object storage product; typical interfaces are HTTP object APIs such as Amazon S3 and OpenStack Swift, and in Ceph this role is played by RADOSGW.
?File system storage: the same family as traditional file systems such as ext4, the difference being that distributed storage provides parallelized access, e.g. Ceph's CephFS (CephFS is Ceph's file storage interface). File-like interfaces that are not POSIX, such as GlusterFS and HDFS, are sometimes grouped into this category as well. NFS and NAS also belong to file system storage.
?Summary: a comparison of the three types (the comparison chart from the original is not reproduced here).
3. Ceph basic architecture (the architecture diagram from the original is not reproduced here).

III. Architecture Components in Detail:
?RADOS: the foundation that all other client interfaces use and deploy on. A typical RADOS deployment consists of a small number of Monitor daemons and a large number of OSD storage devices; on top of a dynamically changing cluster of heterogeneous storage devices it provides a stable, scalable, high-performance, single logical object storage interface.
?Ceph client interfaces (Clients): everything in the Ceph architecture above the RADOS base (LIBRADOS, RADOSGW, RBD, and Ceph FS) is collectively called the Ceph client interfaces. In short, RADOSGW, RBD, and Ceph FS are developed against the multi-language programming interfaces provided by LIBRADOS, so the layers form a stepwise progression.
?Ceph FS: the Ceph file system (CephFS) is a POSIX-compatible file system that uses a Ceph storage cluster to store its data (the CephFS structure diagram from the original is not reproduced here). Further reading: https://www.sohu.com/a/144775333_151779

IV. How Ceph Stores Data:
A Ceph storage cluster receives files from clients. Each file is split by the client into one or more objects, the objects are grouped (into placement groups, PGs), and the groups are then stored on the cluster's OSD nodes according to a placement policy (the flow diagram from the original is not reproduced here).

V. Ceph's Advantages (this section consisted of a chart in the original and is not reproduced here).

VI. Case Study: Building Ceph Distributed Storage;
Case environment: the environment table from the original is not reproduced; the transcripts below use four CentOS 7 machines: dlp (192.168.100.101, admin node), node1 (192.168.100.102), node2 (192.168.100.103), and ceph-client (192.168.100.104).
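Before walking through the deployment, note that the file-to-object-to-PG-to-OSD mapping just described can be observed directly on any running cluster. A minimal sketch, assuming an admin-capable host such as the dlp node built below and the default rbd pool; the object name test-object is purely illustrative:

# store a small file as a single RADOS object in the default rbd pool
rados put test-object /etc/hosts --pool=rbd
# print the placement group and the up/acting OSD set the object maps to
ceph osd map rbd test-object
# remove the test object again
rados rm test-object --pool=rbd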
Case steps:
?Prepare the base environment;
?Configure the NTP time service;
?Install the Ceph software on the dlp, node1, node2, and client nodes;
?From the dlp node, manage the storage nodes: install and register node information;
?Configure the Ceph mon monitor daemons;
?Configure the Ceph osd storage daemons;
?Verify the Ceph cluster status;
?Configure the Ceph mds metadata daemon;
?Configure the Ceph client;
?Test client storage on Ceph;
?Troubleshooting notes;

?Prepare the base environment (the same user, hosts entries, and sudo rule on all four machines):
[root@dlp ~]# useradd dhhy
[root@dlp ~]# echo "dhhy" | passwd --stdin dhhy
[root@dlp ~]# cat <<END >> /etc/hosts
192.168.100.101 dlp
192.168.100.102 node1
192.168.100.103 node2
192.168.100.104 ceph-client
END
[root@dlp ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy
[root@dlp ~]# chmod 0440 /etc/sudoers.d/dhhy

[root@node1 ~]# useradd dhhy
[root@node1 ~]# echo "dhhy" | passwd --stdin dhhy
[root@node1 ~]# cat <<END >> /etc/hosts
192.168.100.101 dlp
192.168.100.102 node1
192.168.100.103 node2
192.168.100.104 ceph-client
END
[root@node1 ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy
[root@node1 ~]# chmod 0440 /etc/sudoers.d/dhhy

[root@node2 ~]# useradd dhhy
[root@node2 ~]# echo "dhhy" | passwd --stdin dhhy
[root@node2 ~]# cat <<END >> /etc/hosts
192.168.100.101 dlp
192.168.100.102 node1
192.168.100.103 node2
192.168.100.104 ceph-client
END
[root@node2 ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy
[root@node2 ~]# chmod 0440 /etc/sudoers.d/dhhy

[root@ceph-client ~]# useradd dhhy
[root@ceph-client ~]# echo "dhhy" | passwd --stdin dhhy
[root@ceph-client ~]# cat <<END >> /etc/hosts
192.168.100.101 dlp
192.168.100.102 node1
192.168.100.103 node2
192.168.100.104 ceph-client
END
[root@ceph-client ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy
[root@ceph-client ~]# chmod 0440 /etc/sudoers.d/dhhy

?Configure the NTP time service (dlp serves time; the other machines sync against it):
[root@dlp ~]# yum -y install ntp ntpdate
[root@dlp ~]# sed -i '/^server/s/^/#/g' /etc/ntp.conf
[root@dlp ~]# sed -i '25aserver 127.127.1.0\nfudge 127.127.1.0 stratum 8' /etc/ntp.conf
[root@dlp ~]# systemctl start ntpd
[root@dlp ~]# systemctl enable ntpd
[root@dlp ~]# netstat -utpln

[root@node1 ~]# yum -y install ntpdate
[root@node1 ~]# /usr/sbin/ntpdate 192.168.100.101
[root@node1 ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >> /etc/rc.local
[root@node1 ~]# chmod +x /etc/rc.local

[root@node2 ~]# yum -y install ntpdate
[root@node2 ~]# /usr/sbin/ntpdate 192.168.100.101
[root@node2 ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >> /etc/rc.local
[root@node2 ~]# chmod +x /etc/rc.local

[root@ceph-client ~]# yum -y install ntpdate
[root@ceph-client ~]# /usr/sbin/ntpdate 192.168.100.101
[root@ceph-client ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >> /etc/rc.local
[root@ceph-client ~]# chmod +x /etc/rc.local

?Install Ceph on the dlp, node1, node2, and client nodes;
[root@dlp ~]# yum -y install yum-utils
[root@dlp ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@dlp ~]# yum -y install epel-release --nogpgcheck
[root@dlp ~]# cat <<END > /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
END
[root@dlp ~]# ls /etc/yum.repos.d/    ## the default CentOS repos must still be present; installation needs them together with the epel repo and the 163 ceph repo
bak  CentOS-fasttrack.repo  ceph.repo  CentOS-Base.repo  CentOS-Media.repo  dl.fedoraproject.org_pub_epel_7_x86_64_.repo  CentOS-CR.repo  CentOS-Sources.repo  epel.repo  CentOS-Debuginfo.repo  CentOS-Vault.repo  epel-testing.repo
[dhhy@dlp ~]$ mkdir ceph-cluster    ## create the ceph working directory
[dhhy@dlp ~]$ cd ceph-cluster
[dhhy@dlp ceph-cluster]$ sudo yum -y install ceph-deploy    ## install the ceph deployment tool
[dhhy@dlp ceph-cluster]$ sudo yum -y install ceph --nogpgcheck    ## install the ceph packages
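The same EPEL and ceph.repo configuration is needed on node1, node2, and ceph-client. The transcripts below repeat the steps on each machine; as a hedged convenience sketch (assuming the /etc/hosts entries above and root SSH access to each machine), the repo file written on dlp could instead be pushed out in one loop:

# copy dlp's ceph repo definition to the remaining machines
for host in node1 node2 ceph-client; do
  scp /etc/yum.repos.d/ceph.repo root@$host:/etc/yum.repos.d/ceph.repo
done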
[root@node1 ~]# yum -y install yum-utils
[root@node1 ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@node1 ~]# yum -y install epel-release --nogpgcheck
Write the same /etc/yum.repos.d/ceph.repo on node1 as on dlp above (or push it out with the loop shown earlier), and confirm with ls /etc/yum.repos.d/ that the default CentOS repos, the epel repo, and the ceph repo are all present.
[dhhy@node1 ~]$ mkdir ceph-cluster
[dhhy@node1 ~]$ cd ceph-cluster
[dhhy@node1 ceph-cluster]$ sudo yum -y install ceph-deploy
[dhhy@node1 ceph-cluster]$ sudo yum -y install ceph --nogpgcheck
[dhhy@node1 ceph-cluster]$ sudo yum -y install deltarpm

[root@node2 ~]# yum -y install yum-utils
[root@node2 ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@node2 ~]# yum -y install epel-release --nogpgcheck
Write the same ceph.repo on node2 and verify the repo listing as before.
[dhhy@node2 ~]$ mkdir ceph-cluster
[dhhy@node2 ~]$ cd ceph-cluster
[dhhy@node2 ceph-cluster]$ sudo yum -y install ceph-deploy
[dhhy@node2 ceph-cluster]$ sudo yum -y install ceph --nogpgcheck
[dhhy@node2 ceph-cluster]$ sudo yum -y install deltarpm

[root@ceph-client ~]# yum -y install yum-utils
[root@ceph-client ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@ceph-client ~]# yum -y install epel-release --nogpgcheck
Write the same ceph.repo on ceph-client and verify the repo listing as before.
[root@ceph-client ~]# yum -y install yum-plugin-priorities
[root@ceph-client ~]# yum -y install ceph ceph-radosgw --nogpgcheck
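With the packages installed everywhere, it is worth confirming that all four machines ended up on the same Ceph release before any daemons are created. A small sketch, assuming password or key-based SSH to each machine:

# every machine should report the same jewel (10.2.x) version string
for host in dlp node1 node2 ceph-client; do
  echo -n "$host: "; ssh $host ceph --version
done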
?From the dlp node, manage the storage nodes: install and register node information;
[dhhy@dlp ceph-cluster]$ pwd    ## the current directory must be the ceph working directory
/home/dhhy/ceph-cluster
[dhhy@dlp ceph-cluster]$ ssh-keygen -t rsa    ## the admin node manages the mon nodes remotely, so create a key pair and copy the public key to each node
[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@dlp
[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@node1
[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@node2
[dhhy@dlp ceph-cluster]$ ssh-copy-id root@ceph-client
[dhhy@dlp ceph-cluster]$ cat <<END > ~/.ssh/config
Host dlp
    Hostname dlp
    User dhhy
Host node1
    Hostname node1
    User dhhy
Host node2
    Hostname node2
    User dhhy
END
[dhhy@dlp ceph-cluster]$ chmod 644 /home/dhhy/.ssh/config
[dhhy@dlp ceph-cluster]$ ceph-deploy new node1 node2    ## initialize the cluster; this generates ceph.conf with node1/node2 as the initial monitors
[dhhy@dlp ceph-cluster]$ cat <<END >> ceph.conf
osd pool default size = 2
END
[dhhy@dlp ceph-cluster]$ ceph-deploy install node1 node2    ## install ceph on the nodes

?Configure the Ceph mon monitor daemons;
[dhhy@dlp ceph-cluster]$ ceph-deploy mon create-initial    ## initialize the mon nodes
Note: each node keeps its configuration under /etc/ceph/, synchronized automatically from the dlp admin node;

?Configure the Ceph osd storage daemons;
Prepare the osd0 storage device on node1:
[dhhy@dlp ceph-cluster]$ ssh dhhy@node1
[dhhy@node1 ~]$ sudo fdisk /dev/sdb    ## create one primary partition spanning the disk: n, p, Enter, Enter, Enter, then p to print and w to write
[dhhy@node1 ~]$ sudo partx -a /dev/sdb
[dhhy@node1 ~]$ sudo mkfs -t xfs /dev/sdb1
[dhhy@node1 ~]$ sudo mkdir /var/local/osd0    ## the directory that will hold the OSD data
[dhhy@node1 ~]$ sudo vi /etc/fstab    ## append the following line, then save with :wq
/dev/sdb1 /var/local/osd0 xfs defaults 0 0
[dhhy@node1 ~]$ sudo mount -a
[dhhy@node1 ~]$ sudo chmod 777 /var/local/osd0
[dhhy@node1 ~]$ sudo chown ceph:ceph /var/local/osd0/
[dhhy@node1 ~]$ ls -ld /var/local/osd0/
[dhhy@node1 ~]$ df -hT
[dhhy@node1 ~]$ exit

Prepare the osd1 storage device on node2:
[dhhy@dlp ceph-cluster]$ ssh dhhy@node2
[dhhy@node2 ~]$ sudo fdisk /dev/sdb    ## same partitioning sequence as on node1
[dhhy@node2 ~]$ sudo partx -a /dev/sdb
[dhhy@node2 ~]$ sudo mkfs -t xfs /dev/sdb1
[dhhy@node2 ~]$ sudo mkdir /var/local/osd1
[dhhy@node2 ~]$ sudo vi /etc/fstab    ## append the following line, then save with :wq
/dev/sdb1 /var/local/osd1 xfs defaults 0 0
[dhhy@node2 ~]$ sudo mount -a
[dhhy@node2 ~]$ sudo chmod 777 /var/local/osd1
[dhhy@node2 ~]$ sudo chown ceph:ceph /var/local/osd1/
[dhhy@node2 ~]$ ls -ld /var/local/osd1/
[dhhy@node2 ~]$ df -hT
[dhhy@node2 ~]$ exit

Register the node OSDs from the dlp admin node:
[dhhy@dlp ceph-cluster]$ ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1    ## create the OSDs, pointing each at its data directory
[dhhy@dlp ceph-cluster]$ chmod +r /home/dhhy/ceph-cluster/ceph.client.admin.keyring
[dhhy@dlp ceph-cluster]$ ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1    ## activate the OSDs
[dhhy@dlp ceph-cluster]$ ceph-deploy admin node1 node2    ## copy the admin keyring and configuration to the nodes
[dhhy@dlp ceph-cluster]$ sudo cp /home/dhhy/ceph-cluster/ceph.client.admin.keyring /etc/ceph/
[dhhy@dlp ceph-cluster]$ sudo cp /home/dhhy/ceph-cluster/ceph.conf /etc/ceph/
[dhhy@dlp ceph-cluster]$ ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  rbdmap
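The `osd pool default size = 2` line appended to ceph.conf earlier means every object is kept as two replicas, which is why a two-OSD cluster can reach HEALTH_OK. A quick hedged check against the default rbd pool (assuming it exists, as it does on a jewel cluster):

# each pool created under this configuration should report two replicas
ceph osd pool get rbd size      # expected: size: 2
ceph osd pool get rbd pg_num    # the default rbd pool here carries 64 PGs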
[dhhy@dlp ceph-cluster]$ ceph quorum_status --format json-pretty    ## show detailed monitor quorum information
{
    "election_epoch": 4,
    "quorum": [ 0, 1 ],
    "quorum_names": [ "node1", "node2" ],
    "quorum_leader_name": "node1",
    "monmap": {
        "epoch": 1,
        "fsid": "dc679c6e-29f5-4188-8b60-e9eada80d677",
        "modified": "2018-06-02 23:54:34.033254",
        "created": "2018-06-02 23:54:34.033254",
        "mons": [
            { "rank": 0, "name": "node1", "addr": "192.168.100.102:6789\/0" },
            { "rank": 1, "name": "node2", "addr": "192.168.100.103:6789\/0" }
        ]
    }
}

?Verify the Ceph cluster status:
[dhhy@dlp ceph-cluster]$ ceph health
HEALTH_OK
[dhhy@dlp ceph-cluster]$ ceph -s    ## show the overall cluster status
    cluster 24fb6518-8539-4058-9c8e-d64e43b8f2e2
     health HEALTH_OK
     monmap e1: 2 mons at {node1=192.168.100.102:6789/0,node2=192.168.100.103:6789/0}
            election epoch 6, quorum 0,1 node1,node2
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v20: 64 pgs, 1 pools, 0 bytes data, 0 objects
            10305 MB used, 30632 MB / 40938 MB avail    ## used / free / total capacity
                  64 active+clean
[dhhy@dlp ceph-cluster]$ ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03897 root default
-2 0.01949     host node1
 0 0.01949         osd.0       up  1.00000          1.00000
-3 0.01949     host node2
 1 0.01949         osd.1       up  1.00000          1.00000
[dhhy@dlp ceph-cluster]$ ssh dhhy@node1    ## on node1, verify the listening ports, the config files, and disk usage
[dhhy@node1 ~]$ df -hT | grep sdb1
/dev/sdb1 xfs 20G 5.1G 15G 26% /var/local/osd0
[dhhy@node1 ~]$ du -sh /var/local/osd0/
5.1G /var/local/osd0/
[dhhy@node1 ~]$ ls /var/local/osd0/
activate.monmap active ceph_fsid current fsid journal keyring magic ready store_version superblock systemd type whoami
[dhhy@node1 ~]$ ls /etc/ceph/
ceph.client.admin.keyring ceph.conf rbdmap tmppVBe_2
[dhhy@node1 ~]$ cat /etc/ceph/ceph.conf
[global]
fsid = 0fcdfa46-c8b7-43fc-8105-1733bce3bfeb
mon_initial_members = node1, node2
mon_host = 192.168.100.102,192.168.100.103
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
[dhhy@node1 ~]$ exit
[dhhy@dlp ceph-cluster]$ ssh dhhy@node2    ## run the same checks on node2
[dhhy@node2 ~]$ df -hT | grep sdb1
/dev/sdb1 xfs 20G 5.1G 15G 26% /var/local/osd1
[dhhy@node2 ~]$ du -sh /var/local/osd1/
5.1G /var/local/osd1/
[dhhy@node2 ~]$ ls /var/local/osd1/
activate.monmap active ceph_fsid current fsid journal keyring magic ready store_version superblock systemd type whoami
[dhhy@node2 ~]$ ls /etc/ceph/
ceph.client.admin.keyring ceph.conf rbdmap tmpmB_BTa
[dhhy@node2 ~]$ cat /etc/ceph/ceph.conf
[global]
fsid = 0fcdfa46-c8b7-43fc-8105-1733bce3bfeb
mon_initial_members = node1, node2
mon_host = 192.168.100.102,192.168.100.103
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
[dhhy@node2 ~]$ exit

?Configure the Ceph mds metadata daemon;
[dhhy@dlp ceph-cluster]$ ceph-deploy mds create node1
[dhhy@dlp ceph-cluster]$ ssh dhhy@node1
[dhhy@node1 ~]$ netstat -utpln | grep 68    ## the mon listens on 6789, the osd/mds daemons on the 680x range
(No info could be read for "-p": geteuid()=1000 but you should be root.)
tcp 0 0 0.0.0.0:6800 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6801 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6802 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6803 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6804 0.0.0.0:* LISTEN -
tcp 0 0 192.168.100.102:6789 0.0.0.0:* LISTEN -
[dhhy@node1 ~]$ exit
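Besides the listening ports, the MDS registration can also be checked from dlp. A minimal sketch; note that until a file system is created in the next step, the daemon is expected to sit in standby:

# summarize the MDS map; the new daemon shows up:standby for now,
# and becomes up:active once cephfs is created below
ceph mds stat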
?Configure the Ceph client;
[dhhy@dlp ceph-cluster]$ ceph-deploy install ceph-client    ## if prompted for a password, enter dhhy
[dhhy@dlp ceph-cluster]$ ceph-deploy admin ceph-client
[dhhy@dlp ceph-cluster]$ ssh root@ceph-client
[root@ceph-client ~]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph-client ~]# exit
[dhhy@dlp ceph-cluster]$ ceph osd pool create cephfs_data 128    ## the data pool
pool 'cephfs_data' created
[dhhy@dlp ceph-cluster]$ ceph osd pool create cephfs_metadata 128    ## the metadata pool
pool 'cephfs_metadata' created
[dhhy@dlp ceph-cluster]$ ceph fs new cephfs cephfs_metadata cephfs_data    ## create the file system; the metadata pool is named first, then the data pool
new fs with metadata pool 2 and data pool 1
[dhhy@dlp ceph-cluster]$ ceph fs ls    ## list the file systems
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[dhhy@dlp ceph-cluster]$ ceph -s
    cluster 24fb6518-8539-4058-9c8e-d64e43b8f2e2
     health HEALTH_WARN
            clock skew detected on mon.node2
            too many PGs per OSD (320 > max 300)
            Monitor clock skew detected
     monmap e1: 2 mons at {node1=192.168.100.102:6789/0,node2=192.168.100.103:6789/0}
            election epoch 6, quorum 0,1 node1,node2
      fsmap e5: 1/1/1 up {0=node1=up:active}
     osdmap e17: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v54: 320 pgs, 3 pools, 4678 bytes data, 24 objects
            10309 MB used, 30628 MB / 40938 MB avail
                 320 active+clean
Both warnings are explainable: the 64 PGs of the default rbd pool plus the two new 128-PG pools make 320 PGs, and with two replicas spread over only two OSDs every PG lands on each OSD, above the default ceiling of 300 PGs per OSD; the clock skew is handled in the troubleshooting notes below.

?Test client storage;
[dhhy@dlp ceph-cluster]$ ssh root@ceph-client
[root@ceph-client ~]# mkdir /mnt/ceph
[root@ceph-client ~]# grep key /etc/ceph/ceph.client.admin.keyring | awk '{print $3}' >> /etc/ceph/admin.secret    ## extract the bare admin key into a secret file
[root@ceph-client ~]# cat /etc/ceph/admin.secret
AQCd/x9bsMqKFBAAZRNXpU5QstsPlfe1/FvPtQ==
[root@ceph-client ~]# mount -t ceph 192.168.100.102:6789:/ /mnt/ceph/ -o name=admin,secretfile=/etc/ceph/admin.secret
[root@ceph-client ~]# df -hT | grep ceph
192.168.100.102:6789:/ ceph 40G 11G 30G 26% /mnt/ceph
[root@ceph-client ~]# dd if=/dev/zero of=/mnt/ceph/1.file bs=1G count=1    ## write a 1 GB test file
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 14.2938 s, 75.1 MB/s
[root@ceph-client ~]# ls /mnt/ceph/
1.file
[root@ceph-client ~]# df -hT | grep ceph
192.168.100.102:6789:/ ceph 40G 13G 28G 33% /mnt/ceph
[root@ceph-client ~]# mkdir /mnt/ceph1
[root@ceph-client ~]# mount -t ceph 192.168.100.103:6789:/ /mnt/ceph1/ -o name=admin,secretfile=/etc/ceph/admin.secret    ## the same file system mounted via the other monitor
[root@ceph-client ~]# df -hT | grep ceph
192.168.100.102:6789:/ ceph 40G 15G 26G 36% /mnt/ceph
192.168.100.103:6789:/ ceph 40G 15G 26G 36% /mnt/ceph1
[root@ceph-client ~]# ls /mnt/ceph1/
1.file 2.file
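To survive a reboot, the CephFS mount can also be recorded in /etc/fstab on the client. A hedged sketch of the usual kernel-client entry, reusing the admin secret file created above (_netdev delays the mount until networking is up):

# /etc/fstab entry on ceph-client
192.168.100.102:6789:/  /mnt/ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0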
?Troubleshooting notes:
1. If something goes wrong during configuration and the cluster has to be recreated or ceph reinstalled, wipe the existing cluster data first:
[dhhy@dlp ceph-cluster]$ ceph-deploy purge node1 node2
[dhhy@dlp ceph-cluster]$ ceph-deploy purgedata node1 node2
[dhhy@dlp ceph-cluster]$ ceph-deploy forgetkeys && rm ceph.*
2. When the dlp node installs ceph onto the nodes and the client, yum may time out; this is usually caused by network problems, so simply rerun the install command a few times.
3. When driving the node configuration with ceph-deploy from dlp, the current directory must be /home/dhhy/ceph-cluster/, otherwise ceph-deploy complains that it cannot find the ceph.conf configuration file.
4. The OSD data directories /var/local/osd*/ must have mode 777 and be owned by user and group ceph.
5. If installing ceph from the dlp admin node fails (the error screenshot from the original is not reproduced here): first reinstall the epel-release package on node1 or node2 with yum; if that does not help, download the package and install it locally.
6. If the master configuration file /home/dhhy/ceph-cluster/ceph.conf changes on the dlp admin node, it must be pushed to the nodes, and the nodes must restart their daemons after receiving it (the exact commands were in screenshots that did not survive; see the sketch below).
7. If the cluster status on dlp shows a clock-skew warning, the cause is inconsistent time: restart the ntpd service on the dlp node and have the nodes sync time again (also covered in the sketch below).
8. Once more: when managing the nodes from dlp, always work from /home/dhhy/ceph-cluster/, or the ceph.conf master configuration file will not be found.
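For items 6 and 7 the original commands lived in screenshots; what follows is a hedged reconstruction using the standard jewel-era tooling, not the author's verified transcript:

# item 6: push the changed ceph.conf from dlp to the nodes, then restart their daemons
cd /home/dhhy/ceph-cluster
ceph-deploy --overwrite-conf config push node1 node2
ssh node1 'sudo systemctl restart ceph-mon.target ceph-osd.target'
ssh node2 'sudo systemctl restart ceph-mon.target ceph-osd.target'

# item 7: clear the clock-skew warning: restart ntpd on dlp, then re-sync the nodes
sudo systemctl restart ntpd                # on dlp
sudo /usr/sbin/ntpdate 192.168.100.101     # on node1 and node2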