

Ceph Distributed Storage

夜貓速讀 · Published 2022-06-17 in Hubei

I. Ceph Overview:

Overview: Ceph is a new-generation free-software distributed file system, designed and developed on the basis of Sage Weil's doctoral dissertation at the University of California, Santa Cruz. Its design goals are good scalability (to the PB level and beyond), high performance, and high reliability. The name Ceph relates to the mascot of UCSC (Ceph's birthplace), "Sammy" the banana-colored slug, and to the cephalopods, the shell-less molluscs; these many-tentacled animals are a vivid metaphor for the high degree of parallelism in a distributed file system. The design follows three principles: separation of data and metadata, dynamic distributed metadata management, and a reliable, unified distributed object storage mechanism.

二、基本架構(gòu):

1. Ceph is a highly available, easy-to-manage, open-source distributed storage system that can provide object storage, block storage, and file storage services in a single system. It consists mainly of RADOS, the core of the Ceph storage system, plus the block storage, object storage, and file system storage interfaces;

2. Storage types:

• Block storage:

Among common storage systems, DAS and SAN also provide block storage, as does OpenStack's Cinder service, for example iSCSI-based storage. Such an interface usually exists as a QEMU driver or a kernel module and implements either the Linux block device interface or the block driver interface provided by QEMU; examples include Sheepdog, AWS EBS, QingCloud cloud volumes, Alibaba Cloud's Pangu system, and Ceph's RBD (RBD is Ceph's block storage interface);

• Object storage:

The concept of object storage appeared relatively late. The storage standards organization SNIA defined it as early as 2004, but early on it appeared mostly in very large-scale systems, so it was not widely known and related products stayed lukewarm. Only when cloud computing and big data were promoted to the general public did it gradually enter the public eye. Block storage and file storage are still basically used inside dedicated local networks, whereas object storage's strength is the Internet and other public networks, where it mainly addresses massive data volumes and massive concurrent access. Internet-facing applications are the natural fit for object storage (the same holds for cloud computing: Internet-facing applications are the easiest to migrate to the cloud, because they were effectively already there before the word "cloud" existed). Virtually all mature public clouds, domestic and foreign, offer an object storage product, such as Swift or Amazon S3; Ceph's RADOSGW is its object storage interface;

?文件系統(tǒng)存儲:

與傳統(tǒng)的文件系統(tǒng)如 Ext4 是一個類型的,但區(qū)別在于分布式存儲提供了并行化的能力,如 Ceph 的 CephFS (CephFS是Ceph面向文件存儲的接口),但是有時候又會把 GlusterFS ,HDFS 這種非POSIX接口的類文件存儲接口歸入此類。當(dāng)然 NFS、NAS也是屬于文件系統(tǒng)存儲;

?總結(jié):對比;

3.Ceph基本架構(gòu):

三、架構(gòu)組件詳解:

• RADOS: the foundation that all of the other client interfaces are built and deployed on. It consists of the following components:
  OSD: Object Storage Device, which provides the actual data storage resources;
  Monitor: maintains heartbeat information for every node in the Ceph cluster and keeps the global state of the whole cluster;
  MDS: Ceph Metadata Server, the file system metadata service node. The MDS can also be deployed across multiple machines to make the metadata service highly available.

A typical RADOS deployment consists of a small number of Monitors and a large number of OSD storage devices. On top of a dynamically changing cluster of heterogeneous storage devices it provides a stable, scalable, high-performance, single logical object storage interface.

• Ceph client interfaces (Clients): in the Ceph architecture, everything layered on top of the underlying RADOS—LIBRADOS, RADOSGW, RBD, and Ceph FS—is collectively referred to as the Ceph client interfaces. In short, RADOSGW, RBD, and Ceph FS are developed on top of the multi-language programming interfaces provided by LIBRADOS, so the layers form a stepped, tiered relationship.
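
As a quick illustration of this layering, the rados command-line tool (itself built on librados) can read and write objects in a pool directly, below any of the higher-level interfaces. A minimal sketch, assuming a running cluster, an admin keyring under /etc/ceph/, and a pool named data; the pool and object names here are only examples:

[root@dlp ~]# echo "hello rados" > /tmp/hello.txt
[root@dlp ~]# rados -p data put hello-object /tmp/hello.txt        ## store the local file as object 'hello-object' in pool 'data'
[root@dlp ~]# rados -p data ls                                     ## list objects in the pool; 'hello-object' should appear
[root@dlp ~]# rados -p data get hello-object /tmp/hello.out        ## read the object back into a local file
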
1. RADOSGW: the Ceph object storage gateway, an object storage interface built on librados that exposes a RESTful interface to clients. Ceph currently supports two API flavors:
S3-compatible: an API compatible with most of the Amazon S3 RESTful API.
Swift-compatible: an API compatible with most of the OpenStack Swift API.
Ceph's object storage uses a gateway daemon (radosgw); the radosgw structure is shown in the figure:
2.RBD :一個數(shù)據(jù)塊是一個字節(jié)序列(例如,一個512字節(jié)的數(shù)據(jù)塊)。基于數(shù)據(jù)塊存儲接口最常見的介質(zhì),如硬盤,光盤,軟盤,甚至是傳統(tǒng)的9磁道的磁帶的方式來存儲數(shù)據(jù)。塊設(shè)備接口的普及使得虛擬塊設(shè)備成為構(gòu)建像Ceph海量數(shù)據(jù)存儲系統(tǒng)理想選擇。 在一個Ceph的集群中, Ceph的塊設(shè)備支持自動精簡配置,調(diào)整大小和存儲數(shù)據(jù)。Ceph的塊設(shè)備可以充分利用 RADOS功能,實現(xiàn)如快照,復(fù)制和數(shù)據(jù)一致性。Ceph的RADOS塊設(shè)備(即RBD)通過RADOS協(xié)議與內(nèi)核模塊或librbd的庫進行交互。。RBD的結(jié)構(gòu)如圖所示: 

3. Ceph FS: the Ceph file system (CephFS) is a POSIX-compatible file system that uses a Ceph storage cluster to store its data. The CephFS structure is shown in the figure:

Further reading: https://www.sohu.com/a/144775333_151779

四、Ceph數(shù)據(jù)存儲過程:

The Ceph storage cluster receives files from clients. Each file is split by the client into one or more objects, the objects are grouped, and the groups are then stored on the cluster's OSD nodes according to a given policy. The storage process is shown in the figure:
In the figure, distributing an object involves two stages of computation:
1. Mapping objects to PGs. A PG (Placement Group) is a logical collection of objects and the basic unit in which the system distributes data to OSD nodes; objects in the same PG are placed on the same OSD nodes (one primary OSD plus one or more replica OSDs). An object's PG is derived by hashing the object ID and combining the result with a few other correction parameters.
2. Mapping PGs to OSDs. Based on the current state of the system and the PG ID, RADOS uses the corresponding hash-based algorithm (CRUSH) to distribute each PG across the OSD cluster.

Ceph的優(yōu)勢: 
1
.Ceph的核心RADOS通常是由少量的負責(zé)集群管理的Monitor進程和大量的負責(zé)數(shù)據(jù)存儲的OSD進程構(gòu)成,采用無中心節(jié)點的分布式架構(gòu),對數(shù)據(jù)進行分塊多份存儲。具有良好的擴展性和高可用性。 
2. Ceph分布式文件系統(tǒng)提供了多種客戶端,包括對象存儲接口、塊存儲接口以及文件系統(tǒng)接口,具有廣泛的適用性,并且客戶端與存儲數(shù)據(jù)的OSD設(shè)備直接進行數(shù)據(jù)交互,大大提高了數(shù)據(jù)的存取性能。 
3.Ceph作為分布式文件系統(tǒng),其能夠在維護 POSIX 兼容性的同時加入了復(fù)制和容錯功能。從2010 年 3 月底,以及可以在Linux 內(nèi)核(從2.6.34版開始)中找到 Ceph 的身影,作為Linux的文件系統(tǒng)備選之一,Ceph.ko已經(jīng)集成入Linux內(nèi)核之中。Ceph 不僅僅是一個文件系統(tǒng),還是一個有企業(yè)級功能的對象存儲生態(tài)環(huán)境。

VI. Case Study: Building Ceph Distributed Storage;

案例環(huán)境:

System                     IP address        Hostname (login user)          Roles
CentOS 7.4 64-bit (1708)   192.168.100.101   dlp (dhhy)                     admin-node
CentOS 7.4 64-bit (1708)   192.168.100.102   node1 (dhhy), one extra disk   mon-node, osd0-node, mds-node
CentOS 7.4 64-bit (1708)   192.168.100.103   node2 (dhhy), one extra disk   mon-node, osd1-node
CentOS 7.4 64-bit (1708)   192.168.100.104   ceph-client (root)             ceph-client

Steps:

?配置基礎(chǔ)環(huán)境:

?配置ntp時間服務(wù);

?分別在dlp節(jié)點、node1、node2節(jié)點、client客戶端節(jié)點上安裝Ceph程序;

?在dlp節(jié)點管理node存儲節(jié)點,安裝注冊服務(wù),節(jié)點信息;

?配置Ceph的mon監(jiān)控進程;

?配置Ceph的osd存儲進程;

?驗證查看ceph集群狀態(tài)信息:

?配置Ceph的mds元數(shù)據(jù)進程;

?配置Ceph的client客戶端;

?測試Ceph的客戶端存儲;

?錯誤整理;

?配置基礎(chǔ)環(huán)境:

[root@dlp ~]# useradd dhhy

[root@dlp ~]# echo "dhhy" |passwd --stdin dhhy

[root@dlp ~]# cat <<END >>/etc/hosts

192.168.100.101 dlp

192.168.100.102 node1

192.168.100.103 node2

192.168.100.104 ceph-client

END

[root@dlp ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy

[root@dlp ~]# chmod 0440 /etc/sudoers.d/dhhy

[root@node1~]# useradd dhhy

[root@node1 ~]# echo "dhhy" |passwd --stdin dhhy

[root@node1 ~]# cat <<END >>/etc/hosts

192.168.100.101 dlp

192.168.100.102 node1

192.168.100.103 node2

192.168.100.104 ceph-client

END

[root@node1 ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy

[root@node1 ~]# chmod 0440 /etc/sudoers.d/dhhy

[root@node2 ~]# useradd dhhy

[root@node2 ~]# echo "dhhy" |passwd --stdin dhhy

[root@node2 ~]# cat <<END >>/etc/hosts

192.168.100.101 dlp

192.168.100.102 node1

192.168.100.103 node2

192.168.100.104 ceph-client

END

[root@node2 ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy

[root@node2 ~]# chmod 0440 /etc/sudoers.d/dhhy

[root@ceph-client ~]# useradd dhhy

[root@ceph-client ~]# echo "dhhy" |passwd --stdin dhhy

[root@ceph-client ~]# cat <<END >>/etc/hosts

192.168.100.101 dlp

192.168.100.102 node1

192.168.100.103 node2

192.168.100.104 ceph-client

END

[root@ceph-client ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy

[root@ceph-client ~]# chmod 0440 /etc/sudoers.d/dhhy

?配置ntp時間服務(wù);

[root@dlp ~]# yum -y install ntp ntpdate

[root@dlp ~]# sed -i '/^server/s/^/#/g' /etc/ntp.conf

[root@dlp ~]# sed -i '25aserver 127.127.1.0\nfudge 127.127.1.0 stratum 8' /etc/ntp.conf

[root@dlp ~]# systemctl start ntpd

[root@dlp ~]# systemctl enable ntpd

[root@dlp ~]# netstat -utpln

[root@node1 ~]# yum -y install ntpdate

[root@node1 ~]# /usr/sbin/ntpdate 192.168.100.101

[root@node1 ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >>/etc/rc.local

[root@node1 ~]# chmod +x /etc/rc.local

[root@node2 ~]# yum -y install ntpdate

[root@node2 ~]# /usr/sbin/ntpdate 192.168.100.101

[root@node2 ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >>/etc/rc.local

[root@node2 ~]# chmod +x /etc/rc.local

[root@ceph-client ~]# yum -y install ntpdate

[root@ceph-client ~]# /usr/sbin/ntpdate 192.168.100.101

[root@ceph-client ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >>/etc/rc.local

[root@ceph-client ~]# chmod +x /etc/rc.local

?分別在dlp節(jié)點、node1、node2節(jié)點、client客戶端節(jié)點上安裝Ceph;

[root@dlp ~]# yum -y install yum-utils                                  

[root@dlp ~]# yum-config-manager --add-repo https://dl./pub/epel/7/x86_64/

[root@dlp ~]# yum -y install epel-release --nogpgcheck

[root@dlp ~]# cat <<END >>/etc/yum.repos.d/ceph.repo

[Ceph]

name=Ceph packages for \$basearch

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

END

[root@dlp ~]# ls /etc/yum.repos.d/                                   ## the default official CentOS repos must still be present; installation only works with them combined with the EPEL repo and the NetEase (163) Ceph repo;

bak                    CentOS-fasttrack.repo  ceph.repo

CentOS-Base.repo       CentOS-Media.repo      dl._pub_epel_7_x86_64_.repo

CentOS-CR.repo         CentOS-Sources.repo    epel.repo

CentOS-Debuginfo.repo  CentOS-Vault.repo      epel-testing.repo

[root@dlp ~]# su - dhhy

[dhhy@dlp ~]$ mkdir ceph-cluster                                                        ## create the ceph working directory

[dhhy@dlp ~]$ cd ceph-cluster

[dhhy@dlp ceph-cluster]$ sudo yum -y install ceph-deploy                     ## install the ceph-deploy management tool

[dhhy@dlp ceph-cluster]$ sudo yum -y install ceph --nogpgcheck       ## install the main ceph packages

[root@node1 ~]# yum -y install yum-utils

[root@node1 ~]# yum-config-manager --add-repo https://dl./pub/epel/7/x86_64/

[root@node1 ~]# yum -y install epel-release --nogpgcheck

[root@node1 ~]# cat <<END >>/etc/yum.repos.d/ceph.repo

[Ceph]

name=Ceph packages for \$basearch

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

END

[root@node1 ~]# ls /etc/yum.repos.d/                                   ## the default official CentOS repos must still be present; installation only works with them combined with the EPEL repo and the NetEase (163) Ceph repo;

bak                    CentOS-fasttrack.repo  ceph.repo

CentOS-Base.repo       CentOS-Media.repo      dl._pub_epel_7_x86_64_.repo

CentOS-CR.repo         CentOS-Sources.repo    epel.repo

CentOS-Debuginfo.repo  CentOS-Vault.repo      epel-testing.repo

[root@node1 ~]# su - dhhy

[dhhy@node1 ~]$ mkdir ceph-cluster

[dhhy@node1~]$ cd ceph-cluster

[dhhy@node1 ceph-cluster]$ sudo yum -y install ceph-deploy

[dhhy@node1 ceph-cluster]$ sudo yum -y install ceph --nogpgcheck

[dhhy@node1 ceph-cluster]$ sudo yum -y install deltarpm

[root@node2 ~]# yum -y install yum-utils

[root@node2 ~]# yum-config-manager --add-repo https://dl./pub/epel/7/x86_64/

[root@node2 ~]# yum -y install epel-release --nogpgcheck

[root@node2 ~]# cat <<END >>/etc/yum.repos.d/ceph.repo

[Ceph]

name=Ceph packages for \$basearch

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

END

[root@node2 ~]# ls /etc/yum.repos.d/                                   ## the default official CentOS repos must still be present; installation only works with them combined with the EPEL repo and the NetEase (163) Ceph repo;

bak                    CentOS-fasttrack.repo  ceph.repo

CentOS-Base.repo       CentOS-Media.repo      dl._pub_epel_7_x86_64_.repo

CentOS-CR.repo         CentOS-Sources.repo    epel.repo

CentOS-Debuginfo.repo  CentOS-Vault.repo      epel-testing.repo

[root@node2 ~]# su - dhhy

[dhhy@node2 ~]$ mkdir ceph-cluster

[dhhy@node2 ~]$ cd ceph-cluster

[dhhy@node2 ceph-cluster]$ sudo yum -y install ceph-deploy

[dhhy@node2 ceph-cluster]$ sudo yum -y install ceph --nogpgcheck

[dhhy@node2 ceph-cluster]$ sudo yum -y install deltarpm

[root@ceph-client ~]# yum -y install yum-utils

[root@ceph-client ~]# yum-config-manager --add-repo https://dl./pub/epel/7/x86_64/

[root@ceph-client ~]# yum -y install epel-release --nogpgcheck

[root@ceph-client ~]# cat <<END >>/etc/yum.repos.d/ceph.repo

[Ceph]

name=Ceph packages for \$basearch

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

END

[root@ceph-client ~]# ls /etc/yum.repos.d/                                   ## the default official CentOS repos must still be present; installation only works with them combined with the EPEL repo and the NetEase (163) Ceph repo;

bak                    CentOS-fasttrack.repo  ceph.repo

CentOS-Base.repo       CentOS-Media.repo      dl._pub_epel_7_x86_64_.repo

CentOS-CR.repo         CentOS-Sources.repo    epel.repo

CentOS-Debuginfo.repo  CentOS-Vault.repo      epel-testing.repo

[root@ceph-client ~]# yum -y install yum-plugin-priorities

[root@ceph-client ~]# yum -y install ceph ceph-radosgw --nogpgcheck

?在dlp節(jié)點管理node存儲節(jié)點,安裝注冊服務(wù),節(jié)點信息;

[dhhy@dlp ceph-cluster]$ pwd                                                 ## the current directory must be the ceph-cluster working directory

/home/dhhy/ceph-cluster

[dhhy@dlp ceph-cluster]$ ssh-keygen -t rsa                                   ## the admin node manages the mon nodes remotely, so create a key pair and copy the public key to each node

[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@dlp

[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@node1

[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@node2

[dhhy@dlp ceph-cluster]$ ssh-copy-id root@ceph-client

[dhhy@dlp ceph-cluster]$ cat <<END >>/home/dhhy/.ssh/config

Host dlp

   Hostname dlp

   User dhhy

Host node1

   Hostname node1

   User dhhy

Host node2

   Hostname node2

   User dhhy

END

[dhhy@dlp ceph-cluster]$ chmod 644 /home/dhhy/.ssh/config

[dhhy@dlp ceph-cluster]$ ceph-deploy new node1 node2                                   ## initialize the new cluster with node1 and node2 as mon nodes

[dhhy@dlp ceph-cluster]$ cat <<END >>/home/dhhy/ceph-cluster/ceph.conf

osd pool default size = 2

END

[dhhy@dlp ceph-cluster]$ ceph-deploy install node1 node2                            ## install ceph on the node hosts

?配置Ceph的mon監(jiān)控進程;

[dhhy@dlp ceph-cluster]$ ceph-deploy mon create-initial                                   ## initialize the mon nodes

注解:node節(jié)點的配置文件在/etc/ceph/目錄下,會自動同步dlp管理節(jié)點的配置文件;

• Configure the Ceph osd (storage) daemons;

配置node1節(jié)點的osd0存儲設(shè)備:

[dhhy@dlp ceph-cluster]$ ssh dhhy@node1                                   ## log in to node1 to prepare the directory that will hold the osd data

[dhhy@node1 ~]$ sudo fdisk /dev/sdb

n  p  Enter  Enter  Enter  p  w          (create one primary partition spanning the whole disk, print it, then write the partition table)

[dhhy@node1 ~]$ sudo partx -a /dev/sdb

[dhhy@node1 ~]$ sudo mkfs -t xfs /dev/sdb1

[dhhy@node1 ~]$ sudo mkdir /var/local/osd0

[dhhy@node1 ~]$ sudo vi /etc/fstab

/dev/sdb1 /var/local/osd0 xfs defaults 0 0

:wq

[dhhy@node1 ~]$ sudo mount -a

[dhhy@node1 ~]$ sudo chmod 777 /var/local/osd0

[dhhy@node1 ~]$ sudo chown ceph:ceph /var/local/osd0/

[dhhy@node1 ~]$ ls -ld /var/local/osd0/

[dhhy@node1 ~]$ df -hT

[dhhy@node1 ~]$ exit
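
The interactive fdisk dialog used above on node1 (and again on node2 below) can also be done non-interactively; a sketch assuming /dev/sdb is empty and dedicated to the OSD:

[dhhy@node1 ~]$ sudo parted -s /dev/sdb mklabel msdos mkpart primary xfs 1MiB 100%        ## label the disk and create one primary partition spanning it
[dhhy@node1 ~]$ sudo partx -a /dev/sdb                                                    ## re-read the partition table
[dhhy@node1 ~]$ sudo mkfs -t xfs /dev/sdb1                                                ## format the new partition as before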

配置node2節(jié)點的osd1存儲設(shè)備:

[dhhy@dlp ceph-cluster]$ ssh dhhy@node2

[dhhy@node2 ~]$ sudo fdisk /dev/sdb

n  p  Enter  Enter  Enter  p  w          (create one primary partition spanning the whole disk, print it, then write the partition table)

[dhhy@node2 ~]$ sudo partx -a /dev/sdb

[dhhy@node2 ~]$ sudo mkfs -t xfs /dev/sdb1

[dhhy@node2 ~]$ sudo mkdir /var/local/osd1

[dhhy@node2 ~]$ sudo vi /etc/fstab

/dev/sdb1 /var/local/osd1 xfs defaults 0 0

:wq

[dhhy@node2 ~]$ sudo mount -a

[dhhy@node2 ~]$ sudo chmod 777 /var/local/osd1

[dhhy@node2 ~]$ sudo chown ceph:ceph /var/local/osd1/

[dhhy@node2~]$ ls -ld /var/local/osd1/

[dhhy@node2 ~]$ df -hT

[dhhy@node2 ~]$ exit

dlp管理節(jié)點注冊node節(jié)點:

[dhhy@dlp ceph-cluster]$ ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1       ## prepare the osd nodes, specifying where each stores its data

[dhhy@dlp ceph-cluster]$ chmod +r /home/dhhy/ceph-cluster/ceph.client.admin.keyring

[dhhy@dlp ceph-cluster]$ ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1

       ##激活ods節(jié)點

[dhhy@dlp ceph-cluster]$ ceph-deploy admin node1 node2              ## copy the admin keyring file to the node hosts

[dhhy@dlp ceph-cluster]$ sudo cp /home/dhhy/ceph-cluster/ceph.client.admin.keyring /etc/ceph/

[dhhy@dlp ceph-cluster]$ sudo cp /home/dhhy/ceph-cluster/ceph.conf /etc/ceph/

[dhhy@dlp ceph-cluster]$ ls /etc/ceph/

ceph.client.admin.keyring  ceph.conf  rbdmap

[dhhy@dlp ceph-cluster]$ ceph quorum_status --format json-pretty              ## view detailed Ceph cluster quorum information

{

    "election_epoch": 4,

    "quorum": [

        0,

        1

    ],

    "quorum_names": [

        "node1",

        "node2"

    ],

    "quorum_leader_name": "node1",

    "monmap": {

        "epoch": 1,

        "fsid": "dc679c6e-29f5-4188-8b60-e9eada80d677",

        "modified": "2018-06-02 23:54:34.033254",

        "created": "2018-06-02 23:54:34.033254",

        "mons": [

            {

                "rank": 0,

                "name": "node1",

                "addr": "192.168.100.102:6789\/0"

            },

            {

                "rank": 1,

                "name": "node2",

                "addr": "192.168.100.103:6789\/0"

            }

        ]

    }

}

• Verify and view the Ceph cluster status:

[dhhy@dlp ceph-cluster]$ ceph health

HEALTH_OK

[dhhy@dlp ceph-cluster]$ ceph -s                                          ## view the Ceph cluster status

    cluster 24fb6518-8539-4058-9c8e-d64e43b8f2e2

     health HEALTH_OK

     monmap e1: 2 mons at {node1=192.168.100.102:6789/0,node2=192.168.100.103:6789/0}

            election epoch 6, quorum 0,1 node1,node2

     osdmap e10: 2 osds: 2 up, 2 in

            flags sortbitwise,require_jewel_osds

      pgmap v20: 64 pgs, 1 pools, 0 bytes data, 0 objects

            10305 MB used, 30632 MB / 40938 MB avail              ## used, free, and total capacity

                  64 active+clean

[dhhy@dlp ceph-cluster]$ ceph osd tree

ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1 0.03897 root default                                    

-2 0.01949     host node1                                  

 0 0.01949         osd.0       up  1.00000          1.00000

-3 0.01949     host node2                                  

 1 0.01949         osd.1       up  1.00000          1.00000

[dhhy@dlp ceph-cluster]$ ssh dhhy@node1                                          ## verify node1's listening ports, configuration file, and disk usage

[dhhy@node1 ~]$ df -hT |grep sdb1

/dev/sdb1                   xfs        20G  5.1G   15G   26% /var/local/osd0          

[dhhy@node1 ~]$ du -sh /var/local/osd0/

5.1G       /var/local/osd0/

[dhhy@node1 ~]$ ls /var/local/osd0/

activate.monmap  active  ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  systemd  type  whoami

[dhhy@node1 ~]$ ls /etc/ceph/

ceph.client.admin.keyring  ceph.conf  rbdmap  tmppVBe_2

[dhhy@node1 ~]$ cat /etc/ceph/ceph.conf

[global]

fsid = 0fcdfa46-c8b7-43fc-8105-1733bce3bfeb

mon_initial_members = node1, node2

mon_host = 192.168.100.102,192.168.100.103

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

osd pool default size = 2

[dhhy@node1 ~]$ exit

[dhhy@dlp ceph-cluster]$ ssh dhhy@node2                                          ## verify node2's listening ports, configuration file, and disk usage

[dhhy@node2 ~]$ df -hT |grep sdb1

/dev/sdb1                   xfs        20G  5.1G   15G   26% /var/local/osd1

[dhhy@node2 ~]$ du -sh /var/local/osd1/

5.1G       /var/local/osd1/

[dhhy@node2 ~]$ ls /var/local/osd1/

activate.monmap  active  ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  systemd  type  whoami

[dhhy@node2 ~]$ ls /etc/ceph/

ceph.client.admin.keyring  ceph.conf  rbdmap  tmpmB_BTa

[dhhy@node2 ~]$ cat /etc/ceph/ceph.conf

[global]

fsid = 0fcdfa46-c8b7-43fc-8105-1733bce3bfeb

mon_initial_members = node1, node2

mon_host = 192.168.100.102,192.168.100.103

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

osd pool default size = 2

[dhhy@node2 ~]$ exit

• Configure the Ceph mds (metadata) daemon;

[dhhy@dlp ceph-cluster]$ ceph-deploy mds create node1

[dhhy@dlp ceph-cluster]$ ssh dhhy@node1

[dhhy@node1 ~]$ netstat -utpln |grep 68

(No info could be read for "-p": geteuid()=1000 but you should be root.)

tcp        0      0 0.0.0.0:6800            0.0.0.0:*               LISTEN      -                  

tcp        0      0 0.0.0.0:6801            0.0.0.0:*               LISTEN      -                  

tcp        0      0 0.0.0.0:6802            0.0.0.0:*               LISTEN      -                  

tcp        0      0 0.0.0.0:6803            0.0.0.0:*               LISTEN      -                  

tcp        0      0 0.0.0.0:6804            0.0.0.0:*               LISTEN      -                  

tcp        0      0 192.168.100.102:6789    0.0.0.0:*               LISTEN      -

[dhhy@node1 ~]$ exit
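
The MDS can also be checked from the admin node; a quick sketch (at this point no CephFS file system has been created yet, so the daemon reports itself as standby; after the file system is created below it should show node1 as up:active):

[dhhy@dlp ceph-cluster]$ ceph mds stat        ## show the current MDS map status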

• Configure the Ceph client;

[dhhy@dlp ceph-cluster]$ ceph-deploy install ceph-client                            ## when prompted for a password, enter dhhy

[dhhy@dlp ceph-cluster]$ ceph-deploy admin ceph-client

[dhhy@dlp ceph-cluster]$ ssh root@ceph-client

[root@ceph-client ~]# chmod +r /etc/ceph/ceph.client.admin.keyring

[root@ceph-client ~]# exit

[dhhy@dlp ceph-cluster]$ ceph osd pool create cephfs_data 128              ## data pool

pool 'cephfs_data' created

[dhhy@dlp ceph-cluster]$ ceph osd pool create cephfs_metadata 128       ## metadata pool

pool 'cephfs_metadata' created

[dhhy@dlp ceph-cluster]$ ceph fs new cephfs cephfs_metadata cephfs_data       ## create the file system (the metadata pool is given first, then the data pool)

new fs with metadata pool 2 and data pool 1

[dhhy@dlp ceph-cluster]$ ceph fs ls                                                 ## list file systems

name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

[dhhy@dlp ceph-cluster]$ ceph -s

    cluster 24fb6518-8539-4058-9c8e-d64e43b8f2e2

     health HEALTH_WARN

            clock skew detected on mon.node2

            too many PGs per OSD (320 > max 300)

            Monitor clock skew detected

     monmap e1: 2 mons at {node1=192.168.100.102:6789/0,node2=192.168.100.103:6789/0}

            election epoch 6, quorum 0,1 node1,node2

      fsmap e5: 1/1/1 up {0=node1=up:active}

     osdmap e17: 2 osds: 2 up, 2 in

            flags sortbitwise,require_jewel_osds

      pgmap v54: 320 pgs, 3 pools, 4678 bytes data, 24 objects

            10309 MB used, 30628 MB / 40938 MB avail

                 320 active+clean

• Test client storage;

[dhhy@dlp ceph-cluster]$ ssh root@ceph-client

[root@ceph-client ~]# mkdir /mnt/ceph

[root@ceph-client ~]# grep key /etc/ceph/ceph.client.admin.keyring |awk '{print $3}' >>/etc/ceph/admin.secret

[root@ceph-client ~]# cat /etc/ceph/admin.secret

AQCd/x9bsMqKFBAAZRNXpU5QstsPlfe1/FvPtQ==

[root@ceph-client ~]# mount -t ceph 192.168.100.102:6789:/  /mnt/ceph/ -o name=admin,secretfile=/etc/ceph/admin.secret

[root@ceph-client ~]# df -hT |grep ceph

192.168.100.102:6789:/      ceph       40G   11G   30G   26% /mnt/ceph

[root@ceph-client ~]# dd if=/dev/zero of=/mnt/ceph/1.file bs=1G count=1

1+0 records in

1+0 records out

1073741824 bytes (1.1 GB) copied, 14.2938 s, 75.1 MB/s

[root@ceph-client ~]# ls /mnt/ceph/

1.file
[root@ceph-client ~]# df -hT |grep ceph

192.168.100.102:6789:/      ceph       40G   13G   28G   33% /mnt/ceph

[root@ceph-client ~]# mkdir /mnt/ceph1

[root@ceph-client ~]# mount -t ceph 192.168.100.103:6789:/  /mnt/ceph1/ -o name=admin,secretfile=/etc/ceph/admin.secret

[root@ceph-client ~]# df -hT |grep ceph

192.168.100.102:6789:/      ceph       40G   15G   26G   36% /mnt/ceph

192.168.100.103:6789:/      ceph       40G   15G   26G   36% /mnt/ceph1

[root@ceph-client ~]# ls /mnt/ceph1/

1.file  2.file
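
To make the client mount persist across reboots, an entry can be added to /etc/fstab; a minimal sketch assuming the same admin secret file as above (_netdev delays the mount until the network is up):

[root@ceph-client ~]# cat <<END >>/etc/fstab
192.168.100.102:6789:/  /mnt/ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0
END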

• Troubleshooting notes:

1. If something goes wrong during configuration and you need to recreate the cluster or reinstall ceph, wipe the existing Ceph cluster data with the following commands;

[dhhy@dlp ceph-cluster]$ ceph-deploy purge node1 node2

[dhhy@dlp ceph-cluster]$ ceph-deploy purgedata node1 node2

[dhhy@dlp ceph-cluster]$ ceph-deploy forgetkeys && rm ceph.*

2.dlp節(jié)點為node節(jié)點和客戶端安裝ceph時,會出現(xiàn)yum安裝超時,大多由于網(wǎng)絡(luò)問題導(dǎo)致,可以多執(zhí)行幾次安裝命令;

3.dlp節(jié)點指定ceph-deploy命令管理node節(jié)點配置時,當(dāng)前所在目錄一定是/home/dhhy/ceph-cluster/,不然會提示找不到ceph.conf的配置文件;

4.osd節(jié)點的/var/local/osd*/存儲數(shù)據(jù)實體的目錄權(quán)限必須為777,并且屬主和屬組必須為ceph;

5. 在dlp管理節(jié)點安裝ceph時出現(xiàn)以下問題

解決方法:

1.重新yum安裝node1或者node2的epel-release軟件包;

2.如若還無法解決,將軟件包下載,使用以下命令進行本地安裝;
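
A minimal sketch of such a local install, assuming the epel-release RPM has already been downloaded into the current directory (the file name pattern is only an example and depends on the version fetched):

[root@node1 ~]# yum -y localinstall ./epel-release-*.rpm        ## install the downloaded package from the local file, pulling any dependencies from the configured repos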

6. If the main configuration file /home/dhhy/ceph-cluster/ceph.conf on the dlp admin node changes, push it to the node hosts, for example with the command below:
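
A minimal sketch, run from the working directory on the dlp node; --overwrite-conf lets ceph-deploy replace the existing /etc/ceph/ceph.conf on the target hosts:

[dhhy@dlp ceph-cluster]$ ceph-deploy --overwrite-conf config push node1 node2        ## push the local ceph.conf to the node hosts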

node節(jié)點收到配置文件后,需要重新啟動進程:

7.在dlp管理節(jié)點查看ceph集群狀態(tài)時,出現(xiàn)如下,原因是因為時間不一致所導(dǎo)致;

解決方法:將dlp節(jié)點的ntpd時間服務(wù)重新啟動,node節(jié)點再次同步時間即可,如下所示:
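
A minimal sketch:

[root@dlp ~]# systemctl restart ntpd                              ## restart the NTP server on the admin node
[root@node1 ~]# /usr/sbin/ntpdate 192.168.100.101                 ## re-sync node1 against the dlp NTP server
[root@node2 ~]# /usr/sbin/ntpdate 192.168.100.101                 ## re-sync node2 against the dlp NTP server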

8.在dlp管理節(jié)點進行管理node節(jié)點時,所處的位置一定是/home/dhhy/ceph-cluster/,不然會提示找不到ceph.conf主配置文件;

    轉(zhuǎn)藏 分享 獻花(0

    0條評論

    發(fā)表

    請遵守用戶 評論公約

    類似文章 更多