Deploy Ceph Hammer on Centos 7

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. Ceph was made possible by a global community of passionate storage engineers and researchers.

Ceph is open source and freely-available, and it always will be.

Ceph – the architectural overview

[Figure: Ceph architecture overview]

Ceph Monitors (MON): Ceph monitors track the health of the entire cluster by keeping a map of the cluster state. It maintains a separate map of information for each component, which includes an OSD map, MON map, PG map (discussed in later chapters), and CRUSH map. All the cluster nodes report to Monitor nodes and share information about every change in their state. The monitor does not store actual data; this is the job of the OSD.
Ceph Object Storage Device (OSD): As soon as your application issues a write operation to the Ceph cluster, data is stored in the OSD in the form of objects. This is the only component of the Ceph cluster where actual user data is stored, and the same data is retrieved when the client issues a read operation. Usually, one OSD daemon is tied to one physical disk in your cluster, so in general the total number of physical disks in your Ceph cluster is the same as the number of OSD daemons working underneath to store user data on each physical disk.
Ceph Metadata Server (MDS): The MDS keeps track of the file hierarchy and stores metadata only for the CephFS filesystem. The Ceph block device and RADOS gateway do not require metadata, so they do not need the Ceph MDS daemon. The MDS does not serve data directly to clients, thus removing the single point of failure from the system.
RADOS: The Reliable Autonomic Distributed Object Store (RADOS) is the foundation of the Ceph storage cluster. Everything in Ceph is stored in the form of objects, and the RADOS object store is responsible for storing these objects irrespective of their data types. The RADOS layer makes sure that data always remains consistent. To do this, it performs data replication, failure detection and recovery, as well as data migration and rebalancing across cluster nodes.
librados: The librados library is a convenient way to gain access to RADOS, with support for the PHP, Ruby, Java, Python, C, and C++ programming languages. It provides a native interface for the Ceph storage cluster (RADOS), as well as a base for other services such as RBD, RGW, and CephFS, which are built on top of librados. librados also supports direct access to RADOS from applications with no HTTP overhead.
RADOS Block Devices (RBDs): RBDs, now known as the Ceph block device, provide persistent block storage that is thin-provisioned and resizable, and that stores data striped over multiple OSDs. The RBD service has been built as a native interface on top of librados.
RADOS Gateway interface (RGW): RGW provides object storage service. It uses librgw (the Rados Gateway Library) and librados, allowing applications to establish connections with the Ceph object storage. The RGW provides RESTful APIs with interfaces that are compatible with Amazon S3 and OpenStack Swift.
CephFS: The Ceph File System provides a POSIX-compliant file system that uses the Ceph storage cluster to store user data. Like RBD and RGW, the CephFS service is also implemented as a native interface on top of librados.
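To make the RADOS/librados layer above a little more concrete, the rados command-line tool (built on librados) can store and fetch raw objects in a pool. This is only a sketch to try once the cluster deployed below is up and healthy; it assumes the default rbd pool exists, and the object and file names are arbitrary examples.
echo "hello ceph" > /tmp/hello.txt
rados -p rbd put hello-object /tmp/hello.txt
rados -p rbd ls
rados -p rbd get hello-object /tmp/hello-copy.txt
rados -p rbd rm hello-object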
Deploy – Server
#######################################################################
1. Install 3 nodes
Install CentOS 7 (CentOS-7-x86_64-Minimal-1511.iso) with 4 HDDs of 100 GB each. Install the OS on /dev/sda, and configure the public and private IP addresses and DNS.
hostnamectl set-hostname CEPH01
hostnamectl set-hostname CEPH02 ( for node2 )
hostnamectl set-hostname CEPH03 ( for node3 )
Set DNS 8.8.8.8
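If the minimal install did not configure networking, one way is to edit the interface file directly. This is only a sketch for CEPH01, assuming the public interface is named eth0 and a hypothetical gateway of 10.0.0.1; adjust interface names and addresses to your environment.
vi /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.0.0.4
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
DNS1=8.8.8.8
systemctl restart network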
yum update -y
yum install firewalld -y
systemctl start firewalld
Add firewall rules
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-all
Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Edit /etc/hosts
 
10.0.0.4 CEPH01
10.0.0.5 CEPH02
10.0.0.6 CEPH03
Configure time synchronization (NTP)
yum install ntp ntpdate ntp-doc -y
ntpdate pool.ntp.org
systemctl restart ntpdate.service
systemctl restart ntpd.service
systemctl enable ntpd.service
systemctl enable ntpdate.service
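Optionally, verify that the node is actually syncing against its time sources (ntpq ships with the ntp package installed above):
ntpq -p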
Installing and configuring Ceph
 
sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
vi /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-hammer/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-hammer/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-hammer/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
yum update && sudo yum install ceph-deploy
 
[ceph@CEPH01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
79:ae:79:d2:e2:a4:3c:bd:ab:f9:87:e6:10:18:3a:5b ceph@CEPH01
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|    .            |
|   . o   .       |
|  o E . S .      |
|   +   . o       |
|  .   ...o.      |
|     ..=*o+      |
|      =BOB       |
+-----------------+
 
Deploy server
From the ceph (root) user on CEPH01:
ssh-copy-id ceph@CEPH02
ssh-copy-id ceph@CEPH03
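Optionally, a per-user SSH config saves ceph-deploy from needing the user name on every call; a minimal sketch, assuming the deployment user is ceph on all three nodes:
vi ~/.ssh/config
Host CEPH02
    Hostname CEPH02
    User ceph
Host CEPH03
    Hostname CEPH03
    User ceph
chmod 600 ~/.ssh/config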
 
mkdir /etc/ceph
cd /etc/ceph
From node 1 (CEPH01):
ceph-deploy new CEPH01 CEPH02 CEPH03
ceph-deploy install CEPH01 CEPH02 CEPH03
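ceph-deploy new writes an initial ceph.conf into the working directory. It should look roughly like the sketch below; the fsid is generated per cluster, so the value here is only a placeholder.
[global]
fsid = <generated-uuid>
mon_initial_members = CEPH01, CEPH02, CEPH03
mon_host = 10.0.0.4,10.0.0.5,10.0.0.6
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx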
 
vi /usr/lib/python2.7/site-packages/ceph_deploy/gatherkeys.py
On line 78, change 'mds', 'allow *' to 'mds', 'allow'
ceph-deploy mon create-initial
ceph -s
ceph-deploy disk zap CEPH01:sdb CEPH01:sdc CEPH01:sdd
ceph-deploy osd create CEPH01:sdb CEPH01:sdc CEPH01:sdd 
ceph-deploy disk zap CEPH02:sdb CEPH02:sdc CEPH02:sdd
ceph-deploy osd create CEPH02:sdb CEPH02:sdc CEPH02:sdd
ceph-deploy disk zap CEPH03:sdb CEPH03:sdc CEPH03:sdd
ceph-deploy osd create CEPH03:sdb CEPH03:sdc CEPH03:sdd 
 
sudo /etc/init.d/ceph start osd
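At this point it is worth confirming that all nine OSDs (three per node) are up and in:
ceph osd tree
ceph osd stat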
 
ceph-deploy admin CEPH01 CEPH02 CEPH03
Scaling up your Ceph cluster
 
From CEPH01:
Edit /etc/ceph/ceph.conf and add:
public network = 10.0.0.0/24
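After editing ceph.conf, push the updated file to the other nodes so they all share the same configuration; a sketch using ceph-deploy from the working directory on CEPH01:
ceph-deploy --overwrite-conf config push CEPH01 CEPH02 CEPH03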
ceph -s
ceph mon stat
ceph-deploy disk list CEPH02 CEPH03
ceph -s or ceph status
ceph osd pool set rbd pg_num 512
ceph osd pool set rbd pgp_num 512
ceph osd pool create {pool-name} pg_num
It is mandatory to choose a value for pg_num because it cannot be calculated automatically. Here are a few commonly used values:
  • Less than 5 OSDs set pg_num to 128
  • Between 5 and 10 OSDs set pg_num to 512
  • Between 10 and 50 OSDs set pg_num to 4096
  • If you have more than 50 OSDs, you need to understand the tradeoffs and how to calculate the pg_num value by yourself
  • To calculate the pg_num value yourself, use the PGCalc tool:
  • http://ceph.com/pgcalc/
As the number of OSDs increases, choosing the right value for pg_num becomes more important because it has a significant influence on the behavior of the cluster as well as the durability of the data when something goes wrong (i.e. the probability that a catastrophic event leads to data loss).
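As a worked example for this guide: with 9 OSDs (3 disks on each of 3 nodes) and the default replication factor of 3, the commonly cited rule of thumb gives:
Total PGs = (OSDs x 100) / replicas = (9 x 100) / 3 = 300, rounded up to the next power of two = 512
which is why pg_num and pgp_num were set to 512 for the rbd pool above.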
Check the status of Ceph using the following commands:
ceph -w
ceph quorum_status --format json-pretty
ceph mon dump
ceph df
ceph mon stat
ceph osd stat
ceph pg stat
ceph pg dump
ceph osd lspools
ceph osd tree
ceph auth list
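As a final sanity check, a small test pool and block image can exercise the RBD interface described at the beginning of this guide; the pool and image names below are arbitrary examples, and the size is in megabytes.
ceph osd pool create testpool 128
rbd create testpool/testimage --size 1024
rbd ls testpool
rbd info testpool/testimage
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it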