Blog

CEPH - Part 5 - Ceph Configuration on Ubuntu

Tags: ceph, cloud, Configuration, crush, Storage

Published on: December 28, 2015 by Scott S


Scenario:

We have been discussing how Ceph can implement an indefinitely scalable storage system in my previous blogs (Parts 1, 2, 3 and 4). Now it is the right time to configure it and achieve that scalable storage system. In this article I will guide you through the steps to configure Ceph on Ubuntu LTS.

CEPH Configuration On Ubuntu 14.04.2 LTS Trusty

All Ceph configurations need a properly configured Ceph Storage Cluster first. That is, whether you want to provide Ceph Filesystem, Ceph Object Storage and/or Ceph Block Device services to cloud platforms, or use Ceph for another purpose, it all begins with setting up the Ceph Storage Cluster.

Before configuring the Ceph Storage Cluster, we need to perform certain preflight configurations on all the nodes that will be used to implement Ceph. The preflight configurations mentioned below are based on the scenario described in the diagram.

Preflight  Configurations

I’m using a 4-node scenario to explain the basic configuration, as shown in the diagram below. Here, a node refers to a single machine.

[Diagram: admin-node (ceph-deploy), node1 (mon.node1), node2 (osd.0) and node3 (osd.1)]

This setup consists of a ceph-deploy admin node (the node used to deploy, or push, Ceph configurations to the other nodes in the cluster) and three Ceph Nodes (or virtual machines) that will host your Ceph Storage Cluster.

In the diagram, admin-node, node1, node2 and node3 represent the hostnames of the nodes. The host admin-node is used to install and configure the ceph-deploy utility, node1 is used to configure the Ceph Monitor (mon.node1), and node2 and node3 are used to configure the Ceph OSD daemons osd.0 and osd.1 respectively.

ceph-deploy (Admin – Node)

The admin-node, as mentioned before, is used to push the Ceph configurations remotely to all the Ceph Nodes in the cluster.

Log in to the admin-node (ceph-deploy) as root and execute the following commands.

Add the release key:

root@ceph-deploy:~# wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -

Add the Ceph package repository to your APT sources:


root@ceph-deploy:~# echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

Replace {ceph-stable-release} with a stable Ceph release (e.g., cuttlefish, dumpling, emperor, firefly, etc.).

For example:

 root@ceph-deploy:~# echo deb http://ceph.com/debian-giant/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list 

Update your repository:

 

root@ceph-deploy:~# apt-get update

Install ceph-deploy:

root@ceph-deploy:~# apt-get install ceph-deploy

As mentioned earlier, ceph-deploy is an application/script that helps to push ceph configurations to other nodes in the cluster, without configuring each nodes independently.

Storage Cluster Nodes (Node1, Node2, Node3):

The configurations mentioned below need to be done on all three Ceph Nodes (mon.node1, osd.0 and osd.1), but not on the admin-node.

The admin-node must have password-less SSH access to all Ceph Nodes in order to push the configurations automatically.

When the ceph-deploy utility logs in to a Ceph Node as a user, that particular user must have passwordless sudo privileges. The other configurations include configuring NTP and the firewall, creating users, etc., which are discussed below.

 Configuring NTP

It is recommended to install NTP on Ceph Nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift.

Keeping the time in sync on Ceph Nodes is important, as the Ceph Monitor nodes use time intervals to check the status of the cluster and of each OSD, and a delay or clock skew can cause the cluster to fall out of the active+clean state.

On a terminal execute:

 sudo apt-get install ntp 

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the same NTP time server.

Note: It is best to use a local NTP server to sync the time on all the nodes in a Ceph Storage Cluster. To use a local NTP server, edit /etc/ntp.conf and set the server directive to use the ceph-admin node as the NTP server. Restart the ntp service for the changes to take effect.
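As a minimal sketch of that setup, assuming the admin-node doubles as the local NTP server (the hostname admin-node comes from the diagram above; the public pool server is only an optional fallback and an assumption of this example):

# /etc/ntp.conf on node1, node2 and node3
server admin-node iburst
# optional public fallback in case the local server is unreachable
server 0.ubuntu.pool.ntp.org iburst

sudo service ntp restart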

Install SSH Server

For ALL Ceph Nodes (mon.node1, osd.0 and osd.1) perform the following steps:

Install an SSH server on each Ceph Node:

 sudo apt-get install openssh-server 

Ensure the SSH server is running on ALL Ceph Nodes.
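A quick way to confirm this on Ubuntu 14.04, where the service is named ssh:

sudo service ssh status
# start it if it is not already running
sudo service ssh start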

Create a Ceph User

The ceph-deploy utility needs to install software and configuration files on the remote storage cluster nodes without prompting for passwords, so it must be able to log in to each Ceph Node as a user that has passwordless sudo privileges.

Note: Recent versions of ceph-deploy support a --username option so you can specify any user that has password-less sudo (including root, although this is NOT recommended). To use ceph-deploy --username {username}, the user you specify must have password-less SSH access to the Ceph node, as ceph-deploy will not prompt you for a password.

I recommend creating a common Ceph user on ALL Ceph nodes in the cluster.

A uniform user name across the cluster may improve ease of use (it is not required), but you should avoid obvious user names, since attackers commonly target them with brute-force attempts (e.g., root, admin, {productname}).

The following procedure, substituting {username} for the user name you define, describes how to create a user with passwordless sudo.

Create the user on each Ceph Node (mon.node1, osd.0 and osd.1).

#ssh user@node
#sudo useradd -d /home/{username} -m {username}
#sudo passwd {username}

For the user you added to each Ceph node, ensure that the user has sudo privileges.

#echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}

#sudo chmod 0440 /etc/sudoers.d/{username}
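To verify the setup, log in to each node as the new user and run sudo non-interactively; the -n flag makes sudo fail instead of prompting, so the command only prints OK if passwordless sudo really works (a simple check, not part of the official procedure):

{username}@node1:~$ sudo -n true && echo "passwordless sudo OK"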

Enable Password-less SSH

Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.

We first need to log in to the ceph-deploy node and generate the SSH keys, then copy the SSH ID to the other three nodes (node1, node2 and node3).

One thing to keep in mind is that we should not use the root user or the sudo command here, so log in to the ceph-deploy node as a normal/regular system user. This is because we run the ceph-deploy utility as a regular user to push the Ceph configurations to the other nodes.

Generate the SSH keys, but do not use sudo or the root user.

Leave the passphrase empty. (Here I’m using the user “deployuser”, which I created solely for this purpose on all the nodes.)

deployuser@ceph-deploy:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/deployuser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/deployuser/.ssh/id_rsa.
Your public key has been saved in /home/deployuser/.ssh/id_rsa.pub.

Copy the key to each Ceph Node, replacing {username} with the user name you created while creating the Ceph User.

deployuser@ceph-deploy:~$ ssh-copy-id {username}@node1
deployuser@ceph-deploy:~$ ssh-copy-id {username}@node2
deployuser@ceph-deploy:~$ ssh-copy-id {username}@node3

Example:

deployuser@ceph-deploy:~$ ssh-copy-id deployuser@node1

deployuser@ceph-deploy:~$ ssh-copy-id deployuser@node2

deployuser@ceph-deploy:~$ ssh-copy-id deployuser@node3

Once this is done, the user deployuser on the admin-node can SSH to node1, node2 and node3 without being prompted for a password.
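Optionally, you can also add these hosts to ~/.ssh/config on the admin node so that ceph-deploy logs in with the right user without needing --username every time (a sketch, assuming the deployuser account created above):

Host node1
    Hostname node1
    User deployuser
Host node2
    Hostname node2
    User deployuser
Host node3
    Hostname node3
    User deployuser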

Enable Networking On Bootup

Ceph OSDs peer with each other and report to Ceph Monitors over the network. If networking is OFF by default, the Ceph cluster cannot come online during bootup until you enable networking.

Ubuntu 14.04 has networking enabled on startup by default, so usually no extra work is needed here.
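If you want to double-check, the boot-time network configuration on Ubuntu 14.04 lives in /etc/network/interfaces; a minimal sketch is shown below (the interface name eth0 and the use of DHCP are assumptions, adjust to your environment):

# /etc/network/interfaces on each Ceph Node
# bring the interface up automatically at boot
auto eth0
# DHCP here; use a static stanza if your network requires it
iface eth0 inet dhcp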

Ensure Connectivity

Ensure connectivity by pinging the short hostnames (hostname -s) of the other nodes. Address hostname resolution issues as necessary; you can do this by adding host entries for the corresponding machines to the /etc/hosts file. The goal is that the ceph-deploy utility can use these hostnames in its commands to communicate with the other Ceph Nodes.

Note: Hostnames should resolve to a network IP address, not to the loopback IP address (e.g., hostnames should resolve to an IP address other than 127.0.0.1). If you use your admin node as a Ceph node, you should also ensure that it resolves to its hostname and IP address (i.e., not its loopback IP address).
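A sketch of the /etc/hosts entries on every node, using the hostnames from the diagram (the 192.168.1.x addresses are placeholders, substitute your own):

192.168.1.10    admin-node
192.168.1.11    node1
192.168.1.12    node2
192.168.1.13    node3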

Open Required Ports

Ceph Monitors communicate on port 6789 by default.

Ceph OSDs communicate in the port range 6800:7300 by default.

If using IPTABLES as the firewall use the following commands to open the required ports:

Add port 6789 on the Ceph Monitor node [node1 (mon.node1)] and ports 6800:7300 on the Ceph OSD nodes (osd.0 on node2 and osd.1 on node3).

For example:

root@node1:~# iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT

root@node2:~# iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT

root@node3:~# iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT

Once you have finished configuring iptables, ensure that you make the changes persistent on each node so that they remain in effect when your nodes reboot. Note that iptables itself has no --save option on Ubuntu; the current rule set is dumped with:

iptables-save
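On Ubuntu 14.04, one common way to make that dump survive a reboot is the iptables-persistent package, which saves the rules to /etc/iptables/rules.v4 and restores them at boot (a sketch; adjust if you manage your firewall differently):

sudo apt-get install iptables-persistent
# re-save the current rules after any later change
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'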

Now all the nodes have the basic configurations to get started with the cluster configuration.

DEPLOYING THE CEPH STORAGE CLUSTER

Once you have completed your Preflight configurations, you can start with the storage cluster deployment using the ceph-deploy utility on your admin node. Create a three Ceph Node cluster so you can explore Ceph functionality.

A Ceph Storage Cluster requires at least one Ceph Monitor and at least two Ceph OSD Daemons.

  • Ceph OSDs: A Ceph OSD Daemon (Ceph OSD) stores data, handles data replication, recovery, backfilling, rebalancing, and provides some monitoring information to Ceph Monitors by checking other Ceph OSD Daemons for a heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to achieve an active + clean state when the cluster makes two copies of your data (Ceph makes 2 copies by default, but you can adjust it).
  • Monitors: A Ceph Monitor maintains maps of the cluster state, including the monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map. Ceph maintains a history (called an “epoch”) of each state change in the Ceph Monitors, Ceph OSD Daemons, and PGs.

Deploying a Ceph Storage Cluster has been divided into three parts:

1) Setting up each Ceph Node

2) The Network

3) Ceph Storage Cluster.

Here, I’m creating a Ceph Storage Cluster with one Ceph Monitor and two Ceph OSD Daemons. Once the cluster reaches an active + clean state, we can expand it by adding a third Ceph OSD Daemon, a Metadata Server and two more Ceph Monitors.

The best practice is to create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster. So we begin by creating the necessary directory on the ceph-deploy admin node.

deployuser@ceph-deploy:~$ mkdir my-cluster
deployuser@ceph-deploy:~$ cd my-cluster

The ceph-deploy utility will output files to the current directory. Ensure that your working directory is my-cluster (the directory created in the step above) whenever you execute the ceph-deploy command.

Note:

Do not run  ceph-deploy with sudo or run it as root if you are logged in as a different user, because it will not issue sudo commands needed on the remote host.

CREATING THE CLUSTER

On your admin-node, from the directory you created for holding your configuration details (my-cluster), perform the following steps using ceph-deploy.

  1. Create the cluster.

    
    deployuser@ceph-deploy:~/my-cluster$ ceph-deploy new {initial-monitor-node(s)}
    
    

    For example:

    deployuser@ceph-deploy:~/my-cluster$ ceph-deploy new node1

    Check the output of ceph-deploy with ls and cat in the current directory. You should see a Ceph configuration file, a monitor secret keyring, and a log file for the new cluster.

  2. Change the default number of replicas in the Ceph configuration file from 3 to 2 so that Ceph can achieve an active + clean state with just two Ceph OSDs. Add the following line under the [global] section of ceph.conf file in my-cluster directory:

    osd pool default size = 2
    
  3. Install CEPH.

    deployuser@ceph-deploy:~/my-cluster$ ceph-deploy install {ceph-node}[{ceph-node} ...]
    

    For example:

    deployuser@ceph-deploy:~/my-cluster$ ceph-deploy install admin-node node1 node2 node3

    The  ceph-deploy utility will install Ceph on each node.

    NOTE: If you use ceph-deploy purge,  you must re-execute this step to re-install Ceph.

  4. Add the initial monitor(s) and gather the keys:

     deployuser@ceph-deploy:~/my-cluster$ ceph-deploy mon create-initial
    

    Once you complete the process, your local directory should have the following keyrings:

    • {cluster-name}.client.admin.keyring
    • {cluster-name}.bootstrap-osd.keyring
    • {cluster-name}.bootstrap-mds.keyring
    • {cluster-name}.bootstrap-rgw.keyring

Note: The bootstrap-rgw keyring is only created during installation of clusters running Hammer or newer.

Add two OSDs.

 

For fast setup, I’m using a directory rather than an entire disk per Ceph OSD Daemon.

Login to the Ceph OSD Nodes and create a directory for the Ceph OSD Daemon.

deployuser@node2:~$ sudo mkdir /var/local/osd0
deployuser@node3:~$ sudo mkdir /var/local/osd1 

Then, from your admin node, use ceph-deploy to prepare the OSDs.

ceph-deploy osd prepare {ceph-node}:/path/to/directory

For example:

deployuser@ceph-deploy:~/my-cluster$ ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

Finally, activate the OSDs.

ceph-deploy osd activate {ceph-node}:/path/to/directory

For example:

deployuser@ceph-deploy:~/my-cluster$ ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring  each time you execute a command.

ceph-deploy admin {admin-node} {ceph-node}

For example:

deployuser@ceph-deploy:~/my-cluster$ ceph-deploy admin admin-node node1 node2 node3

When ceph-deploy is talking to the local admin host (admin-node), it must be reachable by its hostname. If necessary, modify /etc/hosts to add the name of the admin host.

Ensure that you have the correct permissions for the ceph.client.admin.keyring.

deployuser@ceph-deploy:~/my-cluster$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

 

Check your cluster’s health.

deployuser@ceph-deploy:~/my-cluster$ ceph health

Your cluster should return an active + clean state when it has finished peering.

Once this is done, you have an active Ceph Storage Cluster ready to use, with two OSDs and one Monitor.
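To take a closer look at the cluster you just built, a couple of read-only commands are handy (run here from the admin node, which already has the admin keyring):

deployuser@ceph-deploy:~/my-cluster$ ceph -s
deployuser@ceph-deploy:~/my-cluster$ ceph osd tree

The first prints an overall status summary (monitors, OSDs and placement groups); the second shows the OSDs and the hosts they live on.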

Storing/Retrieving Object Data

To store object data in the Ceph Storage Cluster, a Ceph client must:

  1. Set an object name
  2. Specify a pool

The Ceph Client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to a Ceph OSD Daemon dynamically. To find the object location, all you need is the object name and the pool name.

For example:

deployuser@ceph-deploy:~/my-cluster$ ceph osd map {poolname} {object-name}

Let’s create an object. Specify an object name, a path to a test file containing some object data, and a pool name, using the rados put command on the command line.

For example:

deployuser@ceph-deploy:~/my-cluster$ echo {Test-data} > testfile.txt

deployuser@ceph-deploy:~/my-cluster$ rados put {object-name} {file-path} --pool=data

deployuser@ceph-deploy:~/my-cluster$ rados put test-object-1 testfile.txt --pool=data
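If rados reports that the pool data does not exist (recent releases no longer create it by default), create it first; the placement group count of 128 used here is just a reasonable value for a small test cluster, not a requirement:

deployuser@ceph-deploy:~/my-cluster$ ceph osd pool create data 128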

 

To verify that the Ceph Storage Cluster stored the object, execute the following:

 deployuser@ceph-deploy:~/my-cluster$ rados -p data ls 

Now, identify the object location:

deployuser@ceph-deploy:~/my-cluster$ ceph osd map {pool-name} {object-name}
deployuser@ceph-deploy:~/my-cluster$ ceph osd map data test-object-1

Ceph should output the object’s location.

For example:

osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]

To remove the test object, simply delete it using the rados rm command.

For example:

deployuser@ceph-deploy:~/my-cluster$ rados rm test-object-1 --pool=data

Expanding Your Cluster

Once you have a basic cluster up and running, the next step is to expand the cluster. Add a Ceph OSD Daemon to node1, then add Ceph Monitors to node2 and node3 to establish a quorum of three Ceph Monitors.

[Diagram: expanded cluster with OSD daemons on all three nodes and monitors on node1, node2 and node3]

Adding an OSD

Since we are running a 3-node cluster for demonstration purposes, add the OSD to the monitor node (node1). Create a directory for it first:

deployuser@node1:~$ sudo mkdir /var/local/osd2

Then, from your ceph-deploy node, prepare the OSD.

deployuser@ceph-deploy:~/my-cluster$ ceph-deploy osd prepare {ceph-node}:/path/to/directory

For example:

deployuser@ceph-deploy:~/my-cluster$ ceph-deploy osd prepare node1:/var/local/osd2

Finally, activate the OSDs.

deployuser@ceph-deploy:~/my-cluster$ ceph-deploy osd activate {ceph-node}:/path/to/directory

For example:

deployuser@ceph-deploy:~/my-cluster$ ceph-deploy osd activate node1:/var/local/osd2

Once you have added your new OSD, Ceph will begin rebalancing the cluster by migrating placement groups to your new OSD. You can observe this process with the ceph CLI.

 deployuser@ceph-deploy:~/my-cluster$ ceph -w 

You should see the placement group states change from active+clean to active with some degraded objects, and finally active+clean when migration completes. (Control-c to exit.)

Adding Monitors

A Ceph Storage Cluster requires at least one Ceph Monitor to run. For high availability, Ceph Storage Clusters typically run multiple Ceph Monitors so that the failure of a single Ceph Monitor will not bring down the Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority of the monitors to be running in order to form a quorum (for example, 1 of 1, 2 of 3, 3 of 4, 3 of 5, 4 of 6, and so on).

Add two Ceph Monitors to your cluster to make a quorum.

 deployuser@ceph-deploy:~/my-cluster$ ceph-deploy mon add {ceph-node} 

 

For example:

deployuser@ceph-deploy:~/my-cluster$ ceph-deploy mon add node2 node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing the monitors and form a quorum.

You can check the quorum status by executing the following:

deployuser@ceph-deploy:~/my-cluster$ ceph quorum_status --format json-pretty

Now you have a cluster with three OSDs and three Monitors.

Block Device Configuration

To configure block devices, make sure that the cluster is in active+clean state.

Do not execute the following procedures on the same physical node as your Ceph Storage Cluster nodes (unless you use a VM).

I’m using a separate, independent node (ceph-client) for the Ceph Block Device. The first step is to install Ceph on the ceph-client node.

1. Use the ceph-deploy utility on the admin node to install Ceph on the ceph-client node.

deployuser@ceph-deploy:~/my-cluster$ ceph-deploy install ceph-client

2. Use ceph-deploy to copy the Ceph configuration file and the ceph.client.admin.keyring to the ceph-client.

deployuser@ceph-deploy:~/my-cluster$ ceph-deploy admin ceph-client

The ceph-deploy utility copies the keyring to the /etc/ceph directory.

Ensure that the keyring file has appropriate read permissions (e.g., sudo chmod +r /etc/ceph/ceph.client.admin.keyring).

The second step is to configure the block device.

1. On the ceph-client node execute the following command to create a block device image.

 deployuser@ceph-client:~$ rbd create foo --size 4096 [-m {mon-IP}] -k /home/deployuser/my-cluster/ceph.client.admin.keyring 

2. Once the image is created, map the image to a block device.

deployuser@ceph-client:~$ sudo rbd map foo --pool rbd --name client.admin [-m {mon-IP}] -k /home/deployuser/my-cluster/ceph.client.admin.keyring

3. Use the block device by creating a file system on the ceph-client node.

deployuser@ceph-client:~$ sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo

4. Mount the file system on the ceph-client node

deployuser@ceph-client:~$ sudo mkdir /mnt/ceph-block-device
deployuser@ceph-client:~$ sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
deployuser@ceph-client:~$ cd /mnt/ceph-block-device

That’s it. You have mounted the Ceph Block Device at /mnt/ceph-block-device.


Category: Howtos, Linux

Scott S

Scott follows his heart and enjoys design and implementation of advanced, sophisticated enterprise solutions. His never ending passion towards technological advancements, unyielding affinity to perfection and excitement in exploration of new areas, helps him to be on the top of everything he is involved with. This amateur bike stunting expert probably loves cars and bikes much more than his family. He currently spearheads the Enterprise Solutions and Infrastructure Consultancy wing of SupportSages.
