Friday, February 15, 2019

ceph-ansible: Installing Ceph

My previous post deals with using Vagrant to set up CentOS based test systems on which to install Ceph. There, I created 7 virtual machines:
a) a machine to run ansible commands on,
b) three monitors (mons) and
c) three object storage devices (OSDs), each of which has an additional 5 GB block device available to the OSD daemon.

Since I use Fedora 29 as my host system, I do not need the ansible vm, so I disabled it by commenting out its line in the machine array in the Vagrantfile from that post.

The steps detailed below can be run either directly on the host machine or, with suitable modifications, on the ansible vm.

Install Ansible on your vm or your host machine.
$ sudo dnf install -y ansible
..
$ rpm -q ansible
ansible-2.7.5-1.fc29.noarch

Obtain the latest ceph-ansible using git
$ git clone https://github.com/ceph/ceph-ansible.git

I wanted to install the "Luminous" version of Ceph. According to the ceph-ansible documentation at
http://docs.ceph.com/ceph-ansible/master/#releases
I need the stable-3.2 branch of ceph-ansible. This will only work with Ansible version 2.6.
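
To confirm that the branch actually exists in the fresh clone before switching to it (a quick check of my own, not part of the documented steps):

$ git -C ceph-ansible branch -r | grep stable-3.2
  origin/stable-3.2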

The commands above show that we have ansible 2.7.5-1, so we need to downgrade the package.

$ sudo dnf downgrade ansible
..
$ rpm -q ansible
ansible-2.6.5-1.fc29.noarch
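
To keep a later "dnf update" from pulling ansible back up to 2.7, you can optionally pin the package with the dnf versionlock plugin. This step is my own addition and is not required by ceph-ansible:

# install the versionlock plugin, then lock ansible at the current version
$ sudo dnf install -y 'dnf-command(versionlock)'
$ sudo dnf versionlock add ansible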

We now have to go into the ceph-ansible directory and switch to the stable-3.2 branch. I then like to create a branch of my own to hold the configuration files I need.


$ cd ceph-ansible
$ git checkout stable-3.2
..
$ git checkout -b ceph-test
Switched to a new branch 'ceph-test'

To get ceph-ansible to work, I've also had to separately install python3-pyyaml and python3-notario.

$ sudo dnf install python3-pyyaml
$ sudo dnf install python3-notario
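
As a quick sanity check of my own, you can confirm that both modules are importable by the Python 3 interpreter; the command prints nothing if both are available:

$ python3 -c 'import yaml, notario'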

We are now ready to configure ceph-ansible and start the installation.

First, create the hosts file listing the machines you would like to use.

[mons]
mon1
mon2
mon3

[osds]
osd1
osd2
osd3
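
Before going any further, it is worth verifying that ansible can reach all six machines over SSH as root. Assuming the hostnames above resolve to the VMs (for example via entries in /etc/hosts), an ad-hoc run of the ping module will confirm connectivity:

$ ansible all -i hosts -u root -m ping
..

Each host should report SUCCESS with "ping": "pong" in its output.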

Then create group_vars/all.yml with the following content:


ceph_origin: 'repository'
ceph_repository: community
ceph_stable_release: luminous
public_network: "192.168.145.0/24"
monitor_interface: eth1
journal_size: 1024
devices:
  - /dev/vdb
osd_scenario: lvm

I use the community repository at http://download.ceph.com to download the Luminous release.
More information is available at http://docs.ceph.com/ceph-ansible/master/installation/methods.html

ceph_origin: 'repository'
ceph_repository: community
ceph_stable_release: luminous


I create the test machines with private addresses in the 192.168.145.0/24 subnet. These addresses are assigned to eth1 on my KVM based test machines.

public_network: "192.168.145.0/24"
monitor_interface: eth1

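Interface names like eth1 are not guaranteed on every image, so it is worth confirming that the monitors really do have an address in this subnet on eth1. Another check of my own, using an ad-hoc command:

$ ansible mons -i hosts -u root -m command -a 'ip -4 addr show eth1'
..

Each monitor should show an inet address within 192.168.145.0/24.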

The block devices for the OSDs are created as /dev/vdb on these KVM based test machines.

devices:
  - /dev/vdb

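Similarly, you can confirm that each OSD machine presents the spare disk as /dev/vdb before handing the device list to ceph-ansible (again, a sanity check of mine):

$ ansible osds -i hosts -u root -m command -a 'lsblk /dev/vdb'
..

The device should appear with its 5 GB capacity and no existing partitions.
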
The following line is required by this version of ceph-ansible and describes how ceph-volume sets up the OSDs on the devices. More information is available at http://docs.ceph.com/ceph-ansible/master/osds/scenarios.html


osd_scenario: lvm

You can look through group_vars/all.yml.sample to see the various configuration options available to you.

Copy the sample playbook over to site.yml.

$ cp site.yml.sample site.yml

Make sure that the test machines mon1, mon2, mon3, osd1, osd2 and osd3 have been started up using "vagrant up". You can now start deploying Ceph on the machines with the command

$ ansible-playbook -i hosts -u root site.yml
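
Incidentally, ansible-playbook can also validate the playbook without touching the machines, which is a cheap check to run before a long deployment (my own habit, not a ceph-ansible requirement):

$ ansible-playbook -i hosts -u root site.yml --syntax-check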

This takes several minutes, at the end of which you have a Ceph cluster installed on your test virtual machines.

At this point, you can optionally commit the changes to the git repo so that you can continue to experiment with various settings and roll back to a known working configuration if needed.
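
A minimal commit on the ceph-test branch created earlier might look like this; the file list and commit message are just examples:

$ git add hosts group_vars/all.yml site.yml
$ git commit -m 'Working Luminous test cluster configuration'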

Next, ssh into mon1 as root and run the "ceph health" and "ceph status" commands
[root@mon1 ~]# ceph health
HEALTH_WARN no active mgr
[root@mon1 ~]# ceph status
  cluster:
    id:     9de96055-aba6-4837-ac8e-12156bb7335c
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum mon1,mon2,mon3
    mgr: no daemons active
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:    
 
The warning message "no active mgr" is seen because, since Luminous, Ceph requires the ceph-mgr daemon to run alongside its monitor daemons. It provides additional monitoring and an interface through which external monitoring tools can query the system.

We will cover the installation of the ceph-mgr daemon in the next post.

