Friday, February 15, 2019

ceph-ansible: Installing Ceph

My previous post deals with using Vagrant to install CentOS based test systems on which to install Ceph. There, I created 7 virtual machines which include
a) a machine to run ansible commands on,
b) three monitors (mons) and
c) three Object Storage Devices (OSDs), each of which has an additional 5GB block device available to the OSD daemon.

Since I use Fedora 29 as my host system, I do not need the ansible vm and have disabled it by commenting out its line in the nodes array in the Vagrantfile I posted in the previous post.

The steps detailed below can be run either directly on the host machine or if needed, on the ansible vm with suitable modifications.

Install Ansible on your vm or your host machine.
$ sudo dnf install -y ansible
..
$ rpm -q ansible
ansible-2.7.5-1.fc29.noarch

Obtain the latest ceph-ansible using git
$ git clone https://github.com/ceph/ceph-ansible.git

I wanted to install the "Luminous" version of Ceph. According to the ceph-ansible documentation at
http://docs.ceph.com/ceph-ansible/master/#releases
I need the stable-3.2 branch of ceph-ansible. This will only work with Ansible version 2.6.

From the commands above, we have the 2.7.5-1 version of ansible. We need to downgrade our ansible package.

$ sudo dnf downgrade ansible
..
$ rpm -q ansible
ansible-2.6.5-1.fc29.noarch
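
To keep a later "dnf update" from pulling ansible back up to 2.7, you can optionally pin the package with the dnf versionlock plugin. This is an extra step I am adding here, not something ceph-ansible requires:

$ sudo dnf install -y 'dnf-command(versionlock)'
$ sudo dnf versionlock add ansible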

We now have to go into the ceph-ansible directory and change to the stable-3.2 branch. I then like to create a branch of my own with the configuration files I need.


$ cd ceph-ansible
$ git checkout stable-3.2
..
$ git checkout -b ceph-test
Switched to a new branch 'ceph-test'

To get ceph-ansible to work, I've also had to separately install python3-pyyaml and python3-notario.

$ sudo dnf install python3-pyyaml
$ sudo dnf install python3-notario
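
A quick way to check that both modules are importable by the python3 interpreter that ansible will use:

$ python3 -c 'import yaml, notario; print("ok")'
ok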

We are now ready to configure ceph-ansible to start the installation.

First create the hosts file containing the machines you would like to use.

[mons]
mon1
mon2
mon3

[osds]
osd1
osd2
osd3
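
Before moving on, it is worth checking that ansible can reach all six machines as root over ssh. A quick ping using the hosts file we just created should return "pong" from each machine:

$ ansible all -i hosts -u root -m ping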

Then create group_vars/all.yml with the content


ceph_origin: 'repository'
ceph_repository: community
ceph_stable_release: luminous
public_network: "192.168.145.0/24"
monitor_interface: eth1
journal_size: 1024
devices:
  - /dev/vdb
osd_scenario: lvm

I use the community repository at http://download.ceph.com to download the Luminous release.
More information at http://docs.ceph.com/ceph-ansible/master/installation/methods.html

ceph_origin: 'repository'
ceph_repository: community
ceph_stable_release: luminous


I create the test machines with private addresses in the 192.168.145.0/24 subnet. These addresses are assigned to eth1 on my KVM based test machines.

public_network: "192.168.145.0/24"
monitor_interface: eth1
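
If you are unsure which interface carries the private network on your test machines, you can check on one of the monitors (assuming the vms are already up). The address shown should be in the 192.168.145.0/24 range:

$ ssh root@mon1 ip -4 addr show eth1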


The block devices for the OSDs are created as /dev/vdb on these KVM based test machines.

devices:
  - /dev/vdb
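
You can confirm that the extra disk is visible as /dev/vdb on an OSD node with lsblk:

$ ssh root@osd1 lsblk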

The following line was needed for this version of ceph-ansible and describes how ceph-volume creates the OSDs on the devices. More information is available at http://docs.ceph.com/ceph-ansible/master/osds/scenarios.html


osd_scenario: lvm

You can look over group_vars/all.yml.sample to see the various configuration options available to you.

Copy over site.yml.

$ cp site.yml.sample site.yml

Make sure that the test machines mon1, mon2, mon3, osd1, osd2 and osd3 have been started up using "vagrant up". You can now start the ansible deployment of Ceph on these machines with the command

$ ansible-playbook -i hosts -u root site.yml

This takes several minutes, at the end of which you have a Ceph cluster installed on your test virtual machines.

At this point, you can optionally commit the changes to the git repo so that you can continue to experiment with various settings and then roll back to the working copy if needed.
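
The commit is nothing more than adding the files created above to the ceph-test branch, for example:

$ git add hosts group_vars/all.yml site.yml
$ git commit -m "Working Luminous test configuration"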

Next ssh into mon1 as root and run the "ceph health" and "ceph status" commands
[root@mon1 ~]# ceph health
HEALTH_WARN no active mgr
[root@mon1 ~]# ceph status
  cluster:
    id:     9de96055-aba6-4837-ac8e-12156bb7335c
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum mon1,mon2,mon3
    mgr: no daemons active
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:    
 
The warning message "no active mgr" is seen because, since Luminous, Ceph requires a ceph-mgr daemon running alongside its monitor daemons. The ceph-mgr daemon provides additional monitoring and allows external monitoring tools to access the cluster through the interfaces it exposes.

We will cover the installation of the ceph-mgr daemon in the next post.


Using Vagrant to create KVM based vms to test Ceph

Ceph, the software defined storage solution, has been growing in popularity, especially as a cloud storage platform. To learn more about this product, I purchased the book 'Mastering Ceph: Redefine your storage system' by Nick Fisk.

The first hurdle when using the book was that the examples provided in it rely on using Vagrant with VirtualBox to create test machines which themselves run Ubuntu. I use a Fedora 29 machine and would like to use CentOS on KVM instead for my setup.

To get the test machines setup for my environment, I've had to deviate from the instructions given in the book. This post is based on my notes. These notes should also benefit those who are just looking to use Vagrant with KVM.

First install Vagrant and the libvirt plugin for vagrant.
$ sudo dnf install -y vagrant vagrant-libvirt
I use the directory ~/vagrant/ceph as the location for the Vagrantfile for my test machines.
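
Creating the directory and switching to it:

$ mkdir -p ~/vagrant/ceph
$ cd ~/vagrant/ceph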

My Vagrantfile is as follows


storage_pool_name = "vagrant-pool"

nodes = [
  { :hostname => 'ansible', :ip => '192.168.145.40', :box => 'centos/7' },
  { :hostname => 'mon1', :ip => '192.168.145.41', :box => 'centos/7' },
  { :hostname => 'mon2', :ip => '192.168.145.42', :box => 'centos/7' },
  { :hostname => 'mon3', :ip => '192.168.145.43', :box => 'centos/7' },
  { :hostname => 'osd1', :ip => '192.168.145.51', :box => 'centos/7', :ram =>1024, :osd => 'yes' },
  { :hostname => 'osd2', :ip => '192.168.145.52', :box => 'centos/7', :ram =>1024, :osd => 'yes' },
  { :hostname => 'osd3', :ip => '192.168.145.53', :box => 'centos/7', :ram =>1024, :osd => 'yes' },
]

Vagrant.configure("2") do |config|
  config.ssh.forward_agent = true
  config.ssh.insert_key = false
  config.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key","~/.ssh/id_rsa"]
  config.vm.provision :shell, privileged: false do |s|
    ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
    s.inline = <<-SHELL
      echo #{ssh_pub_key} >> /home/$USER/.ssh/authorized_keys
      sudo mkdir -m 0700 /root/.ssh/
      sudo bash -c "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
    SHELL
  end

  nodes.each do |node|
    config.vm.define node[:hostname] do |nodeconfig|
      nodeconfig.vm.box = node[:box]
      nodeconfig.vm.hostname = node[:hostname]
      nodeconfig.vm.network :private_network, ip: node[:ip]

      memory = node[:ram] ? node[:ram] : 512;
      nodeconfig.vm.provider :libvirt do |lv|
        lv.storage_pool_name = storage_pool_name
        lv.driver = "kvm"
        lv.uri = "qemu:///system"
        lv.memory = memory
        lv.graphics_type = "none"
        if node[:osd] == "yes"
          lv.storage :file, :size => '5G'
        end
      end
    end
  end
end

Going through this step-by-step


    storage_pool_name = "vagrant-pool"

I use a variable storage_pool_name to store the name of the storage pool in libvirt. This pool is created as a 'Filesystem Directory' on my laptop and has been named "vagrant-pool".
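
If you do not already have such a pool, it can be created with virsh before running vagrant. The target directory below is only an example; use any location with enough free space:

$ virsh --connect qemu:///system pool-define-as vagrant-pool dir --target /var/lib/libvirt/images/vagrant-pool
$ virsh --connect qemu:///system pool-build vagrant-pool
$ virsh --connect qemu:///system pool-start vagrant-pool
$ virsh --connect qemu:///system pool-autostart vagrant-pool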


nodes = [
  { :hostname => 'ansible', :ip => '192.168.145.40', :box => 'centos/7' },
  { :hostname => 'mon1', :ip => '192.168.145.41', :box => 'centos/7' },
  { :hostname => 'mon2', :ip => '192.168.145.42', :box => 'centos/7' },
  { :hostname => 'mon3', :ip => '192.168.145.43', :box => 'centos/7' },
  { :hostname => 'osd1', :ip => '192.168.145.51', :box => 'centos/7', :ram =>1024, :osd => 'yes' },
  { :hostname => 'osd2', :ip => '192.168.145.52', :box => 'centos/7', :ram =>1024, :osd => 'yes' },
  { :hostname => 'osd3', :ip => '192.168.145.53', :box => 'centos/7', :ram =>1024, :osd => 'yes' },
]

This is an array of dictionaries containing details of the machines to be set up.


    Vagrant.configure("2") do |config|

    ..

    end

The main configuration block used by Vagrant.

I then initialise common settings for all the test machines. I've commented inline.

      #Use my personal ssh key across the test machines without having to copy my private key to the test machines.
      config.ssh.forward_agent = true

      #Do not regenerate a new key for each test machine.
      config.ssh.insert_key = false

      # The private keys I use. This is used for "vagrant ssh".
      config.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key","~/.ssh/id_rsa"]

      #This block reads the public key and appends it to .ssh/authorized_keys for the user and root accounts.
      config.vm.provision :shell, privileged: false do |s|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
        s.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/$USER/.ssh/authorized_keys
          sudo mkdir -m 0700 /root/.ssh/
          sudo bash -c "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
        SHELL
      end

We then iterate through the nodes array, setting up a test box for each node with the features described in the block.


  #For each node in the nodes array.
  nodes.each do |node|
    #define a new test box with name :hostname.
    config.vm.define node[:hostname] do |nodeconfig|
      #This is set to centos/7 for all the nodes.
      nodeconfig.vm.box = node[:box]
      #This is the name to be set.
      nodeconfig.vm.hostname = node[:hostname]
      #Set an ip address given in the private network.
      nodeconfig.vm.network :private_network, ip: node[:ip]

      #If a value has been provided for ram, we use that or default to 512M
      memory = node[:ram] ? node[:ram] : 512;
      #We configure the test machine in libvirt
      nodeconfig.vm.provider :libvirt do |lv|
        #This is set to "vagrant-pool" we created earlier
        lv.storage_pool_name = storage_pool_name
        #Use KVM.
        lv.driver = "kvm"
        lv.uri = "qemu:///system"
        lv.memory = memory
        lv.graphics_type = "none"

        #Create a new storage of 5G if it is an OSD.
        if node[:osd] == "yes"
          lv.storage :file, :size => '5G'
        end
      end
    end
  end

To create the test machines, we simply call

$ vagrant up

We can also call up individual machines by providing a list of names.

$ vagrant up mon1 osd1

On first run, vagrant will download an image of CentOS 7. Subsequent runs will be faster.
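
You can see which boxes have already been downloaded with

$ vagrant box list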

We can suspend and resume using the commands

$ vagrant suspend
$ vagrant resume

This ensures that the virtual machines are available later when you get back to them.
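
At any point, you can check the state of all the machines with

$ vagrant status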

When we are done and no longer need the machines, we can use the following command to stop and delete them.

$ vagrant destroy

As with "vagrant up", we can provide machine names.
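
For example, to remove just one monitor and one OSD:

$ vagrant destroy mon1 osd1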

To complete the setup, I add the following to my /etc/hosts file.

#Vagrant hosts
192.168.145.40  ansible
192.168.145.41  mon1
192.168.145.42  mon2
192.168.145.43  mon3
192.168.145.51  osd1
192.168.145.52  osd2
192.168.145.53  osd3

Since I will be recreating these test machines several times, I do not want to keep modifying ~/.ssh/known_hosts because of the default StrictHostKeyChecking setting in my main setup.
I add the following lines to ~/.ssh/config.

#Vagrant hosts
Host mon?
        StrictHostKeyChecking no
Host osd?
        StrictHostKeyChecking no
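
Note that with StrictHostKeyChecking set to no, ssh still adds the new host keys to ~/.ssh/known_hosts. If you would rather leave that file untouched as well, an option is to point these hosts at a throwaway known hosts file:

Host mon? osd?
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null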

You can now test by sshing into the test machines
$ ssh root@mon1
