
Ifrit LTD Posts

The most demanded DevOps skills stats, DIY approach.

I was curious the other day: what are the most demanded DevOps skills out there on the market?
Not that I didn’t have a clue, as for someone who has been in the industry for a while it is kinda obvious, but sometimes you are simply curious or just want to get some sort of stats. So after a couple of googling attempts which didn’t give any reasonable results apart from boring marketing ads and stupid suggestions like soft skills (who cares!), I decided that the best approach would be DIY!

So here is what I did, step by step

1) Went to the website many have probably used to find a job and put in some search criteria

Then switched to Classic View and changed the summary to 200 jobs per page, which is the max. Now all I needed was to find all occurrences of each keyword on the resulting page. (This manual part would benefit from Selenium/PhantomJS if run regularly; I will probably add it later.)

2) Obviously I didn’t want to count manually, so I decided: hey, let’s do it with curl and then scan the output for some predefined keywords. Initially the keywords file was too big, then I skipped some stuff as it appeared to be not that popular (1 or 2 occurrences). But in general the file needs to be maintained, as over time some new kids on the block will pop up. So here is the list in the words.txt file:

➜  trendystuff cat words.txt 
aws
azure

terraform

ansible
puppet
chef

docker
kubernetes
mesos

jenkins
ci/cd

elasticsearch
kibana 
logstash
elk

prometheus
openstack
zabbix
vault

linux
scripting
unix
bash
python
groovy
ruby

git
maven
➜  trendystuff

3) Now let’s write a super simple dummy bash script to go through the output and make some stats:

➜  scrips cat jobstats 
#!/bin/bash

# unless -c (use the cached output.txt) is passed, fetch the results page
if [[ $@ != *-c* ]]; then
	curl -s "$1" > output.txt
fi

# with -2 the results are sorted by the second column (occurrence count), descending
if [[ $@ == *-2* ]]; then
	sort_arg=" -k 2 -r"
fi

rm -f result.txt
# count case-insensitive occurrences of each keyword and append "word count" lines
for word in `cat words.txt`; do
	echo "$word `grep -io "$word" output.txt \
	| wc -l`" \
	| xargs >> result.txt
done; cat result.txt | sort -n $sort_arg
➜  scrips 

4) Finally let’s run it. We have to copy the URL from the website, which is generated once you put in your search criteria and press search, and pass it as an argument to the script:
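For example, something like this (the URL below is just a placeholder for whatever your job search generates; -c reuses the cached output.txt and -2 sorts by the count column):

# the URL is a placeholder, use whatever the job site generates for your search
./jobstats "https://www.examplejobsite.com/jobs?q=devops&location=london"

# re-run against the cached output.txt, sorted by occurrence count
./jobstats -c -2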


Docker volume monitoring with Ruby, Sensu and Uchiwa.

In this post I am going to demonstrate how to monitor your Docker volumes with Sensu.
I came across a problem when our Jenkins instances were running out of space and no jobs could be scheduled because of it,
so before it was too late, it would be very useful to have something in place that shows if some container is too greedy, eating all the space on the volume.

In general we are going to look at the following things today:
1. Some Ruby scripting
2. Identifying disk and docker volume usage commands
3. Configuring Sensu server and client for monitoring
4. Making the script run as root
5. Running a simple Uchiwa dashboard

1. Some Ruby scripting
So first we are going to write the script which will check the volumes and report if usage is higher than what we configured.
Following Sensu best practices we will write it in Ruby; we probably could also use bash, but it really gets messy once we add more logic and lines.

#!/usr/bin/env /opt/sensu/embedded/bin/ruby

# usage: <max size in MB> <container name filter>
max_size = ARGV[0].to_i
container_name_filter = ARGV[1]
message = ""

# per-volume disk usage in KB, biggest first
procs = `du -sk /var/lib/docker/volumes/* | sort -rn`
procs.each_line do |process|
  result = process.split(" ")
  vol_usage = result[0].to_i / 1024   # KB -> MB
  vol_name = result[1].gsub "/var/lib/docker/volumes/", ''

  if vol_usage > max_size
    # find the running container (if any) that uses this volume and matches the filter
    cont_name = `docker ps --filter=volume=#{vol_name} --filter=name=#{container_name_filter} --format {{.Names}}`
    if !cont_name.empty?
      message = message + "container: #{cont_name.delete("\n")} volume exceeds max disk usage(#{max_size}MB): #{vol_usage}MB; \n"
    end
  end
end

unless message.empty?
  puts message
  exit 1
end
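To try it by hand, a run might look something like this (the script path is only an example of where you might drop it on a Sensu client; 1024 is the max volume size in MB and jenkins is the container name filter):

# flag volumes over 1024MB that belong to containers whose name matches "jenkins"
sudo /etc/sensu/plugins/check-docker-volume.rb 1024 jenkins

# a non-zero exit code means at least one volume exceeded the limit
echo $?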

2. Identifying disk and docker volume usage commands
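In short, the check above is built on just two commands (the volume name below is illustrative):

# per-volume disk usage in KB, biggest first
sudo du -sk /var/lib/docker/volumes/* | sort -rn

# which running container, if any, uses a given volume
docker ps --filter=volume=<volume_name> --format {{.Names}}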


Automating gmail check in your shell

If you are just like me, and like doing almost everything through shell scripts rather than fancy UI apps, then here is a nice and easy way of checking for new emails in your Gmail account:


# usage: gmail <username> <password>
# queries the Gmail atom feed and prints the unread count if it is non-zero
function gmail(){
	emails=$(curl -s -u $1:$2 "https://mail.google.com/mail/feed/atom"\
	 | egrep -o '<fullcount>[0-9]*' | cut -c 12-)
	if [ "$emails" -gt 0 ] ; then echo "You have ${emails} emails in your $1 account"; fi
}
gmail daenerys.targaryen $(vault read -field=value secret/GPASSWORD)
gmail simply.dany $(vault read -field=value secret/G2PASSWORD)

Simply add this to your .zshrc/.bashrc and you are done; the next time you open a new tab you might get this:


Last login: Mon Feb  5 21:27:39 on ttys006
You have 1 emails in your daenerys.targaryen account
➜  ~ 

Provisioning prepackaged stacks easily on Kubernetes with helm

Today I am going to show how to provision prepackaged k8s stacks with helm.

So what is Helm? Here is literally what its page says: a tool for managing Kubernetes charts, where charts are packages of pre-configured Kubernetes resources. So imagine you want to provision some stack, ELK for example. There are many ways to do it (here is one I did earlier as an example for Jenkins logs, although not on k8s but with just Docker), but instead of reinventing the wheel you can just provision it using Helm.

So let’s just do it instead of talking.

Go to the download page and get the right version from https://github.com/kubernetes/helm

Then untar it, move it to the right dir and run:

➜  tar -xzvf helm-v2.7.2-darwin-amd64.tar.gz

➜   mv darwin-amd64/helm /usr/local/bin


➜   helm
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

	$ helm init
..
...

So let’s do what it asks.
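Assuming kubectl is already pointing at a working cluster, a minimal first run might look like this (the chart name is just an example, search for whatever stack you need):

➜   helm init                 # installs Tiller into the cluster
➜   helm repo update
➜   helm search elasticsearch # look for a prepackaged chart
➜   helm install stable/elasticsearch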


How to set up a Kubernetes cluster on AWS with kops

Today I am going to show how to set up a Kubernetes cluster on AWS using kops (k8s operations).

In order to provision a k8s cluster and deploy a Docker container we will need to install and set up a couple of things, so here is the list:

0. Set up a VM with CentOS Linux as a control center.
1. Install and configure the AWS CLI to manage AWS resources.
2. Install and configure kops to manage provisioning of the k8s cluster and the AWS resources required by k8s.
3. Create a hosted zone in AWS Route53 and set up an ELB to access deployed container services.
4. Install and configure kubectl to manage containers on k8s.

0. Set up a VM with CentOS Linux

Even though I am using macOS, sometimes it is annoying that you can’t run certain commands or some arguments are different, so let’s spin up a Linux VM first. I chose CentOS this time; you can go with Ubuntu if you wish. Here is what the Vagrantfile looks like:

➜  kops cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :


Vagrant.configure(2) do |config|

  config.vm.define "kops" do |m|
    m.vm.box = "centos/7"
    m.vm.hostname = "kops"
  end

end

Let’s start it up and log on:
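Something like this should do it (the machine is called kops in the Vagrantfile above):

➜  kops vagrant up && vagrant ssh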


Provisioning EC2 key pairs with terraform.

In the previous example, we created an EC2 instance which we wouldn’t be able to access; that is because we neither provisioned a new key pair nor used an existing one, which we can see from the state report:

➜  terraform_demo grep key_name terraform.tfstate
                            "key_name": "",
➜  terraform_demo

As you can see key_name is empty.

Now, if you already have a key pair which you use to connect to your instances, which you will find
in the EC2 Dashboard under NETWORK & SECURITY – Key Pairs:

then we can specify it in the aws_instance section so the EC2 instance can be accessed with that key:

resource "aws_instance" "ubuntu_zesty" {
  ami           = "ami-6b7f610f"
  instance_type = "t2.micro"
  key_name      = "myec2key"
}

Let’s create an instance:
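The usual Terraform workflow applies; a minimal sketch (output omitted), after which the grep from earlier should show the key name in the state:

➜  terraform_demo terraform plan
➜  terraform_demo terraform apply
➜  terraform_demo grep key_name terraform.tfstate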


Spinning up an EC2 with Terraform and Vault.

Today we will look at how to set up an EC2 instance with Terraform.

  1. Set up Terraform
  2. Spin up EC2
  3. Externalise secrets and other resources with Terraform variables.
  4. Set up Vault as a secret repo.

1. Set up Terraform

So first things first, a quick installation guide: visit https://www.terraform.io/downloads.html , pick the right version and download:

➜  apps wget https://releases.hashicorp.com/terraform/0.11.1/terraform_0.11.1_darwin_amd64.zip\?_ga\=2.1738614.654909398.1512400028-228831855.1511115744
--2017-12-04 15:16:06--  https://releases.hashicorp.com/terraform/0.11.1/terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744
Resolving releases.hashicorp.com... 151.101.17.183, 2a04:4e42:4::439
Connecting to releases.hashicorp.com|151.101.17.183|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15750266 (15M) [application/zip]
Saving to: ‘terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744’

terraform_0.11.1_darwin_amd64.zip?_ga=2.17386 100%[=================================================================================================>]  15.02M   499KB/s    in 30s

2017-12-04 15:16:36 (517 KB/s) - ‘terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744’ saved [15750266/15750266]

Then unzip:

➜  apps unzip terraform_0.11.1_darwin_amd64.zip\?_ga=2.1738614.654909398.1512400028-228831855.1511115744
Archive:  terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744
  inflating: terraform

Finally make sure the location is added to your PATH:

➜  ~ export PATH=~/apps:$PATH

Check the installation works:

➜  ~ terraform -v
Terraform v0.11.1

2. Spin up EC2

The plan is to spin up the latest Ubuntu.
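One way to find the AMI ID of the latest Ubuntu release for your region is the AWS CLI, assuming it is configured; 099720109477 is Canonical's owner ID and the name filter is just the standard Ubuntu AMI naming pattern:

# newest Ubuntu 17.04 (zesty) AMI in the current region
aws ec2 describe-images \
  --owners 099720109477 \
  --filters 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-zesty-17.04-amd64-server-*' \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text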


How to set up LVM, dynamic partitions in Linux.

In the previous blog I showed how to add new storage in Linux and split the disk into partitions. Today I will touch on a slightly more advanced topic and will show how to create logical volumes with LVM. There are plenty of advantages to LVM:

  • you can create/resize/delete partitions while your system is running, without a reboot
  • merge multiple small disks’ space together, creating a bigger logical disk
  • create distributed I/O across all disks, similar to RAID, but much easier to set up
  • create snapshots of the volume easily for disk backups, etc.

Last time we used Ubuntu; this time we will use CentOS, as when it comes to storage management the commands and tools that we will use are pretty much the same:

[vagrant@centos ~]$ rpm -qa | grep lvm
lvm2-2.02.171-8.el7.x86_64
lvm2-libs-2.02.171-8.el7.x86_64
[vagrant@centos ~]$
ubuntu@zesty:~$ dpkg --list | grep lvm
ii  liblvm2app2.2:amd64                        2.02.167-1ubuntu5                         amd64        LVM2 application library
ii  liblvm2cmd2.02:amd64                       2.02.167-1ubuntu5                         amd64        LVM2 command library
ii  lvm2                                       2.02.167-1ubuntu5                         amd64        Linux Logical Volume Manager
ubuntu@zesty:~$

Let’s create a VM. Make sure the directory you are running the command in is empty, as vagrant uses rsync to synchronise the contents of the current directory with the VM, so if you have GBs of files it might take a while for no reason:

vagrant init centos/7 && \
 vagrant up && \
 vagrant ssh 

If you didn’t have the centos box previously, it will download about 385MB:

➜  ~ du  -sh ~/.vagrant.d/boxes/*
385M	/Users/kayanazimov/.vagrant.d/boxes/centos-VAGRANTSLASH-7
425M	/Users/kayanazimov/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-trusty64
269M	/Users/kayanazimov/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-xenial64
290M	/Users/kayanazimov/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-zesty64

Once inside, let’s check the existing storage devices:

[vagrant@centos ~]$ lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                       8:0    0   40G  0 disk
├─sda1                    8:1    0    1M  0 part
├─sda2                    8:2    0    1G  0 part /boot
└─sda3                    8:3    0   39G  0 part
  ├─VolGroup00-LogVol00 253:0    0 37.5G  0 lvm  /
  └─VolGroup00-LogVol01 253:1    0  1.5G  0 lvm  [SWAP]

Now let’s exit, halt the VM, add 2 new disks of 1GB each, and then start the VM and log on again.
If you don’t know how to add new disks to the VM, you can read the first part of the previous blog about storage.
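If you prefer the CLI over the VirtualBox UI, the disks can also be created and attached with VBoxManage, roughly like this (the VM name and the storage controller name depend on your setup, check them with VBoxManage showvminfo):

vagrant halt

# create two 1GB disks and attach them to the VM (controller is often "IDE" or "SATA")
VBoxManage createmedium disk --filename disk1.vdi --size 1024
VBoxManage createmedium disk --filename disk2.vdi --size 1024
VBoxManage storageattach <vm_name> --storagectl "SATA" --port 1 --device 0 --type hdd --medium disk1.vdi
VBoxManage storageattach <vm_name> --storagectl "SATA" --port 2 --device 0 --type hdd --medium disk2.vdi

vagrant up && vagrant ssh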

Now let’s check disks again:

[vagrant@centos ~]$ lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                       8:0    0   40G  0 disk
├─sda1                    8:1    0    1M  0 part
├─sda2                    8:2    0    1G  0 part /boot
└─sda3                    8:3    0   39G  0 part
  ├─VolGroup00-LogVol00 253:0    0 37.5G  0 lvm  /
  └─VolGroup00-LogVol01 253:1    0  1.5G  0 lvm  [SWAP]
sdb                       8:16   0    1G  0 disk
sdc                       8:32   0    1G  0 disk

As you can see, sdb and sdc have been added. Let’s ask LVM which devices are available to it:

[vagrant@centos ~]$ sudo lvmscan
sudo: lvmscan: command not found
[vagrant@centos ~]$ sudo lvmdiscan
sudo: lvmdiscan: command not found
[vagrant@centos ~]$ sudo lvmdiskscan
  /dev/VolGroup00/LogVol00 [     <37.47 GiB]
  /dev/VolGroup00/LogVol01 [       1.50 GiB]
  /dev/sda2                [       1.00 GiB]
  /dev/sda3                [     <39.00 GiB] LVM physical volume
  /dev/sdb                 [       1.00 GiB]
  /dev/sdc                 [       1.00 GiB]
  2 disks
  3 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

First we need to initialise physical volumes for use by LVM:
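That is done with pvcreate; a quick sketch of the next step:

# initialise both new disks as LVM physical volumes, then list them
sudo pvcreate /dev/sdb /dev/sdc
sudo pvs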


How to add a new storage volume to Linux VM locally and on AWS EC2.

Sooner or later we all run out of space. Today I am going to demo how to add new storage to a Linux VM. First we will look at how to do this on a local VM with VirtualBox and Vagrant, then in AWS.

1. Adding a new volume locally.
2. Splitting the disk into partitions.
3. Spinning up an AWS EC2 instance and adding a new volume manually.
4. Attaching a new volume with the AWS CLI.

So, assuming you have Vagrant and VirtualBox installed, let’s spin up a new VM:

vagrant init ubuntu/trusty64 && vagrant up && vagrant ssh

You can pick a newer version of Ubuntu of course, Xenial or Zesty, or even any other Linux distro; I have the ubuntu/trusty64 vagrant box already downloaded, so I will be using that one.

First let’s check what we have already got there with the ‘list block devices’ command:

vagrant@sensuclient:~$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  40G  0 disk
`-sda1   8:1    0  40G  0 part /
vagrant@sensuclient:~$

Now let’s exit the VM and stop it:


vagrant halt
==> sensuclient: Attempting graceful shutdown of VM...

Then we need to go to VirtualBox and add a new disk as shown below:

Once that is done, we can start the VM and check the devices again:

vagrant up  && vagrant ssh  

vagrant@sensuclient:~$ sudo lsblk -f
NAME   FSTYPE LABEL           MOUNTPOINT
sda
`-sda1 ext4   cloudimg-rootfs /
sdb

As you can see, a new disk, ‘sdb’, has been added to the list.

Next we need to create a filesystem:
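A minimal sketch of that step, putting ext4 straight on the new disk (the mount point is just an example):

# create an ext4 filesystem on the new disk and mount it
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /data
sudo mount /dev/sdb /data
df -h /data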
