Ansible Mitogen

Today I discovered a new Ansible strategy plugin that increases Ansible performance a lot: Ansible Mitogen.

Mitogen is a Python library for writing distributed self-replicating programs.

You can read a great article about it here:

After some benchmarking, I can confirm it: Mitogen is very fast! It cut my deployment time in half.

For example, here are the numbers for a small playbook that deploys and configures 3 Kafka nodes.

Before Mitogen:

PLAY RECAP **********************************************************************************************************************************************************************************************
brok01 : ok=18 changed=0 unreachable=0 failed=0
brok02 : ok=18 changed=0 unreachable=0 failed=0
brok03 : ok=18 changed=0 unreachable=0 failed=0

Monday 28 May 2018 15:05:21 +0200 (0:00:02.680) 0:02:13.012 ************
kafka : configuration "projet" ------------------------------------------------------------------------------------------------------------------------------------------------------------------ 11.85s
kafka : import ca erdf dans keystore ------------------------------------------------------------------------------------------------------------------------------------------------------------- 8.10s
kafka : configuration kafka ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.51s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.04s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.03s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.68s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.67s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.22s
kafka : kafka directories ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2.19s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.15s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.14s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.13s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.08s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.99s
kafka : keystore jks ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.52s
java : installation de java ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.00s
kafka : kafka service rhel7 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.85s
kafka_metrics_reporter : copie du jar kafka metrics reporter ------------------------------------------------------------------------------------------------------------------------------------- 0.79s
kafka : kafka user ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.69s
kafka : kafka exists ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.66s

real 2m16,558s
user 0m37,302s
sys 0m4,845s

With the mitogen_linear strategy:

PLAY RECAP **********************************************************************************************************************************************************************************************
brok01 : ok=18 changed=0 unreachable=0 failed=0
brok02 : ok=18 changed=0 unreachable=0 failed=0
brok03 : ok=18 changed=0 unreachable=0 failed=0

Monday 28 May 2018 15:07:01 +0200 (0:00:01.775) 0:01:02.035 ************
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.01s
kafka : configuration "projet" ------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.57s
kafka : import ca erdf dans keystore ------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.90s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.36s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.93s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.78s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.48s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.46s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.45s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.45s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.40s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.38s
kafka : configuration kafka ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.06s
java : installation de java ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.99s
kafka : keystore jks ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.70s
include_vars ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.58s
kafka : kafka directories ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.24s
kafka : kafka service rhel7 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.24s
kafka_metrics_reporter : copie du jar kafka metrics reporter ------------------------------------------------------------------------------------------------------------------------------------- 0.21s
Attend que le broker en question soit dans le cluster avant de restart un autre ------------------------------------------------------------------------------------------------------------------ 0.20s

real 1m5,575s
user 0m27,440s
sys 0m2,207s

Every task runs roughly twice as fast.


Installation is super easy. Clone the Mitogen repository (the upstream repo at the time of writing is dw/mitogen on GitHub):

git clone https://github.com/dw/mitogen.git ~/git/seuf/mitogen

Then update ansible.cfg:

[defaults]
strategy_plugins = ~/git/seuf/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen_linear

That's it!

You can now run your playbooks faster!

Tip: since Mitogen bootstraps the remote side with a single python -c command, I've updated my sudoers configuration to allow that command:

deploy ALL=(ALL) NOPASSWD: /usr/bin/python -c*

Zabbix – Send alerts to Slack

Today I've configured my Zabbix server to automatically send alerts to the #alerting channel of my Slack team.

To do this, you first have to create a new Incoming WebHook in your Slack team.

  • Click on your team name and select Apps & Integrations.
  • Search for Incoming WebHooks.
  • Click « Add a configuration ».
  • Configure the webhook with the default destination channel, etc.
  • Copy the generated URL to your clipboard, you will need it later. The webhook URL looks like https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX

Next, go to your Zabbix server.

Edit the /etc/zabbix/zabbix_server.conf file and set AlertScriptsPath to a correct path, for example:
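
AlertScriptsPath=/usr/share/zabbix/alertscripts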


Then go to the /usr/share/zabbix/alertscripts directory and create a tiny shell script:

#!/bin/bash
# $1 = Slack webhook URL, $2 = alert subject (the two script parameters configured below)
webhook_url=$1
message=$2

curl -k -X POST -d "payload={\"username\":\"zabbix\", \"text\":\"$message\"}" $webhook_url

Then go to the Zabbix UI and add a new media type of type « script », with the name of the script you created before (Administration > Media types > Create media type).

Add 2 parameters for your script:

  1. the first one is the Slack webhook URL you generated in step 1,
  2. the second one is the alert subject.
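
With a recent Zabbix (3.x, an assumption about your version), these land in the media type's « Script parameters » fields, typically via the standard alert macros:

{ALERT.SENDTO}
{ALERT.SUBJECT}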

Then, configure a Zabbix action (Configuration > Actions) to send a message to the Slack media for the admin user.

Set a meaningful subject so the message carries all the information you need, for example (built from standard Zabbix macros):
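
{TRIGGER.STATUS}: {TRIGGER.NAME} on {HOST.NAME}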


Finally, edit the Zabbix admin user and add a new media of type Slack.

That's it!

Now all your Zabbix alerts are sent to your Slack #alerting channel \o/

PS: you can configure multiple Slack media types to send alerts to multiple webhooks, depending on the host group.



Wetty: a terminal in your browser

Quite a while ago I wrote an article about GateOne, but it was rather heavy and hard to install.


And then recently I stumbled upon Wetty, a web app written in NodeJS that gives you a terminal in your browser!

Ideal for reaching your own server from work when all the ports are blocked 😉

So I built a small Docker image; here is the Dockerfile:

FROM alpine:edge

RUN apk --update add git nodejs python build-base openssh-client

WORKDIR /usr/share/webapps

# upstream Wetty repo at the time of writing
RUN git clone https://github.com/krishnasrinivas/wetty.git && \
    cd wetty && \
    npm install

RUN apk del build-base

WORKDIR /usr/share/webapps/wetty

RUN addgroup seuf
RUN adduser -G seuf -h /home/seuf -s /bin/ash -D seuf

RUN echo "seuf:PassWordSecretAChanger" | chpasswd

RUN mkdir -p /home/seuf/.ssh && chown -R seuf:seuf /home/seuf/.ssh

CMD node app.js -p 3000
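
You can build it with the same image name used in the compose file below:

docker build -t seuf/wetty .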

Then I add a small wetty section to my docker-compose file:

  wetty:
    image: seuf/wetty
    hostname: wetty
    container_name: ssh
    expose:
      - "3000"
    restart: always

And presto: thanks to Traefik, I have a web SSH client over HTTPS!

All I have to do is connect to it to land in an Alpine Linux container, from which I can then hop over SSH to any other SSH server.

Traefik: an HTTP reverse proxy / load balancer for Docker


During my Docker training by @emilevauge, he quickly showed us his reverse proxy and load balancer for Docker, written in Go: Træfɪk.

The configuration is super simple: just give it access to the Docker API (socket / port) or to any other backend (Kubernetes, Etcd, Consul, etc.) and Traefik figures everything out on its own, automatically creating the routes to your containers!

So I used it for my own needs. The big advantage is that it also handles HTTPS, and even certificate generation with Let's Encrypt!

Since I mainly use docker-compose to manage my services, I just added an entry to my docker-compose.yml:

version: '2'

services:
  traefik:
    image: traefik
    hostname: traefik
    container_name: traefik
    command: >
      --web --docker --logLevel=DEBUG
      --acme --acme.storagefile=/etc/traefik/acme.json
      --acme.entrypoint=https --acme.caserver='' --acme.ondemand=true
      --entryPoints='Name:https Address::443 TLS:/etc/traefik/ssl/,/etc/traefik/ssl/'
      --entryPoints='Name:http Address::80 Redirect.EntryPoint:https'
      --defaultentrypoints=http,https
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "./ssl:/etc/traefik/ssl"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/dev/null:/traefik.toml"
      - "/data/traefik/acme.json:/etc/traefik/acme.json"

And voila!

One docker-compose up -d later, Traefik detects my different containers and creates the right routes 🙂
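
If you want finer control, you can also drive the routing per container with Docker labels, for example (Traefik 1.x label syntax, with a hypothetical hostname):

  whoami:
    image: emilevauge/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:whoami.example.com"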

After that, all I had to do was create DNS aliases pointing to the right subdomains.


Alpine Linux : A Tiny Tiny Docker Image

Do you know Alpine Linux? It's a tiny Linux distribution based on busybox.

Everyone working with Docker knows that it eats a lot of disk space. That's why here at ERDF Lyon we use Docker-Alpine to build our own services.

It is very cool to have very small containers: the base image is only 5 MB, which is amazing compared to a Debian or CentOS base image!

Here is a list of some images built by our team:

[root@XXXXX ~]# docker images
REPOSITORY                   TAG      IMAGE ID       CREATED          VIRTUAL SIZE
library/alpine-java          latest   b1ac8a415cc2   8 minutes ago    167 MB
library/alpine-nginx-php     latest   04dd1dd5f3a8   36 minutes ago   73.07 MB
library/alpine-adaje-nginx   latest   babdce2f7acb   2 hours ago      22.34 MB
library/alpine-influxdb      latest   8c1c8602ab3d   5 hours ago      43.71 MB
library/alpine-grafana       latest   87e008359f09   7 hours ago      105 MB
alpine-libc                  latest   768e1a26255d   8 hours ago      14.01 MB
alpine                       latest   809d1eb48c44   9 weeks ago      5.244 MB
library/centos6              latest   5af3557457ba   5 months ago     396.1 MB

As you can see, the base alpine image is only 5.2 MB, an Alpine image with nginx is only 22 MB, and even an image with the full Java runtime pre-installed is only 167 MB.

Compare that to a centos6 base (with nothing on it) at nearly 400 MB: there's no contest!


We already have our own Docker registry, so we just have to build and push a lot of new images based on this tiny Docker image.
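
For example, publishing one of them looks like this (registry.example.com is a placeholder for our internal registry host):

docker build -t registry.example.com/library/alpine-java .
docker push registry.example.com/library/alpine-java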

Here is what's in the pipeline:

  • alpine-mysql
  • alpine-postgresql
  • alpine-apache (already built, from a CentOS base)
  • alpine-zabbix-server
  • alpine-zabbix-frontend
  • alpine-jenkins (already built, from a CentOS base)
  • alpine-python (already built, from a CentOS base)
  • alpine-ansible (already built, from a CentOS base)

How to Use Ansible with Jenkins and Gitlab



In this article I will try to explain how we use Ansible through Jenkins here at OI-ERDF Lyon.

Ansible is an open source deployment and automation tool.

It is based on SSH, so no agents are required, and it works on Linux / AIX / Solaris and even Windows!

It uses playbooks, written in YAML, to describe the list of tasks that have to be executed on remote servers.

Each task can be associated with a specific host or host group defined in an inventory.

For example, we have one inventory directory per environment (dev, prod, …), each containing a hosts file, something like this:
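
dev/
    hosts.txt
prod/
    hosts.txt
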
where a hosts file looks something like this (group names match the playbook below):
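
[apache]
web01
web02

[mysql]
db01

[postgresql]
db02

[database:children]
mysql
postgresql
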
then a playbook defines a list of tasks:

- hosts: all
  tasks:
    - name: "display host type"
      shell: uname
      register: host_type

    - debug: var=host_type.stdout

- hosts: apache
  roles:
    - role: apache

- hosts: mysql
  roles:
    - role: mysql_server

- hosts: postgresql
  roles:
    - role: postgresql_server

- hosts: database
  roles:
    - role: mysql_client



Here we can see that all hosts will execute the uname command and display the result; then each inventory group gets its corresponding role.

Simple, clear, powerful.

To launch a playbook, just run:

ansible-playbook -i dev/hosts.txt playbook.yml



First, we need a version control system to store all our Ansible code.

We chose GitLab because it has a lot of good features:

  • User friendly
  • Integrated wiki for every project
  • Issue tracker
  • LDAP auth integration
  • Open source
  • Hooks
  • and more…

Once it's installed with the community edition packages (rpm or deb) on a dedicated server, you just have to create a new group « ANSIBLE » in which the whole galaxy will be stored.

Ansible Galaxy



The best way to consolidate and re-use Ansible roles, modules and plugins is an Ansible galaxy.

Our servers are in a DMZ, so we can't reach the public Ansible Galaxy website; we had to create our own galaxy with our own Ansible roles and modules.

In the ANSIBLE group previously created in GitLab, just create as many roles, modules and plugins as you need for your Ansible deployments.

To find our modules easily, we defined some dev rules:

  • all ansible role names start with « role_ »
  • all ansible module names start with « module_ »
  • all ansible plugin names start with « plugin_ »

A role has the following structure (the standard layout generated by ansible-galaxy init):
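
role_example/
├── defaults/
│   └── main.yml
├── files/
├── handlers/
│   └── main.yml
├── meta/
│   └── main.yml
├── tasks/
│   └── main.yml
├── templates/
├── vars/
│   └── main.yml
└── README.md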


Where:

  • defaults is the directory where all the default variables for the role are defined. Each one can be overridden per project by a host var, group var or extra var.
  • files is the directory where the role stores static files (rarely used).
  • handlers is the directory where the role's handler tasks are defined (restart service, etc.).
  • meta stores the Ansible Galaxy metadata (author / description / dependencies / tags / etc.).
  • tasks is the directory where the role's main tasks are defined (create filesystems, apt-get / yum installs, deploy config files, restart services, etc.).
  • templates is the directory where we store the role's Jinja2 template files (with the {{ variables }} replaced).
  • vars is the directory for role-specific variables. These variables cannot be overridden!
  • README.md is the Markdown readme file describing the role and the default vars that can be overridden.

Then publish your role to a new project, role_example, in your ANSIBLE group in GitLab.

Ansible Projects

Now we can create projects that use the galaxy.

To use the Ansible galaxy, just create a requirements.yml file listing what you need (the gitlab.example.com URLs below are placeholders for your internal GitLab server):

- src: git+ssh://git@gitlab.example.com/ansible/role_apache.git
  path: roles
  name: role_apache

- src: git+ssh://git@gitlab.example.com/ansible/role_mysql.git
  path: roles
  name: role_mysql

- src: git+ssh://git@gitlab.example.com/ansible/role_postgresql.git
  path: roles
  name: role_postgresql

- src: git+ssh://git@gitlab.example.com/ansible/role_mysql_client.git
  path: roles
  name: role_mysql_client

- src: git+ssh://git@gitlab.example.com/ansible/plugin_tail.git
  path: filter_plugins
  name: tail

- src: git+ssh://git@gitlab.example.com/ansible/module_assert.git
  path: library
  name: assert



And install everything into your project by running:

ansible-galaxy install -r requirements.yml

This will fetch all roles, plugins and modules into the correct directories.

Now you can use these roles in your project and override their default variables from your inventory.

Of course, the project also uses GitLab to store its playbooks, inventory and requirements.



Now we have:

  • GitLab to store the Ansible galaxy and each project's playbooks and inventories
  • an Ansible galaxy with our own roles that respect our dev rules and the enterprise standards
  • projects that use the galaxy to deploy their apps

All we need now is a web UI to simplify the deployments.

Jenkins is a unit test / task launcher with a lot of plugins: Git, MultiSCM, Rebuild, etc.

We have created jobs that:

  • check out the project's playbooks, inventory and requirements file (with the Jenkins MultiSCM plugin),
  • install the requirements in the job workspace (ansible-galaxy install -r requirements.yml),
  • launch the ansible playbook (ansible-playbook -i ${env}/hosts.txt playbook.yml), as sketched below.
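
A minimal « Execute shell » build step for such a job could look like this (a sketch, using the job parameters described below):

ansible-galaxy install -r requirements.yml --force
ansible-playbook -i ${env}/hosts.txt playbook.yml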

The job has some parameters:

  • git branch: the name of the git branch to use (master by default)
  • env: the name of the environment to deploy (i.e. the inventory directory)
  • extra vars: if you want to pass additional extra vars
  • ansible options: to let users pass more options (--limit=apache for example)



The job then uses the Git or MultiSCM plugin to fetch the playbooks from GitLab.



Don't forget to create a deploy key in GitLab with the public key of the Jenkins user (www-data for example).

And finally, it launches the playbook from the job workspace.


Now every user with Jenkins permissions can run deployments on any environment in just a few clicks.


An Ansible galaxy with GitLab and Jenkins brings a lot of benefits:

  • Stability of production deployments (no more forgotten tasks)
  • Shorter deployments: we can deploy a full application stack from scratch in less than an hour with Ansible, versus a week before
  • Re-use: each project can use roles from the galaxy and contribute back!
  • Documentation: GitLab integrates a wiki, so every role in the galaxy is documented (we also have example projects and 10-step tutorials)
  • Collaboration through the GitLab issue system
  • Simplicity: just click the BIG DEPLOY button!
And you: which automation system are you using? Chef? Puppet?

[QEMU] – Virtual Linux on Windows without admin rights

Like many people, I work on-site at a client on a crappy machine that can only boot Windows and, cherry on top, we have no rights at all… In this cruel world you dream of only one thing: being able to use Linux!


Logstash HTTP input + GitLab

Since the latest version of Logstash (1.5.2), there is a new input type: « http ». It lets any other application send JSON data directly into Logstash.

Then, thanks to the many filter and output plugins, you can transform and export that data however you want.


At work we use GitLab to manage our Git projects, so I configured a webhook on my projects to send commit / issue / merge stats directly into ELK.

In Logstash, you just have to create a new input:

input {
  http {
    port => 8084
  }
}
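
To push the events into Elasticsearch, a minimal output section could look like this (host and index name are assumptions, and parameter names vary between Logstash versions):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "gitlab-%{+YYYY.MM.dd}"
  }
}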

Then, in GitLab, configure the webhook to send its data directly to Logstash.



Finally, after each commit, an HTTP message is sent to Logstash, which inserts it into the Elasticsearch database.

All that's left is to set up a little Kibana dashboard to get stats on the number of commits per branch / user / etc.

And here is the result:


Tiny Tiny RSS: an RSS feed aggregator

Today, a small post about Tiny Tiny RSS, better known as tt-rss.


Actually, since Google Reader disappeared, I never really knew which RSS feed reader to use. I tried several and ended up mostly using Feedly.

But I was tired of having to authenticate with my Google+ account and of depending on a service that could turn paid overnight, so I started looking for an open source RSS aggregator that I could install on my own server.

That's when I stumbled upon tt-rss!


The interface is pretty cool, and there are even themes for those who want to stay in Google Reader or Feedly mode. Installation is super simple: extract the archive into the Apache directory, give the right permissions to a few directories, then open the installation page to configure database access.

And the best part: there is even an Android app (available on the Play Store or on F-Droid) that fetches your RSS feeds and stores them locally for offline reading during your commute 🙂

In short, I recommend it to anyone looking for a customizable, open source, self-hostable RSS reader!



Provisioning Zabbix in Perl

Just a handy little script I wrote a while back to provision a Zabbix server using the Zabbix API from Perl:

It's really easy to use:

perl [long options...]
        --host               host
        --group              specify the host group
        --template           link a template to this host
        --interface          add an interface to the host (format:
                             where type can be 1 for AGENT, 2 for SNMP, 3 for JMX or 4 for
        --csv                CSV file to load. The format should be like
                             "host:groups (comma separated):templates
        --zabbix-config      where to find the Zabbix::API configuration file
        --l4p-config         Log::Log4perl configuration file

You need a Zabbix config file with the credentials to connect to the Zabbix server, provided via the --zabbix-config option:

user: apiuser
password: apipass

In short, it can be useful for people who work with Zabbix and don't want the headache of adding all their devices one by one.