Genymobile Screen Copy

Just a quick post to talk about my discovery of the day: scrcpy!

https://github.com/Genymobile/scrcpy

A small tool that lets you mirror an Android screen on your PC over USB (adb).

To use it, simply download the latest scrcpy release, enable USB debugging in your phone’s developer options and launch scrcpy!
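In practice the whole thing boils down to two commands (just a sketch; the --max-size option is optional and only caps the mirrored resolution):

# check that adb sees the phone once USB debugging is enabled
adb devices

# start mirroring; --max-size limits the streamed resolution (optional)
scrcpy --max-size 1024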

It’s magic!

Thanks Genymobile, this can come in really handy 🙂

 

Ansible 2.5 Grafana modules

 

At work, we needed to automate the Grafana installation and provisioning (datasources, plugins and dashboards).

So I created three new Ansible modules that will be released with the upcoming Ansible 2.5.

The first module is grafana_datasource. If you have to create a lot of different datasources for your Grafana instance, across multiple organisations, I suggest you use it.

With a single Ansible task, you can create all your datasources. For example, here is how to create several datasources at once:

- name: create elasticsearch datasource
  grafana_datasource:
    name: "{{ item.name }}"
    grafana_url: "{{ grafana_url }}"
    grafana_user: "{{ grafana_user }}"
    grafana_password: "{{ grafana_password }}"
    ds_type: "{{ item.ds_type }}"
    url: "{{ item.url }}"
    database: "{{ item.database }}"
    basic_auth_user: "{{ item.basic_auth_user | default('') }}"
    basic_auth_password: "{{ item.basic_auth_password | default('') }}"
    es_version: "{{ item.es_version | default(5) }}"
    time_field: "{{ item.time_field | default('@timestamp') }}"
    state: present
  with_items: "{{ grafana_datasources }}"

where the grafana_datasources variable looks like this:

grafana_datasources:
  - name: "es_index1"
    ds_type: "elasticsearch"
    url: "http://elasticsearch.aperogeek.fr:9200"
    database: "index_[YYYY.mm.dd]"
    basic_auth_user: "grafana"
    basic_auth_password: "{{ grafana_es_password }}"
    es_version: 56
  - name: "influxdb"
    ds_type: "influxdb"
    url: "http://elasticsearch.aperogeek.fr:9200"
    database: "telegraf"

 

The second module is grafana_plugin. With this one, you can automate the installation and upgrade of all your Grafana plugins. For example:

- name: install - update Grafana piechart panel plugin
  grafana_plugin:
    name: grafana-piechart-panel
    version: latest

And the last one is grafana_dashboard. This one is really handy because it allows you to import or back up all your existing dashboards.

- name: import grafana dashboard foo
  grafana_dashboard:
    grafana_url: http://grafana.company.com
    grafana_api_key: XXXXXXXXXXXX
    state: present
    message: "updated by ansible"
    overwrite: true
    path: /path/to/dashboards/foo.json

- name: export dashboard
  grafana_dashboard:
    grafana_url: http://grafana.company.com
    grafana_api_key: XXXXXXXXXXXX
    state: export
    slug: foo
    path: /path/to/dashboards/foo.json

Hope these new Ansible modules will be useful to someone 🙂

If you have suggestions for missing features in these modules, you can comment on this article or open a pull request in the Ansible GitHub repo.

HAProxy: client certificate validation

Today at the office, the security team asked me to secure our reverse proxy by adding client certificate validation, so that only clients presenting a certificate with the expected host CN are trusted.

So here is my method to verify the client certificate CN against the expected one:

frontend frontend_foo
  mode tcp
  bind *:443 ssl crt /etc/ssl/certs/haproxy_reverse.proxy.company.com.pem ca-file /etc/ssl/certs/autorite_chain_haproxy.pem crl-file /etc/ssl/certs/crl-bundle_haproxy.pem verify required ca-ignore-err all crt-ignore-err all
  default_backend backend_foo

backend backend_foo
  mode tcp
  option httpchk

  acl cert_from_trusted_client ssl_c_s_dn(CN) -m reg ^trusted\.client\.(site1|site2)\.company\.(com|fr)$
  tcp-response inspect-delay 2s
  tcp-response content reject unless cert_from_trusted_client

  server srv_load01 backend.company.com:443 check ssl crt /etc/ssl/certs/haproxy_reverse.proxy.company.com.pem ca-file /etc/ssl/certs/autorite_chain_haproxy.pem verify required

With this configuration, only hosts presenting a certificate with a CN like "trusted.client.site1.company.fr", "trusted.client.site2.company.fr", "trusted.client.site1.company.com" or "trusted.client.site2.company.com" can connect to the reverse proxy.
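If you want to check which CN a client certificate actually carries before deploying the ACL, openssl can print the subject (the file name below is just an example):

# print the subject (and so the CN) presented by a client certificate
openssl x509 -in client_cert.pem -noout -subject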

Hope this will help someone 😛

Marcus Bière

Aperogeek is about geek news, but also about beer news!

So today I’m going to introduce a brewery I particularly like: Marcus Bière, the beer of the Drôme!

Located in the small village of Saou, in the heart of the Drôme, Marcus Bière is a craft brewery that makes really good beer!


Here are a few photos to make you want to go there 🙂

As you can see, the area is very pleasant. Plus, it’s right next to my father-in-law’s place, so I go there regularly 😛

Meetup Grafana Lyon


I created the Meetup Grafana Lyon group, and I plan to organize a first Grafana meetup soon!

The date is set: Tuesday, September 12th, in a pub I know well: L’antidote ^_^

 

As for the talks, there will be:

  • Automating Grafana datasource management with an Ansible module (me)
  • Monitoring Docker with Telegraf + InfluxDB + Grafana (me)
  • Monitoring a Cassandra cluster with graphite_exporter / node_exporter, Prometheus and Grafana (Christophe Schmitz)

Don’t hesitate to sign up, there are still spots left!

 

EDIT: the meetup went well. Here are the links to the slides I presented:

Kapacitor: Alerting for your time series

I already talked about monitoring Docker with Telegraf, InfluxDB and Grafana. It’s nice, we have pretty dashboards, but it doesn’t do alerting! Unless you sit in front of your screens all day, you won’t be warned when a container crashes or when a friend connects to your Teamspeak channel!

Fortunately, in the InfluxData TICK Stack, there is the "K" of Kapacitor.

Kapacitor is an open-source framework for processing, monitoring, and alerting on time series data.

To do that, Kapacitor uses TICKscripts: small scripts written in a custom DSL language that is very simple to understand and deploy.

For example, if you want to send a warning-level alert to your Slack channel when the CPU usage of one of your servers is greater than 70%, and a critical-level alert when it goes above 85%:

stream
    |from()
        .measurement('cpu_usage_idle')
        .groupBy('host')
    |window()
        .period(1m)
        .every(1m)
    |mean('value')
    |eval(lambda: 100.0 - "mean")
        .as('used')
    |alert()
        .message('{{ .Level}}: {{ .Name }}/{{ index .Tags "host" }} has high cpu usage: {{ index .Fields "used" }}')
        .warn(lambda: "used" > 70.0)
        .crit(lambda: "used" > 85.0)

        // Slack
        .slack()
        .channel('#alerts')

With this kind of DSL, we can create any rule we want: query InfluxDB with InfluxQL, aggregate metrics by host or by any other tag, filter on any criteria (between 8 a.m. and 7 p.m. from Monday to Friday, for example), and so on.
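Once the TICKscript is written, it has to be registered and enabled as a Kapacitor task. Something like this should do it with the kapacitor CLI (the task name, file name and the telegraf.autogen database/retention policy are assumptions matching a default Telegraf setup):

# register the script as a stream task against the telegraf database
kapacitor define cpu_alert -type stream -tick cpu_alert.tick -dbrp telegraf.autogen

# enable it and check its status
kapacitor enable cpu_alert
kapacitor show cpu_alert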

When the alert rule is ready, Kapacitor can push the alert to just about any alerting system, for example:

  • send an email,
  • post to Slack or Mattermost,
  • write to a log file,
  • send a PagerDuty message,
  • scale a Docker Swarm / Kubernetes stack up or down,
  • or simply execute a custom bash script.

Here is an example of an alert generated by Kapacitor in Slack:

kapacitor alerting slack

And for those who don’t want to get their hands dirty, there is Chronograf (the "C" in the TICK Stack).

chronograf dashboard

Chronograf is an open-source web application written in Go and React.js, designed to visualize your monitoring data from InfluxDB.

It is still far from Grafana in terms of features (and community), but it’s getting better every day. It allows you to explore your data very efficiently:

chronograf data explorer

My favorite feature is the web-based interface to easily create alerting and automation rules for Kapacitor.

chronograf kapacitor rule

Of course, the web interface limits you in terms of the Kapacitor DSL (an expert mode is on the way), but it lets you create simple rules in three clicks: a threshold, a significant delta over a time period, or even an alert when data stops coming in (deadman)!

So, these tools are still pretty young, but very interesting: I’ll keep an eye on them!


Zabbix – Send alerts to Slack

Today I configured my Zabbix server to automatically send alerts to the #alerting channel of my Slack team.

To do this, you have to create a new Incoming WebHook in your Slack team:

  • Click on your team and select Apps & Integrations.
  • Search for Incoming WebHook.
  • Click "Add a configuration".
  • Configure the webhook: default destination channel, etc.
  • Copy the generated URL to your clipboard, you will need it later. The webhook URL looks something like https://hooks.slack.com/services/ABCDEFGHIJKJMNOPQRSTUVWXYZ

Next, go to your Zabbix server.

Edit the /etc/zabbix/zabbix_server.conf file to specify a correct path for AlertScriptsPath, for example:

AlertScriptsPath=/usr/share/zabbix/alertscripts

Then go to the /usr/share/zabbix/alertscripts directory and create a tiny shell script, post_to_slack.sh:

#!/bin/sh
# Usage: post_to_slack.sh <webhook_url> <message>

webhook_url="$1"
message="$2"

# post the alert text to the Slack incoming webhook
curl -k -X POST -d "payload={\"username\":\"zabbix\", \"text\":\"$message\"}" "$webhook_url"
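Don’t forget to make the script executable, and you can give it a quick manual test (the webhook URL below is a placeholder, use the one you generated):

chmod +x /usr/share/zabbix/alertscripts/post_to_slack.sh

# should post "test from zabbix" to the webhook's default channel
/usr/share/zabbix/alertscripts/post_to_slack.sh "https://hooks.slack.com/services/XXXXXXXX" "test from zabbix"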

Then go to the Zabbix web UI and add a new media type of type Script, using the name of the script you created above (Administration > Media Types > Create media type).

Add two parameters for your script:

  1. The first one is the Slack webhook URL you generated earlier,
  2. the second one is the alert subject.

Then, configure a Zabbix action (Configuration > Actions) to send a message through the Slack media type to the admin user.

Set a proper subject so you get all the information you need. For example:

[{HOST.HOST}] {TRIGGER.SEVERITY}: {TRIGGER.NAME}

Finally, edit the Zabbix admin user and add a new media of the Slack type.

That’s it!

Now all your Zabbix alerts are sent to your Slack #alerting channel \o/

PS: You can configure multiple Slack media types to send alerts to multiple webhooks, depending on the host group.

 

 

Monitoring Docker with Telegraf, InfluxDB and Grafana

A small post to talk about Telegraf and InfluxDB (aka the TICK Stack, without Chronograf and Kapacitor).

Telegraf

Telegraf is a metrics collection tool written in Go which can collect system metrics like CPU, memory and disk, as well as application metrics (Apache, Nginx, Elasticsearch, JMX, etc.).

Telegraf collects metrics from "input" plugins, converts them to the right format (InfluxDB line protocol / JSON), then sends them to "output" plugins. There are a lot of input and output plugins; you just have to enable them in the Telegraf config file.

Here I’m using the Docker input plugin to fetch all the stats from my Docker daemon (number of running containers, CPU / memory usage per container, etc.).

Here is the configuration for the Docker input plugin:

# # Read metrics about docker containers
[[inputs.docker]]
  ## Docker Endpoint 
  ## To use TCP, set endpoint = "tcp://[ip]:[port]" 
  ## To use environment variables (ie, docker-machine), set endpoint = "ENV"
  endpoint = "unix:///var/run/docker.sock" 
  ## Only collect metrics for these containers, collect all if empty 
  container_names = [] 
  ## Timeout for docker list, info, and stats commands 
  timeout = "5s" 
  ## Whether to report for each container per-device blkio (8:0, 8:1...) and
  ## network (eth0, eth1, ...) stats or not
  perdevice = true
  ## Whether to report for each container total blkio and network stats or not
  total = false
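Before plugging in the output, you can check that the plugin actually gathers data with a one-shot run (flags may vary slightly between Telegraf versions):

# run only the docker input once and print the gathered metrics to stdout
telegraf --config /etc/telegraf/telegraf.conf --input-filter docker --test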

InfluxDB

Next, I tell Telegraf to send all my metrics to InfluxDB, a time series database:

# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
 ## The full HTTP or UDP endpoint URL for your InfluxDB instance.
 ## Multiple urls can be specified as part of the same cluster,
 ## this means that only ONE of the urls will be written to each interval.
 # urls = ["udp://localhost:8089"] # UDP endpoint example
 urls = ["http://influxdb:8086"] # required
 ## The target database for metrics (telegraf will create it if not exists).
 database = "telegraf" # required
 ## Retention policy to write to. Empty string writes to the default rp.
 retention_policy = ""
 ## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
 write_consistency = "any"
 ## Write timeout (for the InfluxDB client), formatted as a string.
 ## If not provided, will default to 5s. 0s means no timeout (not recommended).
 timeout = "5s"
 # username = "telegraf"
 # password = "metricsmetricsmetricsmetrics"
 ## Set the user agent for HTTP POSTs (can be useful for log differentiation)
 # user_agent = "telegraf"
 ## Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
 # udp_payload = 512

Docker compose

To start all of these services I’m using docker-compose. I just updated my docker-compose.yml file to add the following lines:

version: '2'
services:
[...]
  telegraf:
    image: telegraf
    hostname: telegraf
    container_name: telegraf
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /data/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf
    labels:
      - "traefik.enable=false"

  influxdb:
    image: influxdb
    hostname: influxdb
    container_name: influxdb
    volumes:
      - /data/influxdb:/var/lib/influxdb
    labels:
      - "traefik.enable=false"
 
  grafana:
    hostname: grafana
    container_name: grafana
    image: grafana/grafana
    expose:
      - 3000
    volumes:
      - /data/grafana:/var/lib/grafana

I reload my compose file with the command:

docker-compose up -d
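If you want to double-check that metrics are actually landing in InfluxDB, you can query it from inside the container (assuming the container is named influxdb, as in the compose file above):

# list the measurements Telegraf has written into the telegraf database
docker exec -it influxdb influx -database telegraf -execute 'SHOW MEASUREMENTS'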

Grafana

Now, with my Docker metrics stored in InfluxDB, I can create dashboards in Grafana.

First, create an InfluxDB datasource pointing to the influxdb container (http://influxdb:8086).
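If you prefer to script this step instead of clicking through the UI, the Grafana HTTP API (or the grafana_datasource Ansible module from the earlier post) can create the datasource. A minimal curl sketch, assuming the default admin:admin credentials and Grafana reachable on localhost:3000:

# create an InfluxDB datasource through the Grafana HTTP API
# (URL and credentials below are placeholders, adapt them to your setup)
curl -s -X POST http://admin:admin@localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"influxdb","type":"influxdb","access":"proxy","url":"http://influxdb:8086","database":"telegraf"}'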

Then import some of the dashboards available from the Grafana catalog.

Here is a screenshot of the result.