Kubernetes Deployment with Terraform
Terraform is a very powerful tool for creating compute, network, and storage resources on any public cloud provider. It has a declarative language, so what you write is what you get. For example, if you remove a compute server from your Terraform configuration file, the next time you apply your configuration, the server will be destroyed.
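For example, a compute server is just a block of configuration. A hypothetical sketch (the AMI ID is a placeholder):
resource "aws_instance" "web" {
  # Placeholder AMI ID and instance type. Declaring this block creates
  # the server; deleting the block and re-applying destroys the server.
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"
}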
Terraform comes with a lot of providers, from the major cloud providers to GitLab project management, PostgreSQL databases, or DNS providers. There is also a list of community-maintained providers.
Usually, to deploy something in a Kubernetes cluster, you write a kubectl configuration file with all your resources, or use a Helm chart. By default, Helm is not really secure: anyone who has access to your Tiller pod can deploy a chart in your cluster. That's why we chose to avoid Helm charts and use Terraform instead.
Terraform provides a Kubernetes provider that allows you to create Kubernetes objects in your Kubernetes cluster:
- namespace
- configMap
- secrets
- deployments
- services
- etc.
Unfortunately, the official Kubernetes provider does not provide all the resources, such as Ingresses or cluster roles. That's why we chose to use a fork of this provider, available here: https://github.com/sl1pm4t/terraform-provider-kubernetes/tree/custom/kubernetes
To install Terraform on your computer, just download the latest binary from the Terraform.io website, copy it somewhere in your $PATH (e.g. /usr/local/bin), and make it executable. Terraform can usually download and install official plugins automatically when you run terraform init. The forked Kubernetes provider, however, has to be installed manually, like this:
mkdir -p ~/.terraform.d/plugins/linux_amd64 &&
rm -rf .terraform &&
wget https://github.com/sl1pm4t/terraform-provider-kubernetes/releases/download/v1.3.0-custom/terraform-provider-kubernetes_v1.3.0-custom_linux_amd64.zip -O kubernetes.zip &&
unzip -o kubernetes.zip -d ~/.terraform.d/plugins/linux_amd64/ &&
rm -f kubernetes.zip &&
terraform init
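Once the plugin is installed, the provider still needs to know how to reach your cluster. A minimal sketch, assuming your credentials live in the default kubeconfig file:
provider "kubernetes" {
  # Path to the kubeconfig file used to authenticate against the cluster;
  # adjust this (or use host/token attributes) for CI or in-cluster usage.
  config_path = "~/.kube/config"
}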
Grafana
Now that we have installed Terraform and the required Kubernetes provider plugin, let's deploy Grafana in a dedicated monitoring namespace:
# Namespace
resource "kubernetes_namespace" "monitoring" {
  metadata {
    annotations {
      name = "monitoring"
    }
    name = "monitoring"
  }
}
# Secret
resource "kubernetes_secret" "grafana-secret" {
  metadata {
    name      = "grafana-secret"
    namespace = "${kubernetes_namespace.monitoring.metadata.0.name}"
  }
  data {
    "grafana-root-password" = "${file("${path.module}/monitoring/grafana-root-password")}"
    "datasources.yml"       = "${file("${path.module}/monitoring/grafana/datasources.yml")}"
    "dashboards.yml"        = "${file("${path.module}/monitoring/grafana/dashboards.yml")}"
  }
  type = "Opaque"
}
resource "kubernetes_config_map" "grafana-dashboards" {
metadata {
name = "grafana-dashboards"
namespace = "${kubernetes_namespace.monitoring.metadata.0.name}"
}
data {
"my-dashboard.json" = "${file("${path.module}/monitoring/grafana/my-dashboard.json")}"
}
}
# Persistent Volume Claim
resource "kubernetes_persistent_volume_claim" "grafana-pv-claim" {
  metadata {
    name      = "grafana-pv-claim"
    namespace = "${kubernetes_namespace.monitoring.metadata.0.name}"
  }
  spec {
    resources {
      requests {
        storage = "5Gi"
      }
    }
    access_modes = ["ReadWriteOnce"]
  }
}
# Deployment
resource "kubernetes_deployment" "grafana" {
  metadata {
    name      = "grafana"
    namespace = "${kubernetes_namespace.monitoring.metadata.0.name}"
    labels {
      app = "grafana"
    }
  }
  spec {
    replicas = "1"
    selector {
      app = "grafana"
    }
    strategy {
      type = "Recreate"
    }
    template {
      metadata {
        labels {
          app  = "grafana"
          name = "grafana"
        }
      }
      spec {
        container {
          image = "mirror.gcr.io/grafana/grafana:latest"
          name  = "grafana"
          liveness_probe {
            tcp_socket {
              port = 3000
            }
            failure_threshold     = 3
            initial_delay_seconds = 3
            period_seconds        = 10
            success_threshold     = 1
            timeout_seconds       = 2
          }
          readiness_probe {
            tcp_socket {
              port = 3000
            }
            failure_threshold     = 1
            initial_delay_seconds = 10
            period_seconds        = 10
            success_threshold     = 1
            timeout_seconds       = 2
          }
          resources {
            limits {
              cpu    = "200m"
              memory = "256M"
            }
          }
          port {
            name           = "http"
            container_port = 3000
            protocol       = "TCP"
          }
          env = [
            {
              name = "GF_SECURITY_ADMIN_PASSWORD"
              value_from {
                secret_key_ref {
                  key  = "grafana-root-password"
                  name = "${kubernetes_secret.grafana-secret.metadata.0.name}"
                }
              }
            },
            {
              name  = "GF_INSTALL_PLUGINS"
              value = "grafana-clock-panel,grafana-simple-json-datasource,grafana-piechart-panel"
            },
            {
              name  = "GF_PATH_PROVISIONING"
              value = "/etc/grafana/provisioning"
            },
          ]
          volume_mount {
            mount_path = "/var/lib/grafana"
            name       = "grafana-volume"
          }
          volume_mount {
            mount_path = "/etc/grafana/provisioning/datasources/"
            name       = "grafana-config-datasources"
          }
          volume_mount {
            mount_path = "/etc/grafana/provisioning/dashboards/"
            name       = "grafana-config-dashboards"
          }
          volume_mount {
            mount_path = "/etc/grafana/dashboards"
            name       = "grafana-dashboards"
          }
        }
        volume {
          name = "grafana-volume"
          persistent_volume_claim {
            claim_name = "${kubernetes_persistent_volume_claim.grafana-pv-claim.metadata.0.name}"
          }
        }
        volume {
          name = "grafana-config-datasources"
          secret {
            secret_name = "grafana-secret"
            items {
              key  = "datasources.yml"
              path = "datasources.yml"
            }
          }
        }
        volume {
          name = "grafana-config-dashboards"
          secret {
            secret_name = "grafana-secret"
            items {
              key  = "dashboards.yml"
              path = "dashboards.yml"
            }
          }
        }
        volume {
          name = "grafana-dashboards"
          config_map {
            name = "grafana-dashboards"
          }
        }
        # Must be set for persistent volume permissions
        # See http://docs.grafana.org/installation/docker/#user-id-changes
        security_context {
          fs_group = "472"
        }
      }
    }
  }
}
# Service
resource "kubernetes_service" "grafana" {
  metadata {
    name      = "grafana"
    namespace = "${kubernetes_namespace.monitoring.metadata.0.name}"
  }
  spec {
    selector {
      app = "grafana"
    }
    port {
      name        = "http"
      port        = 3000
      protocol    = "TCP"
      target_port = 3000
    }
  }
}
# Ingress
resource "kubernetes_ingress" "grafana" {
  metadata {
    annotations {
      "kubernetes.io/ingress.class" = "traefik"
    }
    name      = "grafana"
    namespace = "${kubernetes_namespace.monitoring.metadata.0.name}"
  }
  spec {
    rule {
      host = "grafana.aperogeek.fr"
      http {
        path {
          backend {
            service_name = "${kubernetes_service.grafana.metadata.0.name}"
            service_port = 3000
          }
        }
      }
    }
  }
}
As you can see, the Terraform syntax is almost the same as the kubectl YAML; you mostly just have to transform camelCase into snake_case!
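For example, the containerPort field of a YAML container spec becomes container_port in HCL:
# kubectl YAML:
#   ports:
#     - containerPort: 3000
#       protocol: TCP
# Terraform HCL:
port {
  container_port = 3000
  protocol       = "TCP"
}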
I've created a Kubernetes secret which contains the Grafana root password (read from a file on my disk), as well as the Grafana provisioning configuration for datasources and dashboards.
There is also a configMap where I load all my dashboards from a directory (I've put only one example here), so when Grafana starts, all my dashboards and datasources are already present!
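The exact dashboards.yml is not shown in this post, but a minimal sketch of what it could contain, inlined here as a Terraform heredoc for illustration, would point Grafana at the configMap mount path:
locals {
  # Hypothetical equivalent of monitoring/grafana/dashboards.yml: tells
  # Grafana to load every JSON dashboard found under /etc/grafana/dashboards.
  dashboards_yml = <<EOF
apiVersion: 1
providers:
  - name: default
    type: file
    options:
      path: /etc/grafana/dashboards
EOF
}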
After that, there is a classic Kubernetes deployment which mounts the previously created secret and configMap as volumes.
Finally, to reach the web interface, I've added a service and an ingress with the lovely Traefik Ingress Controller.
The datasources.yml and dashboards.yml files are Grafana provisioning configuration files: https://grafana.com/docs/administration/provisioning/. With these in place, you can load your dashboards from JSON files (export them from the Grafana UI first).
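A minimal datasources.yml sketch, assuming a Prometheus server reachable in-cluster at http://prometheus:9090 (again inlined as a heredoc for illustration):
locals {
  # Hypothetical equivalent of monitoring/grafana/datasources.yml.
  datasources_yml = <<EOF
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
EOF
}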