Spinning up a highly available Prometheus setup with Thanos

The Problem

Prometheus has become a standard tool in modern monitoring stacks thanks to its simple, reliable architecture and ease of use. Despite this, the tool has some shortcomings at scale. When trying to scale Prometheus, one major issue you quickly run into is the problem of cross-shard visibility.

Prometheus encourages a functional sharding approach: running separate servers for separate concerns (for example, per team, service, or datacenter) rather than splitting one dataset horizontally. Even a single Prometheus server provides enough scalability to free users from the complexity of horizontal sharding in virtually all use cases.

While this is a great deployment model, you often want to access all the data through the same API or UI – that is, a global view. For example, you can render multiple queries in a Grafana graph, but each query can be done only against a single Prometheus server.

The Solution

Thanos is an open-source, highly available Prometheus setup with long-term storage capabilities that seeks to act as a “silver bullet” for some of the shortcomings that plague vanilla Prometheus setups. Thanos lets users aggregate data from multiple Prometheus servers natively through the Prometheus query API, efficiently compact it, and, most importantly, de-duplicate it.

Thanos introduces a central query layer across all of these servers: a Sidecar component sits alongside each Prometheus server, and a central Querier component responds to PromQL queries. Together, these make up a basic Thanos deployment.

Background

Following the KISS and Unix philosophies, Thanos is made of a set of components with each filling a specific role.

  • Sidecar: connects to Prometheus, reads its data for query and/or uploads it to cloud storage.
  • Store Gateway: serves metrics inside of a cloud storage bucket.
  • Compactor: compacts, downsamples and applies retention on the data stored in the cloud storage bucket.
  • Receiver: receives data from Prometheus’ remote-write WAL, exposes it and/or uploads it to cloud storage.
  • Ruler/Rule: evaluates recording and alerting rules against data in Thanos for exposition and/or upload.
  • Querier/Query: implements Prometheus’ v1 API to aggregate data from the underlying components.
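This tutorial focuses on the Sidecar and the Querier, but to give a feel for the remaining components, here is a rough sketch of how the Store Gateway and Compactor are pointed at the same bucket configuration the Sidecar uses (bucket_config.yaml, shown later; the ports and paths below are illustrative):

# Serve metrics from the bucket over the Store API
thanos store \
    --objstore.config-file bucket_config.yaml \
    --grpc-address         0.0.0.0:19090 \
    --http-address         0.0.0.0:19191

# Compact, downsample, and apply retention to the data in the bucket
thanos compact \
    --objstore.config-file bucket_config.yaml \
    --data-dir             /tmp/thanos-compact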

Thanos integrates with existing Prometheus servers through a Sidecar process, which runs on the same machine or in the same pod as the Prometheus server.

The purpose of the Sidecar is to back up Prometheus data into an object storage bucket and to give other Thanos components access to the Prometheus metrics via a gRPC API.

The Sidecar makes use of Prometheus’ reload endpoint. Make sure it’s enabled with the --web.enable-lifecycle flag.
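For example, assuming Prometheus is started manually (the config and data paths below are placeholders), the flag is passed at startup. The two block-duration flags follow the Thanos recommendation to keep blocks at 2h so that Prometheus’ local compaction doesn’t conflict with the Sidecar’s uploads:

prometheus \
    --config.file=/etc/prometheus/prometheus.yml \
    --storage.tsdb.path=/var/prometheus \
    --storage.tsdb.min-block-duration=2h \
    --storage.tsdb.max-block-duration=2h \
    --web.enable-lifecycle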

Installing Thanos

Prerequisites

To install Thanos you’ll need:

  • One or more Prometheus v2.2.1+ installations with a persistent disk.
  • Optional object storage.

The easiest way to deploy Thanos for the purposes of this tutorial is to deploy the Thanos sidecar along with Prometheus using the official Helm chart.

To deploy both, put the values into a file named values.yaml, change the --namespace value, and run the following command:

helm upgrade --version="8.6.0" --install --namespace="my-lovely-namespace" --values values.yaml prometheus-thanos-sidecar stable/prometheus

Note that you need to replace two placeholders in the values: BUCKET_REPLACE_ME and CLUSTER_NAME. Also adjust all the other values to match your infrastructure requirements.
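The full values aren’t reproduced here, but as a rough sketch (assuming a chart version that exposes the server.sidecarContainers hook; the image tag and mount paths are illustrative, not taken from the chart’s defaults), the Thanos-relevant parts of values.yaml look something like this:

server:
  extraArgs:
    # Keep blocks at 2h so local compaction doesn't race the sidecar's uploads
    storage.tsdb.min-block-duration: 2h
    storage.tsdb.max-block-duration: 2h
  sidecarContainers:
    - name: thanos-sidecar
      image: quay.io/thanos/thanos:v0.10.1
      args:
        - sidecar
        - --tsdb.path=/data
        - --prometheus.url=http://localhost:9090
        - --objstore.config-file=/etc/thanos/bucket_config.yaml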

External Storage

The following configures the sidecar to write Prometheus’ data into a configured object storage:

# --tsdb.path is the TSDB data directory of Prometheus.
# Be sure that the sidecar can reach Prometheus at --prometheus.url!
# bucket_config.yaml holds the storage configuration for uploading data.
thanos sidecar \
    --tsdb.path            /var/prometheus \
    --prometheus.url       "http://localhost:9090" \
    --objstore.config-file bucket_config.yaml

The format of the YAML file depends on the provider you choose. Example configurations and an up-to-date list of the storage types Thanos supports are available in the Thanos object storage documentation.
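As one concrete example, a minimal S3-compatible bucket_config.yaml looks like this (the bucket name, endpoint, and credentials are placeholders):

type: S3
config:
  bucket: my-thanos-bucket
  endpoint: s3.eu-west-1.amazonaws.com
  access_key: ACCESS_KEY
  secret_key: SECRET_KEY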

Rolling this out has little to no impact on the running Prometheus instance. It is a good start to ensure you are backing up your data while figuring out the other pieces of Thanos.

Deduplicating data from Prometheus HA pairs

The Query component is also capable of deduplicating data collected from Prometheus HA pairs. This requires configuring Prometheus’s global.external_labels configuration block to identify the role of a given Prometheus instance.

A typical choice is simply the label name “replica” while letting the value be whatever you wish. For example, you might set up the following in Prometheus’s configuration file:

global:
  external_labels:
    region: eu-west
    monitor: infrastructure
    replica: A
# ...

In a Kubernetes stateful deployment, the replica label can also be the pod name.

Reload your Prometheus instances, then tell Query which label to deduplicate on by passing it as a replica label:

# --query.replica-label names the label to de-duplicate on.
# Pass the flag multiple times to de-duplicate across several replica labels.
thanos query \
    --http-address        0.0.0.0:19192 \
    --store               1.2.3.4:19090 \
    --store               1.2.3.5:19090 \
    --query.replica-label replica \
    --query.replica-label replicaX

Go to the configured HTTP address, and you should now be able to query across all Prometheus instances and receive de-duplicated data.
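For example, with the Querier from the snippet above running locally, you can exercise its Prometheus-compatible HTTP API directly; the dedup parameter toggles replica de-duplication:

# Query across all connected Prometheus instances, with de-duplication on
curl 'http://localhost:19192/api/v1/query?query=up&dedup=true'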

Next Steps

At this point, you should have an idea of how Thanos approaches the task of solving Prometheus’ shortcomings. Thanos extends Prometheus with a sidecar component and a central query layer, acting as a long-term metrics store that can de-duplicate your metric data.

I hope this overview has helped you gain valuable context surrounding Thanos and the issues it solves. Thanks for reading!
