Debugging K8s infrastructure locally with Telepresence

What is Telepresence?

With so many moving parts and services in the mix, it’s no wonder that debugging a Kubernetes microservice can be a major pain for any engineer who has had to do it. That said, what if it didn’t have to be? I would like to introduce you to a CNCF Incubation project created by Datawire that has caught my interest recently and has really become a handy tool in my cloud-native toolbox.

Until now, telepresence has been a word describing the sensation of being somewhere else, created through virtual reality technology. That is exactly why I think it is such a fitting name for a tool that makes you feel like you are “inside” your Kubernetes cluster while working locally.

Telepresence is an open-source tool that lets you run a single service locally while connecting that service to a remote Kubernetes cluster. This allows developers working on multi-service applications to:

  1. Do fast local development of a single service, even if that service depends on other services in your cluster. Make a change to your service, save, and you can immediately see the new service in action.
  2. Use any tool installed locally to test/debug/edit your service. For example, you can use a debugger or IDE!
  3. Make your local development machine operate as if it’s part of your Kubernetes cluster. If you’ve got an application on your machine that you want to run against a service in the cluster, it’s easy to do (see the sketch just after this list).
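
To make that last point concrete, here is a minimal sketch, assuming Telepresence 1.x with its --expose and --run flags and Python 3 installed locally (the example-local name is just an illustration):

# Run a throwaway local web server and expose it to the cluster on port 8080;
# the local process can also resolve and call in-cluster service names.
telepresence --new-deployment example-local --expose 8080 --run python3 -m http.server 8080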

How it works

Telepresence deploys a two-way network proxy in a pod running in your Kubernetes cluster. This pod proxies data from your Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. The local process has its networking transparently overridden so that DNS calls and TCP connections are routed over the proxy to the remote Kubernetes cluster.

This approach gives:

  • your local service full access to other services in the remote cluster
  • your local service full access to Kubernetes environment variables, Secrets, and ConfigMaps (illustrated in the sketch after this list)
  • your remote services full access to your local service
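
A rough illustration of the first two points, assuming Telepresence 1.x and the myservice demo Deployment used later in this post:

$ telepresence --run-shell
# Inside the proxied shell, cluster DNS and the pod's environment are available
# as if you were running inside the cluster:
$ curl http://myservice:8000/        # reach an in-cluster service by name
$ env | grep MYSERVICE_SERVICE       # standard Kubernetes service env vars, if present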

Installing Telepresence

OS X

On OS X you can install Telepresence by running the following:

brew cask install osxfuse
brew install datawire/blackbird/telepresence

Ubuntu 16.04 or later

On Ubuntu 16.04 and up, you can run the following to install Telepresence:

curl -s https://packagecloud.io/install/repositories/datawireio/telepresence/script.deb.sh | sudo bash
sudo apt install --no-install-recommends telepresence

If you are running another Debian-based distribution that has Python 3.5 installable as python3, you may be able to use the Ubuntu 16.04 (Xenial) packages. The following works on Linux Mint 18.2 (Sonya) and Debian 9 (Stretch) by forcing the PackageCloud installer to access Xenial packages:

curl -sO https://packagecloud.io/install/repositories/datawireio/telepresence/script.deb.sh
sudo env os=ubuntu dist=xenial bash script.deb.sh
sudo apt install --no-install-recommends telepresence
rm script.deb.sh

A similar approach may work on Debian-based distributions with Python 3.6 by using the Ubuntu 17.10 (Artful) packages.

Fedora 26 or later

Run the following:

curl -s https://packagecloud.io/install/repositories/datawireio/telepresence/script.rpm.sh | sudo bash
sudo dnf install telepresence

If you are running a Fedora-based distribution that has Python 3.6 installable as python3, you may be able to use Fedora packages. See the Ubuntu section above for information on how to invoke the PackageCloud installer script to force OS and distribution.

Install from source

On systems with Python 3.5 or newer, install into /usr/local/share/telepresence and /usr/local/bin by running:

sudo env PREFIX=/usr/local ./install.sh

To finish, install the software listed in the Dependencies section below. You can install into arbitrary locations by setting other environment variables before calling the install script. After installation, you can safely delete the source code.
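
For example, to install under your home directory instead of /usr/local (a sketch, assuming install.sh honors any PREFIX the same way):

env PREFIX=$HOME/.local ./install.sh      # no sudo needed for a user-writable prefix
export PATH="$HOME/.local/bin:$PATH"      # make the telepresence command discoverable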

Dependencies

If you install Telepresence using a pre-built package, dependencies other than kubectl are handled by the package manager. If you install from source, you will also need to install the following software (a quick way to check what’s missing follows the list).

  • kubectl (OpenShift users can use oc)
  • Python 3.5 or newer
  • OpenSSH (the ssh command)
  • sshfs to mount the pod’s filesystem
  • conntrack and iptables on Linux for the vpn-tcp method
  • torsocks for the inject-tcp method
  • Docker for the container method
  • sudo to allow Telepresence to:
      • modify the local network (via sshuttle and pf/iptables) for the vpn-tcp method
      • run the docker command in some configurations on Linux
      • mount the remote filesystem for access in a Docker container
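
A quick way to see which of these are missing on a Linux machine (a sketch; trim the list to the proxying methods you actually plan to use):

for cmd in kubectl python3 ssh sshfs conntrack iptables torsocks docker; do
  command -v "$cmd" >/dev/null || echo "missing: $cmd"
done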

Getting Started

Telepresence offers a broad set of proxying options that have different strengths and weaknesses. The container method is the recommended one, since it provides the most consistent environment for your code; to keep things simple, though, the hands-on walkthrough below uses the default --run/--run-shell workflow.
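
For reference, a container-method run looks roughly like this (a sketch, assuming Telepresence 1.x and its --docker-run flag; alpine is just a stand-in image):

# Run a local Docker container with its networking proxied into the cluster.
telepresence --new-deployment example-proxy --docker-run --rm -it alpine:3 sh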

Now let’s learn how to use Telepresence to connect to and debug our Kubernetes clusters locally by getting hands-on!

Telepresence gives a local process transparent access to a remote cluster, which lets you use the tools on your laptop to communicate with processes running inside the cluster.

Connecting to the remote cluster

You should start by running a demo service in your cluster:

$ kubectl run myservice --image=datawire/hello-world --port=8000 --expose
$ kubectl get service myservice
NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
myservice   10.0.0.12    <none>        8000/TCP   1m

If your cluster is in the cloud and your Service is exposed externally (for example, via a LoadBalancer), you can find its public address like this (shown here for a Service named hello-world):

$ kubectl get service hello-world
NAME          CLUSTER-IP     EXTERNAL-IP       PORT(S)          AGE
hello-world   10.3.242.226   104.197.103.123   8000:30022/TCP   5d

If you see <pending> under EXTERNAL-IP, wait a few seconds and try again. In this case, the Service is exposed at http://104.197.103.123:8000/.

It may take a minute or two for the pod running the server to be up and running, depending on how fast your cluster is.

Once you know the address, you can store it in an environment variable (don’t forget to replace this with the real address!):

$ export HELLOWORLD=http://104.197.103.123:8000
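
Since this is the Service’s public address, you can sanity-check it with a plain curl, no Telepresence involved yet:

$ curl $HELLOWORLD/
Hello, world!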

You can now run a local process using Telepresence that can access that service, even though the process runs locally while the service runs in the Kubernetes cluster:

$ telepresence --run curl http://myservice:8000/
Hello, world!

(This will not work if the hello-world pod hasn’t started yet; if that’s the case, wait a moment and try again.)

What’s going on:

  1. Telepresence creates a new Deployment, which runs a proxy.
  2. Telepresence runs curl locally in a way that proxies networking through that Deployment.
  3. The DNS lookup and HTTP request done by curl get routed through the proxy and transparently access the cluster… even though curl is running locally.
  4. When curl exits, the new Deployment is cleaned up (you can watch this happen with the commands below).
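
You can watch steps 1 and 4 happen by keeping a longer-lived session open in one terminal and listing Deployments in another (the proxy Deployment’s name is generated by Telepresence, so your output will vary):

# Terminal 1: keep a proxied shell open
$ telepresence --run-shell

# Terminal 2: an extra proxy Deployment shows up while the shell is open
$ kubectl get deployments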

Setting up the proxy

To use Telepresence with a cluster (Kubernetes or OpenShift, local or remote) you need to run a proxy inside the cluster. Two ways of doing so are covered below.

Creating a new deployment

By using the --new-deployment option, telepresence can create a new Deployment for you. It will be deleted when the local telepresence process exits. This is the default behavior if no deployment option is specified.

For example, this creates a Deployment called myserver:

telepresence --new-deployment myserver --run-shell

This will create two Kubernetes objects, a Deployment and a Service, both named myserver. (On OpenShift a DeploymentConfig will be used instead of Deployment.) Or, if you don’t care what your new Deployment is called, you can do:

telepresence --run-shell

Running Telepresence manually

You can also choose to run Telepresence manually by starting a Deployment that runs the proxy in a pod.

The Deployment should have only 1 replica and use the dedicated Telepresence proxy image:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 1  # only one replica
  template:
    metadata:
      labels:
        name: myservice
    spec:
      containers:
      - name: myservice
        image: datawire/telepresence-k8s:0.103  # the Telepresence proxy image

Save this as telepresence-deployment.yaml and apply it to your cluster:

kubectl apply -f telepresence-deployment.yaml

Next, you need to run the local Telepresence client on your machine, using --deployment to indicate the name of the Deployment object whose pod is running the datawire/telepresence-k8s image:

telepresence --deployment myservice --run-shell

Telepresence will leave the Deployment untouched when it exits, so you are responsible for cleaning it up once you no longer need the proxy.
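
Since the Deployment is left in place, clean it up yourself when you are done (assuming you saved the manifest as telepresence-deployment.yaml, as above):

kubectl delete -f telepresence-deployment.yaml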

And that’s it! You’re now ready to debug and work with your cluster locally with Telepresence. I hope you enjoyed this introduction, and if you want to dive deeper into the tool, check out the official Telepresence documentation.
