Best Practices for Multitenant SaaS Application Using Gcore Managed Kubernetes

Many organizations host their software-as-a-service (SaaS) applications on a managed Kubernetes service using a multitenant architecture. This architecture shares infrastructure among tenants while isolating each tenant's computing, network, and storage resources, keeping workloads secure. When adopting multitenancy, it is important to consider the possible challenges of sharing these resources. Read on to discover the best practices and considerations for multitenant SaaS applications running on Gcore Managed Kubernetes.

What Is Multitenant Architecture?

Multitenant architecture is a design approach that allows multiple groups of users, called tenants, to share a single instance of infrastructure. Tenants can be teams or individuals within an organization. This architecture is particularly beneficial for SaaS companies because it helps them scale applications efficiently and cost-effectively. It offers two main benefits:

  • You share the same infrastructure while maintaining private and secure environments for each tenant.
  • You can focus on specific infrastructure instances or application microservices and apply effort and resources in a targeted manner rather than across the entire infrastructure.

In Kubernetes, infrastructure instances can refer to pod, storage, or network resources. They are usually organized into separate namespaces, which allows an organization to share the same cluster resources while maintaining private and secure environments for each tenant.

Practice 1: Separate Namespaces for Each Tenant

Namespaces are the primary unit of isolation in Kubernetes. Separate namespaces are essential when deploying a multitenant SaaS application, because a single cluster's resources are shared among multiple tenants. Kubernetes lets you create a separate namespace for each tenant running the SaaS application, so you don't need a dedicated cluster per tenant. This ultimately results in cost savings on computing resources.

Figure 1: You can isolate different K8s resources, such as pods and services, by namespace
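As a starting point, you can create one namespace per tenant declaratively. The manifest below is a minimal sketch; the nsname label is an assumed naming convention that the network policies in Practice 3 select on:

apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a          # one namespace per tenant
  labels:
    nsname: tenant-a      # assumed label, matched by network policies later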

Practice 2: Setting Resource Quotas on Resource Consumption

A multitenant SaaS application serves multiple tenants, each of which simultaneously accesses the same Kubernetes cluster resources. There may be scenarios where a particular tenant consumes all cluster resources, leaving none for the others. To avoid such capacity failures, use resource quotas. They let you limit the resources that the pods and other objects hosting your SaaS application can consume in a namespace, ensuring that no single tenant uses up everything.

Here is an example of a resource quota:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota-setting
  namespace: tenant1
spec:
  hard:
    # Aggregate limits for all pods in the tenant1 namespace
    requests.cpu: "2"
    requests.memory: "1Gi"
    limits.cpu: "4"
    limits.memory: "2Gi"

This manifest caps the aggregate consumption of the tenant1 namespace: the combined CPU requests of all its pods can't exceed 2 CPUs, and their combined CPU limits can't exceed 4. The same logic applies to memory (1 GiB of requests, 2 GiB of limits). Once such a quota is active, every new pod in the namespace must specify CPU and memory requests and limits, or the API server rejects it.
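After applying the quota, you can check how much of it a tenant has already consumed. A quick sketch, assuming the ResourceQuota above has been created:

kubectl describe resourcequota resource-quota-setting --namespace tenant1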

Practice 3: Network Isolation Using Network Policies

By default, Kubernetes allows pods to communicate across namespaces. If your SaaS application runs in a multitenant architecture, you may want to remove this openness and isolate tenants' namespaces from each other. You can do this by using network policies.

For example, the Calico CNI available in Gcore Managed Kubernetes enforces standard Kubernetes network policies, so you can restrict pod traffic as shown below:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace
  namespace: tenant-a
spec:
  # Applies to pods labeled app: api in the tenant-a namespace
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  # Allow traffic only to and from namespaces labeled nsname: tenant-a
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          nsname: tenant-a
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          nsname: tenant-a

Pods with the app: api label in the tenant-a namespace can now only send and receive traffic within that namespace and can't communicate with other namespaces (tenants), achieving network isolation.
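The policy matches peer namespaces by their nsname label. If you created the namespace with that label as in Practice 1, nothing else is needed; otherwise, you can add the label to an existing namespace:

kubectl label namespace tenant-a nsname=tenant-a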

If you want more detailed and flexible multitenant network policies, we recommend Cilium as the CNI provider, which Gcore Managed Kubernetes also supports. Cilium offers advanced networking features, including L7 policies, high-performance L4 load balancing, and a sidecar-free service mesh.

Here is an example of a Cilium policy that allows ingress traffic to pods labeled name: leia in the ns1 namespace only from pods labeled name: luke in the ns2 namespace:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "k8s-expose-across-namespace"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      name: leia
  ingress:
  - fromEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: ns2
        name: luke
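Because Cilium understands L7 traffic, you can also filter by HTTP method and path. The manifest below is a sketch with assumed values: the policy name, port, method, and path are illustrative. It restricts luke's pods in ns2 to GET requests on /public/ paths of leia's pods:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l7-public-read-only"   # hypothetical policy name
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      name: leia
  ingress:
  - fromEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: ns2
        name: luke
    toPorts:
    - ports:
      - port: "80"              # assumed application port
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public/.*"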

Practice 4: Storage Isolation Using Persistent Volume and Persistent Volume Claim

With Gcore Managed Kubernetes, you can allocate and manage storage for multiple tenants using the persistent volume (PV) and persistent volume claim (PVC) API resources. A PV is a piece of storage provisioned in the cluster, while a PVC is a user's request for that piece of storage; in our case, the user is a tenant. Since a PVC is a namespaced resource, you can easily isolate storage between tenants.

In the example below for tenant1, we've configured the PVC with the ReadWriteOnce access mode and a request for 2 GiB of storage.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-storage
  namespace: tenant1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
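To use the claim, a pod in the same namespace mounts it as a volume. Here is a minimal sketch; the pod name, image, and mount path are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: tenant1-app            # hypothetical pod name
  namespace: tenant1
spec:
  containers:
  - name: app
    image: nginx               # illustrative image
    volumeMounts:
    - name: data
      mountPath: /data         # illustrative mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-storage   # the PVC defined above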

Practice 5: Manage Tenant Placement on Kubernetes Nodes

Kubernetes supports taints and tolerations to manage pod placement on nodes and to keep pods off inappropriate nodes. A taint applied to a node tells the scheduler to reject any pod that doesn't tolerate it. This lets you decide on which nodes a particular tenant's pods can run, and whether pods of different tenants share the same nodes or stay on separate ones.
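For example, you could dedicate a node to tenant1 by tainting it. This is a sketch; the node name node1 is an assumption:

kubectl taint nodes node1 client=tenant1:NoSchedule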

With the taint in place, node1 accepts no pods except those that tolerate it. The Pod manifest below adds a matching toleration, where the key client must equal tenant1, so this pod can be scheduled on node1:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    env: prod
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    imagePullPolicy: IfNotPresent
  # Allows this pod onto nodes tainted with client=tenant1:NoSchedule
  tolerations:
  - key: "client"
    operator: "Equal"
    value: "tenant1"
    effect: "NoSchedule"
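Keep in mind that the taint only keeps other tenants' pods off node1; it doesn't force tenant1's pods onto it. To pin a tenant's workloads to dedicated nodes, combine the toleration with a nodeSelector or node affinity rule.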

Gcore Managed Kubernetes

The above methods are available for SaaS applications running on Gcore Managed Kubernetes—try it out if you want a secure, high-performance, and scalable service. In addition to virtual machine-based clusters, we offer bare metal clusters, including worker nodes powered by NVIDIA GPUs for AI/ML workloads. Prices for worker nodes are the same as for our Virtual Machines and Bare Metal servers. We also provide free, production-grade cluster management with a 99.9% SLA for your peace of mind. Our engineers are always ready to help you with any challenge or ambitious project and are happy to share their expertise in multitenant best practices.

Conclusion

You have multiple options for isolating compute, network, and storage resources when running a multitenant SaaS application in Gcore Managed Kubernetes. You can share the same infrastructure with different tenants while maintaining private and secure environments for each of them. Check out these practices and let us know what you think!

Explore Gcore Managed Kubernetes
