Group permission misconfiguration exposes Google Kubernetes Engine clusters

Researchers warn that many admins have misunderstood the significance and scope of a default user group in Google Kubernetes Engine (GKE) and assigned dangerous permissions to it. As a result, a large number of clusters can potentially be exploited by anyone on the internet with a Google account.

At the heart of the problem is a GKE default user group called “system:authenticated.” As its name implies, this is a group that contains all authenticated users of the system — in this case the Kubernetes API server. However, in the context of GKE this actually means anyone with a Google account, whether or not they are part of your organization. Researchers from Orca Security who investigated this misconfiguration have dubbed it Sys:All because in many cases it provides extensive access to vulnerable clusters.

“These misconfigurations led to the exposure of various sensitive data types, including JWT tokens, GCP [Google Cloud Platform] API keys, AWS keys, Google OAuth credentials, and private keys,” the Orca researchers said in their report. “A notable example involved a publicly traded company where this misconfiguration resulted in extensive unauthorized access, potentially leading to system-wide security breaches.”

GKE uses a different authentication scope from other services

The problem is that in most other systems “authenticated users” are users that the administrators created or defined in the system. This is also the case in privately self-managed Kubernetes clusters and, for the most part, in clusters set up on other cloud service providers such as Azure or AWS. So, it’s not hard to see how some administrators might conclude that system:authenticated refers to a group of verified users and then decide to use it as an easy method to assign some permissions to all those trusted users.

“GKE, in contrast to Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS), exposes a far-reaching threat since it supports both anonymous and full OpenID Connect (OIDC) access,” the Orca researchers said. “Unlike AWS and Azure, GCP’s managed Kubernetes solution considers any validated Google account as an authenticated entity. Hence, system:authenticated in GKE becomes a sensitive asset administrators should not overlook.”

The Kubernetes API can integrate with many authentication systems, and since access to Google Cloud Platform, and to Google’s services in general, is handled through Google accounts, it makes sense to also integrate GKE with Google’s IAM and OAuth authentication and authorization system.

GKE also supports anonymous access: requests made to the Kubernetes API without a client certificate or an authorized bearer token are automatically executed as the “system:anonymous” user and the “system:unauthenticated” group. If a token or certificate is presented, however, the API request is identified as the corresponding identity with its defined roles, plus the roles assigned to the system:authenticated group.

By default, this group provides access to some basic discovery URLs that don’t expose sensitive information, but admins could expand the group’s permissions without realizing the implications. “Administrators might think that binding system:authenticated to a new role, to ease their managerial burden of tens or hundreds of users, is completely safe,” the researchers said. “Although this definitely makes sense at first glance, this could actually turn out to be a nightmare scenario.”

To execute authenticated requests to a GKE cluster, all a user needs to do is use Google’s OAuth 2.0 Playground and authorize their account for the Kubernetes Engine API v1. By completing the playground authorization process, any user with a Google account can obtain an authorization code that can be exchanged for an access token on the same page. This access token can then be used to send requests to any GKE cluster and successfully identify as part of system:authenticated, which carries the system:basic-user role.
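A sketch of what such a request could look like with curl, assuming TOKEN holds the access token obtained from the OAuth 2.0 Playground and CLUSTER_IP is the target cluster’s public endpoint (both are placeholders):

```shell
# Any valid Google OAuth token is treated as system:authenticated by GKE;
# whether this call succeeds depends on the roles bound to that group.
curl -sk "https://${CLUSTER_IP}/api/v1/namespaces" \
  -H "Authorization: Bearer ${TOKEN}"
```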

The system:basic-user role allows users to list the permissions they currently have, including those inherited from the system:authenticated group, by querying the SelfSubjectRulesReview API. This gives attackers a simple way to check whether a cluster’s admin has overpermissioned system:authenticated.
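kubectl can issue that query directly; a sketch of how an attacker might probe a cluster, where CLUSTER_IP and TOKEN are placeholders for the cluster’s endpoint and a Playground-issued token:

```shell
# `kubectl auth can-i --list` sends a SelfSubjectRulesReview under the hood,
# returning every permission the presented identity holds on the cluster.
kubectl --server="https://${CLUSTER_IP}" --token="${TOKEN}" \
  --insecure-skip-tls-verify auth can-i --list
```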

The Orca researchers demonstrated the impact with an example where the admin decided to associate any authenticated user with the ability to read all resources across all apiGroups in the cluster. This is “something that can be somewhat useful when there is a real governance around the users which can authenticate to the cluster, but not on GKE,” they said. “Our attacker can now, in the current settings, list all secrets in the cluster and hence achieve a real cluster compromise, acquiring all the passwords of the cluster, including service account tokens.”
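Translated into Kubernetes RBAC terms, the overpermissive setup the researchers describe would look roughly like the following hypothetical manifest (the role and binding names are illustrative):

```yaml
# Hypothetical example of the dangerous pattern described above:
# a ClusterRole granting read access to all resources in all apiGroups,
# bound to the system:authenticated group.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-everything   # illustrative name
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: authenticated-read-everything   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-everything
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
```

With a binding like this in place, every Google account holder can read the cluster’s secrets, which is the compromise scenario the researchers outline.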

Real-world impact of misconfigured permissions in Google Kubernetes Engine

To see how common this misconfiguration was, the researchers tested all the GKE clusters in one of GCP’s IP ranges. Within a week they were able to scan 250,000 active GKE clusters and identified 1,300 clusters (0.5%) that were potentially vulnerable. The number might seem small, but the researchers estimate that the 250,000 scanned clusters represent only around 2% of all available clusters on GKE, so extrapolating a misconfiguration ratio of 0.5% would result in a very large number of potentially vulnerable clusters.

Of course, not all of them would be impacted in the same way. For example, only 108 of the 1,300 allowed cluster-admin access, cluster-wide listing of secrets or cluster-wide write/delete actions. The rest allowed read permissions not only over native Kubernetes resources but also over custom resources, which can have various levels of impact depending on what those resources are. Orca notified the cluster owners it was able to identify and reported the issue to Google.
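Orca’s findings suggest a straightforward defensive check: enumerate a cluster’s role bindings and flag any that grant system:authenticated additional roles. A minimal sketch in Python, assuming binding objects shaped like the JSON that `kubectl get clusterrolebindings -o json` returns (the set of role names treated as high impact here is an illustrative assumption):

```python
# Bindings to these roles would match the high-severity cases Orca describes;
# the exact list is an assumption for illustration.
HIGH_IMPACT_ROLES = {"cluster-admin", "admin", "edit"}

def risky_bindings(bindings):
    """Return (binding name, severity) pairs for bindings that grant
    the system:authenticated group extra roles."""
    findings = []
    for b in bindings:
        subjects = b.get("subjects") or []
        if any(s.get("kind") == "Group" and s.get("name") == "system:authenticated"
               for s in subjects):
            role = b.get("roleRef", {}).get("name", "")
            severity = "high" if role in HIGH_IMPACT_ROLES else "review"
            findings.append((b["metadata"]["name"], severity))
    return findings

# Two sample bindings in the shape kubectl emits: one dangerous, one scoped
# to a specific group and therefore fine.
sample = [
    {"metadata": {"name": "bad-binding"},
     "roleRef": {"kind": "ClusterRole", "name": "cluster-admin"},
     "subjects": [{"kind": "Group", "name": "system:authenticated"}]},
    {"metadata": {"name": "ok-binding"},
     "roleRef": {"kind": "ClusterRole", "name": "view"},
     "subjects": [{"kind": "Group", "name": "dev-team@example.com"}]},
]

print(risky_bindings(sample))  # → [('bad-binding', 'high')]
```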

Mitigating dangerous permissions in Google Kubernetes Engine

According to Orca, Google responded that this is intended behavior and that it is up to organizations to ensure they don’t make this error. According to the shared responsibility model, users are responsible for configuring access controls. However, Google did block the binding of the system:authenticated group to the cluster-admin role in GKE versions 1.28 and higher and plans to notify users about this possible misconfiguration.

Organizations are strongly advised to upgrade to this GKE version and to practice the principle of least privilege when assigning permissions: permissions should be as granular as possible for every user based on their role in the system, not assigned in bulk via groups like system:authenticated.
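One way to check an existing cluster for this pattern, assuming `kubectl` access to the cluster and the `jq` tool are available (CLUSTER_NAME and ZONE are placeholders):

```shell
# List ClusterRoleBindings whose subjects include the system:authenticated group.
kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(any(.subjects[]?; .name == "system:authenticated"))
           | .metadata.name'

# Upgrade the control plane to 1.28+, where binding system:authenticated
# to cluster-admin is blocked.
gcloud container clusters upgrade CLUSTER_NAME --master \
  --cluster-version=1.28 --zone=ZONE
```

Any binding the first command reports should be reviewed and, where possible, rescoped to a specific group or set of users.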

“Google is indeed right,” the researchers said. “Organizations should take responsibility and not deploy their assets and permissions in a way that carries security risks and vulnerabilities. However, the scope of the system:authenticated group is a broadly misunderstood concept with acute consequences, which has been verified as actionable and fruitful. […] This is not very different from the open S3 bucket exploitation phenomenon, which made Amazon take action — even if it took years. The only difference is that at this point, we don’t have any public record of a large-scale attack utilizing this attack vector, but this is most probably just a matter of time.”

Access Control, Authentication, Configuration Management
