As a Google Cloud Administrator planning your IAM strategy around the built-in Google Kubernetes Engine (GKE) IAM Roles, there are a few details that might be confusing or surprising and could have unintended consequences.
Clarifying the GKE Predefined IAM Roles
Here’s a listing of the predefined IAM Roles for Kubernetes Engine purposes, their official descriptions (as of the time of this writing), and a few notes on potential areas for confusion:
- Kubernetes Engine Admin
- Description: “Full management of Kubernetes Clusters and their Kubernetes API objects.”
- Intended use: Typically assigned to users or service accounts, such as a platform team, that manage “everything” about GKE clusters in a project. They have the ability to create/manage/destroy clusters and node pools, plus create/manage/destroy all workloads on them.
- Point of confusion: Commonly confused with Kubernetes Engine Cluster Admin during assignment, but because it contains all of that role’s permissions, nothing “breaks”; the assignment simply no longer follows least privilege.
- Kubernetes Engine Cluster Admin
- Description: “Management of Kubernetes Clusters.”
- Intended use: Typically assigned to users or automation accounts that are just responsible for creating/managing/destroying GKE clusters but should not have access to the workloads running on them via this role.
- Points of confusion: Commonly confused with the Kubernetes-native cluster-admin ClusterRole, which grants full access to all Kubernetes API resources; this IAM role actually includes no direct in-cluster permissions. However, combining this IAM role with an RBAC ClusterRoleBinding can grant near-equivalent permissions to the IAM-only Kubernetes Engine Admin approach.
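As a sketch of that combination (project, user, and binding names here are illustrative, not from the original post):

```shell
# Grant only cluster lifecycle management via IAM:
gcloud projects add-iam-policy-binding my-project \
  --member="user:alice@example.com" \
  --role="roles/container.clusterAdmin"

# But if that same user is later bound to cluster-admin in-cluster,
# their effective access approaches Kubernetes Engine Admin:
kubectl create clusterrolebinding alice-cluster-admin \
  --clusterrole=cluster-admin \
  --user=alice@example.com
```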
- Kubernetes Engine Cluster Viewer
- Description: “Get and list access to GKE Clusters.”
- Intended use: Typically assigned to users or automation accounts that need “just enough” IAM access to query the GCP APIs to find and connect to a given GKE cluster but have their permissions to Kubernetes API resources delegated to and managed solely via in-cluster RBAC bindings.
- Point of confusion: Commonly confused with the Kubernetes native RBAC bindings that would give access to in-cluster resources but actually has none.
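A minimal sketch of this “IAM for connectivity, RBAC for authorization” pattern (project, user, and namespace names are illustrative):

```shell
# Grant just enough IAM to find the cluster and fetch credentials:
gcloud projects add-iam-policy-binding my-project \
  --member="user:bob@example.com" \
  --role="roles/container.clusterViewer"

# Then delegate Kubernetes API access solely via in-cluster RBAC,
# e.g. a namespace-scoped binding to the built-in "edit" ClusterRole:
kubectl create rolebinding bob-edit \
  --clusterrole=edit \
  --user=bob@example.com \
  --namespace=team-a
```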
- Kubernetes Engine Developer
- Description: “Full access to Kubernetes API objects inside Kubernetes Clusters.”
- Intended use: Typically assigned to users who need enough permissions to comfortably manage most Kubernetes API resources, short of a few privileged permissions.
- Point of surprise: Being named “Developer”, it appears at first glance to be a much lower privilege than
Kubernetes Engine Admin. While the “Developer” cannot manage the cluster itself, it has near-full control of the Kubernetes API resources in all namespaces, including kube-system.
- Kubernetes Engine Host Service Agent User
- Description: “Allows the Kubernetes Engine service account in the host project to configure shared network resources for cluster management. Also gives access to inspect the firewall rules in the host project.”
- Intended use: A role used for granting the GKE service project’s “robot” account the necessary access to a host project in a Shared VPC scenario. Not terribly useful otherwise.
- Kubernetes Engine Service Agent
- Description: “Gives Kubernetes Engine account access to manage cluster resources. Includes access to service accounts.”
- Intended use: Intended to be assigned only to the project’s GKE “robot” account. Not appropriate for tenant/customer use, as it contains roughly 1,000 permissions, including iam.serviceAccounts.actAs, along with many others.
- Kubernetes Engine Viewer
- Description: “Read-only access to Kubernetes Engine resources.”
- Intended use: Typically assigned to users or automation accounts that need to be able to find GKE clusters and have “read-only” access to all non-sensitive Kubernetes API resources.
- Point of surprise: Commonly confused with Kubernetes Engine Cluster Viewer during assignment, but because it contains all of that role’s permissions, nothing “breaks”; the assignment simply no longer follows least privilege.
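One way to see the difference before assigning either role is to compare the permissions each predefined role actually contains:

```shell
# Compare the permissions included in each predefined role:
gcloud iam roles describe roles/container.clusterViewer \
  --format='value(includedPermissions)'
gcloud iam roles describe roles/container.viewer \
  --format='value(includedPermissions)'
```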
Potential Risks and Privilege Escalation Paths
- Kubernetes Engine Viewer
- container.pods.list (and list on cronjobs, deployments, jobs, and statefulsets) and container.configMaps.list can potentially leak sensitive credentials and/or details from pod environment variables or from what is stored in configMaps. Note that viewing secrets is not allowed by this role.
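To illustrate why list access alone is risky, a holder of this role could run something like the following (a sketch; output depends entirely on what workloads store in env vars and ConfigMaps):

```shell
# Pod specs are listable, which often exposes credentials passed as
# environment variables:
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].env}{"\n"}{end}'

# ConfigMap contents are also fully listable:
kubectl get configmaps --all-namespaces -o yaml
```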
- Kubernetes Engine Developer
- container.secrets.list allows reading the contents of all secrets in all namespaces in the GKE cluster, including kube-system. secrets are also where Kubernetes serviceaccount tokens are stored, so a “Developer” is effectively the union of all permissions granted to all serviceaccounts in the cluster. If a controller like Anthos Config Management, GKE Config Sync, Weaveworks’ Flux, or Helm v2 is deployed, or if any serviceaccount was manually bound to the cluster-admin ClusterRole, a direct path exists: read those JWT tokens from their secrets resource and use them to authenticate against the API server. This is possible and very likely in most GKE clusters. We previously wrote about the power of LIST permissions in this related blog post that covers this scenario in greater detail.
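The escalation path above can be sketched in three steps (the secret name and cluster endpoint are placeholders, not real values):

```shell
# 1. List secrets everywhere and find a serviceaccount token secret:
kubectl get secrets --all-namespaces

# 2. Extract and decode the JWT from a privileged serviceaccount's
#    token secret (name is hypothetical):
TOKEN=$(kubectl -n kube-system get secret some-privileged-sa-token-abcde \
  -o jsonpath='{.data.token}' | base64 --decode)

# 3. Authenticate directly to the API server as that serviceaccount:
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  https://CLUSTER_ENDPOINT/api/v1/namespaces/kube-system/secrets
```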
Diving into Kubernetes Engine Admin vs Developer
The following list of permissions is available to the Kubernetes Engine Admin IAM Role but not directly assigned to the Kubernetes Engine Developer IAM Role; those in bold can potentially be regained via the previously mentioned container.secrets.list escalation path:
Be aware that the names of these GKE IAM Roles can be confusing and potentially risky if applied without a deeper understanding of how they interact with other configuration settings like GKE Basic Authentication and RBAC ClusterRoleBindings. Our recommendation is to ensure GKE Basic Authentication is disabled and to handle authorization for users via native RBAC permissions in-cluster. This allows permissions to be granted to a subset of the clusters in the project and at the per-namespace level, which provides the ideal level of granularity.
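To check the Basic Authentication recommendation in practice, something like the following should work (cluster name and zone are placeholders):

```shell
# Verify Basic Authentication is not enabled on an existing cluster
# (an empty username means basic auth is off):
gcloud container clusters describe my-cluster --zone us-central1-a \
  --format='value(masterAuth.username)'

# Create new clusters with basic auth disabled from the start:
gcloud container clusters create my-cluster --zone us-central1-a \
  --no-enable-basic-auth
```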