
P S:
Hi guys, when I create a service account and test its permissions without associating it with any RoleBinding/ClusterRoleBinding, it has access to most things… is that the default behavior?

controlplane $ kubectl create serviceaccount testsa1
serviceaccount/testsa1 created
controlplane $ kubectl auth can-i delete deploy --as=system:serviceaccount:default:testsa1
yes
controlplane $ kubectl auth can-i delete nodes --as=system:serviceaccount:default:testsa1
Warning: resource 'nodes' is not namespace scoped
yes
controlplane $ kubectl auth can-i delete service --as=system:serviceaccount:default:testsa1 --namespace=internal
yes
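For contrast, the way a service account would normally get delete rights on Deployments under RBAC is through an explicit Role and RoleBinding. A minimal sketch (the names `deploy-deleter` and `deploy-deleter-binding` are made up for illustration):

```yaml
# Role granting delete on Deployments in the default namespace (illustrative names)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deploy-deleter
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["delete"]
---
# RoleBinding attaching the Role to the testsa1 service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-deleter-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deploy-deleter
subjects:
- kind: ServiceAccount
  name: testsa1
  namespace: default
```

Without something like this (or a ClusterRoleBinding), `kubectl auth can-i` should answer `no` for a fresh service account.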

OE:
This is really strange behavior that I was able to reproduce in the KodeKloud environment but not in my local environment. I tried it in this lab - https://kodekloud.com/courses/certified-kubernetes-administrator-with-practice-tests-labs/lectures/12038768 - and got exactly the same results.

This is what things look like if I run it in my own lab (RBAC is implemented correctly):

$ kubectl create serviceaccount testsa1
serviceaccount/testsa1 created

$ kubectl auth can-i delete deploy --as=system:serviceaccount:default:testsa1
no

$ kubectl auth can-i delete nodes --as=system:serviceaccount:default:testsa1
Warning: resource 'nodes' is not namespace scoped
no

$ kubectl auth can-i --list --as=system:serviceaccount:default:testsa1
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
                                                [/api/*]            []               [get]
                                                [/api]              []               [get]
                                                [/apis/*]           []               [get]
                                                [/apis]             []               [get]
                                                [/healthz]          []               [get]
                                                [/healthz]          []               [get]
                                                [/livez]            []               [get]
                                                [/livez]            []               [get]
                                                [/openapi/*]        []               [get]
                                                [/openapi]          []               [get]
                                                [/readyz]           []               [get]
                                                [/readyz]           []               [get]
                                                [/version/]         []               [get]
                                                [/version/]         []               [get]
                                                [/version]          []               [get]
                                                [/version]          []               [get]

What you are seeing in your case shouldn’t happen unless the AlwaysAllow authorization module is in use, but I can’t find it among the options passed to the API server in the lab (UPDATE: I was able to find the reason; see the next message):

kube-apiserver --advertise-address=172.17.0.6 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
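On a kubeadm cluster like this one, that flag comes from the API server's static pod manifest, so you can also check it on disk. A sketch of the relevant part of the manifest (structure is the standard kubeadm layout; only the flag of interest is shown):

```yaml
# Excerpt (sketch) of /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # AlwaysAllow here would disable all authorization checks;
    # Node,RBAC is what this lab actually uses.
    - --authorization-mode=Node,RBAC
    # ... remaining flags omitted ...
```

Since the mode is Node,RBAC, the open access had to come from RBAC objects themselves rather than the authorization configuration.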

OE:
I was able to track it down. It appears that in some labs, a ClusterRoleBinding was added that gives all members of the system:serviceaccounts group cluster-admin privileges. You can see this with:

$ kubectl describe clusterrolebindings.rbac.authorization.k8s.io permissive-binding
Name:         permissive-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind   Name                    Namespace
  ----   ----                    ---------
  User   admin
  User   kubelet
  Group  system:serviceaccounts
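A ClusterRoleBinding matching that describe output would look roughly like this (reconstructed as a sketch; I haven't seen the lab's actual manifest):

```yaml
# Reconstructed sketch of the permissive-binding shown above
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: permissive-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
```

Because system:serviceaccounts covers every service account in every namespace, any new service account immediately inherits cluster-admin. Deleting the binding (`kubectl delete clusterrolebinding permissive-binding`) restores normal RBAC behavior.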

P S:
ohh, okay… I see that… thanks a lot for checking this…

OE:
You’re welcome. I actually learn quite a bit when someone runs into an error or some strange behavior that I haven’t run into myself. So helping you adds to my knowledge base :slightly_smiling_face: