hkboss
August 15, 2020, 9:49am
#1
I have enabled PodSecurityPolicy in minikube. By default it created two PSPs, privileged and restricted:
```
NAME         PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP     SUPGROUP    READONLYROOTFS   VOLUMES
privileged   true    *      RunAsAny   RunAsAny           RunAsAny    RunAsAny    false            *
restricted   false          RunAsAny   MustRunAsNonRoot   MustRunAs   MustRunAs   false            configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
```
I have also created a Linux user, kubexz, for which I have created a ClusterRole and a RoleBinding that restrict it to managing pods in the kubexz namespace only, and to using the restricted PSP:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: only-edit
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "deletecollection", "patch", "update", "get", "list", "watch"]
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubexz-rolebinding
  namespace: kubexz
subjects:
- kind: User
  name: kubexz
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: only-edit
```
I have placed the kubeconfig file in the kubexz user's $HOME/.kube. The RBAC part works fine: as the kubexz user I can only create and manage pod resources in the kubexz namespace, as expected.
But when I post a pod manifest with securityContext.privileged: true, the restricted PodSecurityPolicy does not stop me from creating the pod. I should not be able to create a pod with a privileged container, yet the pod gets created. Not sure what I am missing:
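The granted permissions can be double-checked from the kubexz user's session with `kubectl auth can-i` (a quick sketch; it assumes the kubeconfig described above is the active one):

```shell
# Run as the kubexz user, with the kubeconfig described above.

# Expected "yes": pods are manageable in the kubexz namespace.
kubectl auth can-i create pods --namespace kubexz

# Expected "no": no pod access was granted outside the kubexz namespace.
kubectl auth can-i create pods --namespace default

# Expected "yes": the "use" verb on the restricted PSP was granted.
kubectl auth can-i use podsecuritypolicy/restricted
```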
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: new-pod
spec:
  hostPID: true
  containers:
  - name: justsleep
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      privileged: true
```
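When a pod is admitted, the PSP admission controller records which policy validated it in the `kubernetes.io/psp` annotation. Checking that annotation on the pod that should have been rejected can reveal which policy actually let it through (a diagnostic sketch, assuming the pod above was created in the kubexz namespace):

```shell
# Prints the name of the PSP that admitted the pod.
# If it says "privileged", some other binding (for the user, one of
# its groups, or the pod's service account) grants the privileged policy.
kubectl -n kubexz get pod new-pod \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'
```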
hkboss
August 15, 2020, 1:50pm
#2
@Ayman Can anyone help, please?
Ayman
August 24, 2020, 11:49pm
#3
Hi @hkboss ,
Welcome to our community!
You can find the answer here.
opened 01:42PM - 28 Oct 17 UTC · closed 04:04PM - 28 Oct 17 UTC · kind/bug · sig/auth
**Is this a BUG REPORT or FEATURE REQUEST?**:
/kind bug
**What happened**:
First off: PSP in general works for me. If I create only the restricted policy mentioned below and nothing else, I cannot create privileged Pods. That way I can validate that my general apiserver settings are working properly with regard to PSP.
Here's the problem:
I created two PodSecurityPolicies as per the guide (here: https://github.com/kubernetes/examples/blob/master/staging/podsecuritypolicy/rbac/README.md):
```yaml
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
name: privileged
spec:
fsGroup:
rule: RunAsAny
privileged: true
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
allowedCapabilities:
- '*'
hostPID: true
hostIPC: true
hostNetwork: true
hostPorts:
- min: 1
max: 65536
---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
name: restricted
spec:
privileged: false
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- 'rbd'
- 'nfs'
- 'configMap'
- 'downwardAPI'
- 'emptyDir'
- 'persistentVolumeClaim'
- 'secret'
- 'projected'
hostPID: false
hostIPC: false
hostNetwork: false
```
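Assuming the two manifests above are saved to a single file (the filename `psp.yaml` is just a placeholder), they can be applied and inspected with:

```shell
# Create both policies from the combined manifest.
kubectl apply -f psp.yaml

# Confirm both policies exist and show their settings.
kubectl get podsecuritypolicies privileged restricted
```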
I then created two ClusterRoles:
```yaml
# privilegedPSP grants access to use the privileged
# PSP.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: privileged-psp-user
rules:
- apiGroups:
- extensions
resources:
- podsecuritypolicies
resourceNames:
- privileged
verbs:
- use
---
# restrictedPSP grants access to use the restricted
# PSP.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: restricted-psp-user
rules:
- apiGroups:
- extensions
resources:
- podsecuritypolicies
resourceNames:
- restricted
verbs:
- use
```
and then enabled my Kubelets to run static Pods with the privileged PSP and tried to restrict all casual users to the restricted PSP via ClusterRoleBindings:
```yaml
# Allow all nodes' kubelets to use the privileged psp
# required to enable API pods etc to be started by kubelet
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubelet-priv-psp
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: privileged-psp-user
subjects:
- kind: "Group"
name: "system:nodes"
---
# restricted-psp-users grants the restricted-psp-user
# role to all authenticated users
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: restricted-psp-users
subjects:
- kind: Group
name: system:authenticated
apiGroup: rbac.authorization.k8s.io
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: restricted-psp-user
```
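One thing worth checking here (a sketch, not part of the original report): PSP admission authorizes the union of the requesting user and the pod's service account, so a pod can be admitted under a policy that only the service account is bound to. Impersonation can surface this; the user and service-account names below are placeholders:

```shell
# PSP admission accepts a policy if EITHER the requesting user OR the
# pod's service account may "use" it, so check both identities.
kubectl auth can-i use podsecuritypolicy/privileged --as=some-casual-user
kubectl auth can-i use podsecuritypolicy/privileged \
  --as=system:serviceaccount:default:default
```

If the second command prints "yes", a privileged pod created by an otherwise restricted user can still be admitted.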
My Kubelets can start static-pods just fine!
When a casual user tries to create a privileged Pod, though, Kubernetes surprisingly allows it:
```shell
$: kubectl auth can-i use psp/privileged
no
$: kubectl auth can-i use psp/restricted
yes
$: cat pod_priv.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
name: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
securityContext:
privileged: true
$: kubectl apply -f pod_priv.yaml
pod "nginx" created
$: kubectl get po
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 23s
```
**What you expected to happen**:
```shell
$: kubectl auth can-i use psp/privileged
no
$: kubectl auth can-i use psp/restricted
yes
$: cat pod_priv.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
name: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
securityContext:
privileged: true
$: kubectl apply -f pod_priv.yaml
Error from server (Forbidden): error when creating "pod_priv.yaml": pods "nginx" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
$: kubectl get po
No resources found.
```
**How to reproduce it (as minimally and precisely as possible)**:
Create the above mentioned resources in a cluster with RBAC and PSP enabled (K8s 1.7.9) and try to create the privileged Pod.
**Anything else we need to know?**:
Don't think so.
**Environment**:
- Kubernetes version (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.9", GitCommit:"19fe91923d584c30bd6db5c5a21e9f0d5f742de8", GitTreeState:"clean", BuildDate:"2017-10-19T17:09:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.9+coreos.0", GitCommit:"2ded8a1912d014561208d882cfcc12dfa5374f22", GitTreeState:"clean", BuildDate:"2017-10-24T13:07:42Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: on-prem
- OS (e.g. from /etc/os-release): Ubuntu 16 Server LTS & CoreOS 1520.7.0 (Laybug)
- Kernel (e.g. `uname -a`): 4.10.0-19 & 4.13.9-coreos
- Install tools: none
- Others: PodSecurityPolicy admission controller is activated
/sig auth