Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654905755 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Jun 11 00:02:37.624: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.626: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 11 00:02:37.655: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 11 00:02:37.726: INFO: The status of Pod cmk-init-discover-node1-hlbt6 is Succeeded, skipping waiting
Jun 11 00:02:37.726: INFO: The status of Pod cmk-init-discover-node2-jxvbr is Succeeded, skipping waiting
Jun 11 00:02:37.726: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 11 00:02:37.726: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 11 00:02:37.726: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 11 00:02:37.743: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 11 00:02:37.743: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 11 00:02:37.743: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 11 00:02:37.743: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 11 00:02:37.743: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 11 00:02:37.743: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 11 00:02:37.743: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 11 00:02:37.743: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 11 00:02:37.743: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 11 00:02:37.743: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 11 00:02:37.743: INFO: e2e test version: v1.21.9
Jun 11 00:02:37.745: INFO: kube-apiserver version: v1.21.1
Jun 11 00:02:37.745: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.751: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Jun 11 00:02:37.757: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.778: INFO: Cluster IP family: ipv4
Jun 11 00:02:37.758: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.780: INFO: Cluster IP family: ipv4
SSS
------------------------------
Jun 11 00:02:37.762: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.784: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 11 00:02:37.764: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.785: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 11 00:02:37.781: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.803: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 11 00:02:37.779: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.805: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 11 00:02:37.781: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.805: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Jun 11 00:02:37.789: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.809: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
Jun 11 00:02:37.794: INFO: >>> kubeConfig: /root/.kube/config
Jun 11 00:02:37.815: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 11 00:02:37.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv-protection
W0611 00:02:38.451060 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 11 00:02:38.451: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 11 00:02:38.453: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51
Jun 11 00:02:38.455: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PV
STEP: Waiting for PV to enter phase Available
Jun 11 00:02:38.461: INFO: Waiting up to 30s for PersistentVolume hostpath-cw7th to have phase Available
Jun 11 00:02:38.463: INFO: PersistentVolume hostpath-cw7th found but phase is Pending instead of Available.
Jun 11 00:02:39.467: INFO: PersistentVolume hostpath-cw7th found and phase=Available (1.00549034s)
STEP: Checking that PV Protection finalizer is set
[It] Verify "immediate" deletion of a PV that is not bound to a PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99
STEP: Deleting the PV
Jun 11 00:02:39.472: INFO: Waiting up to 3m0s for PersistentVolume hostpath-cw7th to get deleted
Jun 11 00:02:39.474: INFO: PersistentVolume hostpath-cw7th found and phase=Available (1.997684ms)
Jun 11 00:02:41.478: INFO: PersistentVolume hostpath-cw7th was removed
[AfterEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 11 00:02:41.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-protection-1108" for this suite.
[AfterEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92
Jun 11 00:02:41.487: INFO: AfterEach: Cleaning up test resources.
Jun 11 00:02:41.487: INFO: pvc is nil
Jun 11 00:02:41.487: INFO: Deleting PersistentVolume "hostpath-cw7th"
•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":1,"skipped":29,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 11 00:02:37.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
W0611 00:02:37.857707 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 11 00:02:37.857: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 11 00:02:37.859: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Jun 11 00:02:37.874: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6094" to be "Succeeded or Failed"
Jun 11 00:02:37.877: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3956ms
Jun 11 00:02:39.882: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00772761s
Jun 11 00:02:41.885: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010996307s
STEP: Saw pod success
Jun 11 00:02:41.885: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 11 00:02:41.888: INFO: Trying to get logs from node node1 pod pod-host-path-test container test-container-2:
STEP: delete the pod
Jun 11 00:02:42.300: INFO: Waiting for pod pod-host-path-test to disappear
Jun 11 00:02:42.302: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 11 00:02:42.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6094" for this suite.
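Note: the PV Protection spec above can be reproduced by hand. The sketch below is a minimal, hedged version using placeholder names (pv-demo, /tmp/pv-demo) rather than the generated hostpath-cw7th from the log; it assumes a cluster where hostPath PVs are permitted.

# Create an unbound hostPath PV (placeholder name and path).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tmp/pv-demo
EOF

# The kubernetes.io/pv-protection finalizer should appear on the new PV.
kubectl get pv pv-demo -o jsonpath='{.metadata.finalizers}{"\n"}'

# Because the PV is not bound to any PVC, deletion completes immediately.
kubectl delete pv pv-demo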
• ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":1,"skipped":12,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:37.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W0611 00:02:37.805660 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 11 00:02:37.805: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 11 00:02:37.809: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:02:41.846: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-788af84a-6dca-4370-a760-71db26fd9a5f && mount --bind /tmp/local-volume-test-788af84a-6dca-4370-a760-71db26fd9a5f /tmp/local-volume-test-788af84a-6dca-4370-a760-71db26fd9a5f] Namespace:persistent-local-volumes-test-34 PodName:hostexec-node2-f5rqq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:41.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:02:42.264: INFO: Creating a PV followed by a PVC Jun 11 00:02:42.271: INFO: Waiting for PV local-pv2f97p to bind to PVC pvc-84bqk Jun 11 00:02:42.271: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-84bqk] to have phase Bound Jun 11 00:02:42.275: INFO: PersistentVolumeClaim pvc-84bqk found but phase is Pending instead of Bound. 
Jun 11 00:02:44.279: INFO: PersistentVolumeClaim pvc-84bqk found and phase=Bound (2.007457169s) Jun 11 00:02:44.279: INFO: Waiting up to 3m0s for PersistentVolume local-pv2f97p to have phase Bound Jun 11 00:02:44.281: INFO: PersistentVolume local-pv2f97p found and phase=Bound (2.347311ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 11 00:02:52.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-34 exec pod-2101ce87-5696-4ba2-b1d6-9bf2763811a9 --namespace=persistent-local-volumes-test-34 -- stat -c %g /mnt/volume1' Jun 11 00:02:52.610: INFO: stderr: "" Jun 11 00:02:52.610: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-2101ce87-5696-4ba2-b1d6-9bf2763811a9 in namespace persistent-local-volumes-test-34 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:02:52.615: INFO: Deleting PersistentVolumeClaim "pvc-84bqk" Jun 11 00:02:52.618: INFO: Deleting PersistentVolume "local-pv2f97p" STEP: Removing the test directory Jun 11 00:02:52.622: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-788af84a-6dca-4370-a760-71db26fd9a5f && rm -r /tmp/local-volume-test-788af84a-6dca-4370-a760-71db26fd9a5f] Namespace:persistent-local-volumes-test-34 PodName:hostexec-node2-f5rqq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:52.622: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:02:52.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-34" for this suite. 
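Note: the node-side preparation and the fsGroup check in the dir-bindmounted spec above reduce to the commands below (a sketch only; the directory and pod names are placeholders for the generated ones in the log).

# On the node: create the backing directory and bind-mount it onto itself,
# which is what the hostexec pod does through nsenter in the log above.
DIR=/tmp/local-volume-test-demo    # placeholder path
mkdir "$DIR" && mount --bind "$DIR" "$DIR"

# From the test host: check the volume's group ownership against the pod's fsGroup.
kubectl exec <test-pod> -- stat -c %g /mnt/volume1    # log shows output "1234"

# Teardown, mirroring the AfterEach step.
umount "$DIR" && rm -r "$DIR"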
• [SLOW TEST:14.991 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":1,"skipped":12,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:37.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W0611 00:02:37.867931 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 11 00:02:37.868: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 11 00:02:37.869: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:02:39.898: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b-backend && mount --bind /tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b-backend /tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b-backend && ln -s /tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b-backend /tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b] Namespace:persistent-local-volumes-test-8015 PodName:hostexec-node1-2hxfl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:39.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:02:39.999: INFO: Creating a PV followed by a PVC Jun 11 00:02:40.006: INFO: Waiting for PV local-pvrcmdj to bind to PVC pvc-xfvpz Jun 11 00:02:40.006: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xfvpz] to have phase Bound Jun 11 00:02:40.008: INFO: PersistentVolumeClaim pvc-xfvpz found but phase is Pending instead of Bound. 
Jun 11 00:02:42.013: INFO: PersistentVolumeClaim pvc-xfvpz found and phase=Bound (2.006580613s) Jun 11 00:02:42.013: INFO: Waiting up to 3m0s for PersistentVolume local-pvrcmdj to have phase Bound Jun 11 00:02:42.015: INFO: PersistentVolume local-pvrcmdj found and phase=Bound (2.12214ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 11 00:02:50.043: INFO: pod "pod-2c8b9982-b88e-46f5-917a-49762abf3e57" created on Node "node1" STEP: Writing in pod1 Jun 11 00:02:50.043: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8015 PodName:pod-2c8b9982-b88e-46f5-917a-49762abf3e57 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:02:50.043: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:02:50.127: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:02:50.127: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8015 PodName:pod-2c8b9982-b88e-46f5-917a-49762abf3e57 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:02:50.127: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:02:50.208: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 11 00:02:54.233: INFO: pod "pod-dc752bf9-6f92-4633-a6be-9ea1a38b4c7d" created on Node "node1" Jun 11 00:02:54.233: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8015 PodName:pod-dc752bf9-6f92-4633-a6be-9ea1a38b4c7d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:02:54.233: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:02:54.319: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 11 00:02:54.319: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8015 PodName:pod-dc752bf9-6f92-4633-a6be-9ea1a38b4c7d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:02:54.319: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:02:54.398: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 11 00:02:54.398: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8015 PodName:pod-2c8b9982-b88e-46f5-917a-49762abf3e57 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:02:54.398: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:02:54.481: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-2c8b9982-b88e-46f5-917a-49762abf3e57 in namespace persistent-local-volumes-test-8015 STEP: Deleting pod2 STEP: Deleting pod pod-dc752bf9-6f92-4633-a6be-9ea1a38b4c7d in namespace persistent-local-volumes-test-8015 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:02:54.493: INFO: Deleting PersistentVolumeClaim "pvc-xfvpz" Jun 11 00:02:54.496: INFO: Deleting PersistentVolume "local-pvrcmdj" STEP: Removing the test directory Jun 11 00:02:54.500: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b && umount /tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b-backend && rm -r /tmp/local-volume-test-1dfd4159-dbe5-40ca-9587-0afec8bcbb2b-backend] Namespace:persistent-local-volumes-test-8015 PodName:hostexec-node1-2hxfl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:54.500: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:02:54.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8015" for this suite. • [SLOW TEST:16.787 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":7,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:37.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W0611 00:02:38.600375 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 11 00:02:38.600: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 11 00:02:38.602: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-f6ba225f-092a-4203-a72d-53280e2cb188" Jun 11 00:02:42.631: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f6ba225f-092a-4203-a72d-53280e2cb188 && dd if=/dev/zero of=/tmp/local-volume-test-f6ba225f-092a-4203-a72d-53280e2cb188/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-f6ba225f-092a-4203-a72d-53280e2cb188/file] Namespace:persistent-local-volumes-test-5827 PodName:hostexec-node1-tkdbx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:42.631: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:02:42.926: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f6ba225f-092a-4203-a72d-53280e2cb188/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5827 PodName:hostexec-node1-tkdbx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:42.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:02:43.353: INFO: Creating a PV followed by a PVC Jun 11 00:02:43.359: INFO: Waiting for PV local-pvgqmsc to bind to PVC pvc-cj84b Jun 11 00:02:43.359: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cj84b] to have phase Bound Jun 11 00:02:43.361: INFO: PersistentVolumeClaim pvc-cj84b found but phase is Pending instead of Bound. Jun 11 00:02:45.366: INFO: PersistentVolumeClaim pvc-cj84b found but phase is Pending instead of Bound. Jun 11 00:02:47.373: INFO: PersistentVolumeClaim pvc-cj84b found but phase is Pending instead of Bound. Jun 11 00:02:49.377: INFO: PersistentVolumeClaim pvc-cj84b found but phase is Pending instead of Bound. Jun 11 00:02:51.381: INFO: PersistentVolumeClaim pvc-cj84b found but phase is Pending instead of Bound. Jun 11 00:02:53.383: INFO: PersistentVolumeClaim pvc-cj84b found but phase is Pending instead of Bound. Jun 11 00:02:55.388: INFO: PersistentVolumeClaim pvc-cj84b found but phase is Pending instead of Bound. 
Jun 11 00:02:57.391: INFO: PersistentVolumeClaim pvc-cj84b found and phase=Bound (14.031834368s) Jun 11 00:02:57.391: INFO: Waiting up to 3m0s for PersistentVolume local-pvgqmsc to have phase Bound Jun 11 00:02:57.393: INFO: PersistentVolume local-pvgqmsc found and phase=Bound (2.380649ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 11 00:02:57.398: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:02:57.399: INFO: Deleting PersistentVolumeClaim "pvc-cj84b" Jun 11 00:02:57.403: INFO: Deleting PersistentVolume "local-pvgqmsc" Jun 11 00:02:57.406: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f6ba225f-092a-4203-a72d-53280e2cb188/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5827 PodName:hostexec-node1-tkdbx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:57.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-f6ba225f-092a-4203-a72d-53280e2cb188/file Jun 11 00:02:57.499: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5827 PodName:hostexec-node1-tkdbx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:57.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-f6ba225f-092a-4203-a72d-53280e2cb188 Jun 11 00:02:57.589: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f6ba225f-092a-4203-a72d-53280e2cb188] Namespace:persistent-local-volumes-test-5827 PodName:hostexec-node1-tkdbx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:57.589: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:02:57.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5827" for this suite. 
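Note: the block-device setup and teardown performed by the hostexec pod above amount to the loop-device lifecycle sketched below (placeholder path; the loop device name is whatever losetup assigns).

FILE=/tmp/local-volume-test-demo/file    # placeholder backing file
mkdir -p "$(dirname "$FILE")"
dd if=/dev/zero of="$FILE" bs=4096 count=5120    # ~20 MiB backing file
losetup -f "$FILE"                               # attach to the first free loop device

# Discover which loop device was assigned (same pipeline as in the log).
E2E_LOOP_DEV=$(losetup | grep "$FILE" | awk '{ print $1 }')
echo "$E2E_LOOP_DEV"

# Teardown: detach the loop device and remove the test directory.
losetup -d "$E2E_LOOP_DEV"
rm -r "$(dirname "$FILE")"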
S [SKIPPING] [19.804 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:57.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:112 [It] should be reschedulable [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:326 Jun 11 00:02:57.769: INFO: Only supported for providers [openstack gce gke vsphere azure] (not local) [AfterEach] pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:322 [AfterEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:02:57.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8015" for this suite. 
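Note: the "(not local)" skips above come from how the suite is invoked. The following is a hedged sketch of a typical invocation of the upstream e2e.test binary via the ginkgo CLI; exact flags vary between releases, so treat them as illustrative and verify with --help.

# Run the sig-storage e2e specs against an existing cluster, 10 specs in parallel.
# --provider=local is what makes cloud-provider-only specs report
# "Only supported for providers [...] (not local)" and skip.
ginkgo -nodes=10 -focus='\[sig-storage\]' ./e2e.test -- \
  --provider=local \
  --kubeconfig=/root/.kube/config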
S [SKIPPING] [0.040 seconds] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Default StorageClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:319 pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:320 should be reschedulable [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:326 Only supported for providers [openstack gce gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:328 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:52.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f" Jun 11 00:02:54.851: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f && dd if=/dev/zero of=/tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f/file] Namespace:persistent-local-volumes-test-9145 PodName:hostexec-node1-wxnqz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:54.851: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:02:55.023: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9145 PodName:hostexec-node1-wxnqz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:55.023: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:02:55.338: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop1 && mount -t ext4 /dev/loop1 /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f && chmod o+rwx /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f] Namespace:persistent-local-volumes-test-9145 PodName:hostexec-node1-wxnqz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:02:55.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:02:55.601: INFO: Creating a PV followed by a PVC Jun 
11 00:02:55.608: INFO: Waiting for PV local-pvbxqbw to bind to PVC pvc-jmv9w Jun 11 00:02:55.608: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jmv9w] to have phase Bound Jun 11 00:02:55.611: INFO: PersistentVolumeClaim pvc-jmv9w found but phase is Pending instead of Bound. Jun 11 00:02:57.614: INFO: PersistentVolumeClaim pvc-jmv9w found and phase=Bound (2.006130682s) Jun 11 00:02:57.614: INFO: Waiting up to 3m0s for PersistentVolume local-pvbxqbw to have phase Bound Jun 11 00:02:57.617: INFO: PersistentVolume local-pvbxqbw found and phase=Bound (2.065637ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:03:03.641: INFO: pod "pod-c387b593-1892-4922-b4ae-09ef1986f1a1" created on Node "node1" STEP: Writing in pod1 Jun 11 00:03:03.641: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9145 PodName:pod-c387b593-1892-4922-b4ae-09ef1986f1a1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:03:03.641: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:03.866: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 11 00:03:03.866: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9145 PodName:pod-c387b593-1892-4922-b4ae-09ef1986f1a1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:03:03.866: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:03.995: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 11 00:03:03.995: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9145 PodName:pod-c387b593-1892-4922-b4ae-09ef1986f1a1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:03:03.995: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:04.075: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-c387b593-1892-4922-b4ae-09ef1986f1a1 in namespace persistent-local-volumes-test-9145 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:03:04.080: INFO: Deleting PersistentVolumeClaim "pvc-jmv9w" Jun 11 00:03:04.084: INFO: Deleting PersistentVolume "local-pvbxqbw" Jun 11 00:03:04.088: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
/tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f] Namespace:persistent-local-volumes-test-9145 PodName:hostexec-node1-wxnqz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:04.088: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:04.540: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9145 PodName:hostexec-node1-wxnqz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:04.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node1" at path /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f/file Jun 11 00:03:04.773: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-9145 PodName:hostexec-node1-wxnqz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:04.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f Jun 11 00:03:04.924: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a3e4f0a4-5287-4913-9048-1f378024097f] Namespace:persistent-local-volumes-test-9145 PodName:hostexec-node1-wxnqz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:04.924: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:05.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9145" for this suite. 
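Note: the blockfswithformat variant above adds a mkfs/mount step on top of the loop device, and the pod-side check is a plain write/read through the mounted volume. A condensed sketch follows; $DEV, $DIR, and <write-pod> are placeholders, not the names generated in the log.

DEV=/dev/loopN; DIR=/tmp/local-volume-test-demo    # placeholders

# On the node: format, mount, and open up the filesystem, as the hostexec pod does above.
mkfs -t ext4 "$DEV"
mount -t ext4 "$DEV" "$DIR" && chmod o+rwx "$DIR"

# Inside the test pod: write and read back through the mounted local PV.
kubectl exec <write-pod> -- sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
kubectl exec <write-pod> -- cat /mnt/volume1/test-file    # expected: test-file-content

# Teardown mirrors the AfterEach: unmount, detach the loop device, remove the directory.
umount "$DIR" && losetup -d "$DEV" && rm -r "$DIR"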
• [SLOW TEST:12.313 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":23,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:54.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1" Jun 11 00:03:00.705: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1 && dd if=/dev/zero of=/tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1/file] Namespace:persistent-local-volumes-test-6403 PodName:hostexec-node2-bxsnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:00.705: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:00.890: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6403 PodName:hostexec-node2-bxsnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:00.890: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:01.030: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1 && chmod o+rwx /tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1] Namespace:persistent-local-volumes-test-6403 PodName:hostexec-node2-bxsnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 
11 00:03:01.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:03:01.375: INFO: Creating a PV followed by a PVC Jun 11 00:03:01.383: INFO: Waiting for PV local-pvsd58r to bind to PVC pvc-w7g8m Jun 11 00:03:01.383: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-w7g8m] to have phase Bound Jun 11 00:03:01.386: INFO: PersistentVolumeClaim pvc-w7g8m found but phase is Pending instead of Bound. Jun 11 00:03:03.391: INFO: PersistentVolumeClaim pvc-w7g8m found but phase is Pending instead of Bound. Jun 11 00:03:05.394: INFO: PersistentVolumeClaim pvc-w7g8m found but phase is Pending instead of Bound. Jun 11 00:03:07.398: INFO: PersistentVolumeClaim pvc-w7g8m found but phase is Pending instead of Bound. Jun 11 00:03:09.402: INFO: PersistentVolumeClaim pvc-w7g8m found but phase is Pending instead of Bound. Jun 11 00:03:11.405: INFO: PersistentVolumeClaim pvc-w7g8m found but phase is Pending instead of Bound. Jun 11 00:03:13.408: INFO: PersistentVolumeClaim pvc-w7g8m found and phase=Bound (12.02507049s) Jun 11 00:03:13.408: INFO: Waiting up to 3m0s for PersistentVolume local-pvsd58r to have phase Bound Jun 11 00:03:13.411: INFO: PersistentVolume local-pvsd58r found and phase=Bound (2.553955ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 11 00:03:19.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6403 exec pod-f5130086-e3b4-4338-99fd-161b404c7411 --namespace=persistent-local-volumes-test-6403 -- stat -c %g /mnt/volume1' Jun 11 00:03:19.698: INFO: stderr: "" Jun 11 00:03:19.698: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-f5130086-e3b4-4338-99fd-161b404c7411 in namespace persistent-local-volumes-test-6403 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:03:19.703: INFO: Deleting PersistentVolumeClaim "pvc-w7g8m" Jun 11 00:03:19.706: INFO: Deleting PersistentVolume "local-pvsd58r" Jun 11 00:03:19.710: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1] Namespace:persistent-local-volumes-test-6403 PodName:hostexec-node2-bxsnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:19.710: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:19.835: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6403 PodName:hostexec-node2-bxsnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:19.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1/file Jun 11 00:03:19.923: INFO: ExecWithOptions 
{Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6403 PodName:hostexec-node2-bxsnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:19.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1 Jun 11 00:03:20.008: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9562569b-b475-4cf8-838d-1a697fc373a1] Namespace:persistent-local-volumes-test-6403 PodName:hostexec-node2-bxsnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:20.008: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:20.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6403" for this suite. • [SLOW TEST:25.459 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":2,"skipped":18,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:20.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:03:20.148: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:20.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3313" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:05.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc" Jun 11 00:03:09.347: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc && dd if=/dev/zero of=/tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc/file] Namespace:persistent-local-volumes-test-4703 PodName:hostexec-node1-ll9w4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:09.348: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:09.528: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4703 PodName:hostexec-node1-ll9w4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:09.529: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:09.632: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc && chmod o+rwx /tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc] Namespace:persistent-local-volumes-test-4703 PodName:hostexec-node1-ll9w4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:09.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:03:09.891: INFO: Creating a PV followed by a PVC Jun 11 00:03:09.899: INFO: Waiting for PV local-pvjg7sr to bind to 
PVC pvc-lkxd8 Jun 11 00:03:09.899: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-lkxd8] to have phase Bound Jun 11 00:03:09.902: INFO: PersistentVolumeClaim pvc-lkxd8 found but phase is Pending instead of Bound. Jun 11 00:03:11.906: INFO: PersistentVolumeClaim pvc-lkxd8 found and phase=Bound (2.00633122s) Jun 11 00:03:11.906: INFO: Waiting up to 3m0s for PersistentVolume local-pvjg7sr to have phase Bound Jun 11 00:03:11.908: INFO: PersistentVolume local-pvjg7sr found and phase=Bound (2.484723ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 11 00:03:15.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4703 exec pod-56c414e4-eaed-4936-bcd9-7943c804ef1b --namespace=persistent-local-volumes-test-4703 -- stat -c %g /mnt/volume1' Jun 11 00:03:16.227: INFO: stderr: "" Jun 11 00:03:16.227: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 11 00:03:20.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4703 exec pod-cf62a7fa-9ba9-47db-827d-32ad20f82807 --namespace=persistent-local-volumes-test-4703 -- stat -c %g /mnt/volume1' Jun 11 00:03:20.503: INFO: stderr: "" Jun 11 00:03:20.503: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-56c414e4-eaed-4936-bcd9-7943c804ef1b in namespace persistent-local-volumes-test-4703 STEP: Deleting second pod STEP: Deleting pod pod-cf62a7fa-9ba9-47db-827d-32ad20f82807 in namespace persistent-local-volumes-test-4703 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:03:20.513: INFO: Deleting PersistentVolumeClaim "pvc-lkxd8" Jun 11 00:03:20.516: INFO: Deleting PersistentVolume "local-pvjg7sr" Jun 11 00:03:20.520: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc] Namespace:persistent-local-volumes-test-4703 PodName:hostexec-node1-ll9w4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:20.520: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:20.613: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4703 PodName:hostexec-node1-ll9w4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:20.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc/file Jun 11 00:03:20.708: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-4703 
PodName:hostexec-node1-ll9w4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:20.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc Jun 11 00:03:20.793: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e18d7b63-b9db-4e46-9cd5-00d8f72c48cc] Namespace:persistent-local-volumes-test-4703 PodName:hostexec-node1-ll9w4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:20.793: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:20.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4703" for this suite. • [SLOW TEST:15.601 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":3,"skipped":104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:37.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W0611 00:02:37.824731 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 11 00:02:37.825: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 11 00:02:37.826: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-5410 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:02:37.890: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5410-921/csi-attacher Jun 11 00:02:37.892: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5410 Jun 11 00:02:37.892: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5410 Jun 11 00:02:37.895: INFO: creating 
*v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5410 Jun 11 00:02:37.899: INFO: creating *v1.Role: csi-mock-volumes-5410-921/external-attacher-cfg-csi-mock-volumes-5410 Jun 11 00:02:37.901: INFO: creating *v1.RoleBinding: csi-mock-volumes-5410-921/csi-attacher-role-cfg Jun 11 00:02:37.904: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5410-921/csi-provisioner Jun 11 00:02:37.907: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5410 Jun 11 00:02:37.907: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5410 Jun 11 00:02:37.909: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5410 Jun 11 00:02:37.911: INFO: creating *v1.Role: csi-mock-volumes-5410-921/external-provisioner-cfg-csi-mock-volumes-5410 Jun 11 00:02:37.915: INFO: creating *v1.RoleBinding: csi-mock-volumes-5410-921/csi-provisioner-role-cfg Jun 11 00:02:37.917: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5410-921/csi-resizer Jun 11 00:02:37.920: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5410 Jun 11 00:02:37.920: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5410 Jun 11 00:02:37.922: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5410 Jun 11 00:02:37.925: INFO: creating *v1.Role: csi-mock-volumes-5410-921/external-resizer-cfg-csi-mock-volumes-5410 Jun 11 00:02:37.928: INFO: creating *v1.RoleBinding: csi-mock-volumes-5410-921/csi-resizer-role-cfg Jun 11 00:02:37.931: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5410-921/csi-snapshotter Jun 11 00:02:37.933: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5410 Jun 11 00:02:37.933: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5410 Jun 11 00:02:37.936: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5410 Jun 11 00:02:37.938: INFO: creating *v1.Role: csi-mock-volumes-5410-921/external-snapshotter-leaderelection-csi-mock-volumes-5410 Jun 11 00:02:37.941: INFO: creating *v1.RoleBinding: csi-mock-volumes-5410-921/external-snapshotter-leaderelection Jun 11 00:02:37.944: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5410-921/csi-mock Jun 11 00:02:37.947: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5410 Jun 11 00:02:37.949: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5410 Jun 11 00:02:37.952: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5410 Jun 11 00:02:37.955: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5410 Jun 11 00:02:37.957: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5410 Jun 11 00:02:37.960: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5410 Jun 11 00:02:37.962: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5410 Jun 11 00:02:37.965: INFO: creating *v1.StatefulSet: csi-mock-volumes-5410-921/csi-mockplugin Jun 11 00:02:37.970: INFO: creating *v1.StatefulSet: csi-mock-volumes-5410-921/csi-mockplugin-attacher Jun 11 00:02:37.973: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5410 to register on node node2 STEP: Creating pod Jun 11 00:02:54.243: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:02:54.247: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5l9qg] 
to have phase Bound Jun 11 00:02:54.249: INFO: PersistentVolumeClaim pvc-5l9qg found but phase is Pending instead of Bound. Jun 11 00:02:56.253: INFO: PersistentVolumeClaim pvc-5l9qg found and phase=Bound (2.005307727s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-n2dhz Jun 11 00:03:04.282: INFO: Deleting pod "pvc-volume-tester-n2dhz" in namespace "csi-mock-volumes-5410" Jun 11 00:03:04.288: INFO: Wait up to 5m0s for pod "pvc-volume-tester-n2dhz" to be fully deleted STEP: Deleting claim pvc-5l9qg Jun 11 00:03:08.300: INFO: Waiting up to 2m0s for PersistentVolume pvc-e477e9f2-75b7-49d3-841c-ee79def689b0 to get deleted Jun 11 00:03:08.303: INFO: PersistentVolume pvc-e477e9f2-75b7-49d3-841c-ee79def689b0 found and phase=Bound (2.120749ms) Jun 11 00:03:10.306: INFO: PersistentVolume pvc-e477e9f2-75b7-49d3-841c-ee79def689b0 was removed STEP: Deleting storageclass csi-mock-volumes-5410-sc9k8qt STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5410 STEP: Waiting for namespaces [csi-mock-volumes-5410] to vanish STEP: uninstalling csi mock driver Jun 11 00:03:16.317: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5410-921/csi-attacher Jun 11 00:03:16.320: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5410 Jun 11 00:03:16.324: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5410 Jun 11 00:03:16.327: INFO: deleting *v1.Role: csi-mock-volumes-5410-921/external-attacher-cfg-csi-mock-volumes-5410 Jun 11 00:03:16.330: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5410-921/csi-attacher-role-cfg Jun 11 00:03:16.334: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5410-921/csi-provisioner Jun 11 00:03:16.338: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5410 Jun 11 00:03:16.342: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5410 Jun 11 00:03:16.346: INFO: deleting *v1.Role: csi-mock-volumes-5410-921/external-provisioner-cfg-csi-mock-volumes-5410 Jun 11 00:03:16.349: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5410-921/csi-provisioner-role-cfg Jun 11 00:03:16.353: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5410-921/csi-resizer Jun 11 00:03:16.359: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5410 Jun 11 00:03:16.363: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5410 Jun 11 00:03:16.369: INFO: deleting *v1.Role: csi-mock-volumes-5410-921/external-resizer-cfg-csi-mock-volumes-5410 Jun 11 00:03:16.375: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5410-921/csi-resizer-role-cfg Jun 11 00:03:16.379: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5410-921/csi-snapshotter Jun 11 00:03:16.382: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5410 Jun 11 00:03:16.387: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5410 Jun 11 00:03:16.390: INFO: deleting *v1.Role: csi-mock-volumes-5410-921/external-snapshotter-leaderelection-csi-mock-volumes-5410 Jun 11 00:03:16.394: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5410-921/external-snapshotter-leaderelection Jun 11 00:03:16.398: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5410-921/csi-mock Jun 11 00:03:16.402: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5410 Jun 11 00:03:16.406: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5410 Jun 11 
00:03:16.409: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5410 Jun 11 00:03:16.412: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5410 Jun 11 00:03:16.415: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5410 Jun 11 00:03:16.420: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5410 Jun 11 00:03:16.423: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5410 Jun 11 00:03:16.426: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5410-921/csi-mockplugin Jun 11 00:03:16.431: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5410-921/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5410-921 STEP: Waiting for namespaces [csi-mock-volumes-5410-921] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:22.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:44.654 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":1,"skipped":4,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:22.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 11 00:03:22.497: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:22.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-446" for this suite. 
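The CSI attach spec above runs with no CSIDriver object installed, so the default attachRequired behaviour applies and the test checks that a VolumeAttachment is created for the pod before the mock driver is torn down. A minimal client-go sketch for inspecting the same objects by hand, assuming the kubeconfig path the suite logs and the storage.k8s.io/v1 client of roughly the same vintage as this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the run above uses /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// VolumeAttachment objects are cluster-scoped; each records which node a
	// persistent volume is attached to and which attacher (CSI driver) did it.
	vas, err := cs.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, va := range vas.Items {
		pv := ""
		if va.Spec.Source.PersistentVolumeName != nil {
			pv = *va.Spec.Source.PersistentVolumeName
		}
		fmt.Printf("%s: attacher=%s node=%s pv=%s attached=%v\n",
			va.Name, va.Spec.Attacher, va.Spec.NodeName, pv, va.Status.Attached)
	}
}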
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:22.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 STEP: Creating a pod to test downward API volume plugin Jun 11 00:03:22.570: INFO: Waiting up to 5m0s for pod "metadata-volume-58542863-f6a0-4be2-b533-88dd19c8bbae" in namespace "projected-9265" to be "Succeeded or Failed" Jun 11 00:03:22.573: INFO: Pod "metadata-volume-58542863-f6a0-4be2-b533-88dd19c8bbae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264157ms Jun 11 00:03:24.577: INFO: Pod "metadata-volume-58542863-f6a0-4be2-b533-88dd19c8bbae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006477641s Jun 11 00:03:26.580: INFO: Pod "metadata-volume-58542863-f6a0-4be2-b533-88dd19c8bbae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009144013s Jun 11 00:03:28.583: INFO: Pod "metadata-volume-58542863-f6a0-4be2-b533-88dd19c8bbae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01254286s STEP: Saw pod success Jun 11 00:03:28.583: INFO: Pod "metadata-volume-58542863-f6a0-4be2-b533-88dd19c8bbae" satisfied condition "Succeeded or Failed" Jun 11 00:03:28.586: INFO: Trying to get logs from node node2 pod metadata-volume-58542863-f6a0-4be2-b533-88dd19c8bbae container client-container: STEP: delete the pod Jun 11 00:03:28.598: INFO: Waiting for pod metadata-volume-58542863-f6a0-4be2-b533-88dd19c8bbae to disappear Jun 11 00:03:28.600: INFO: Pod metadata-volume-58542863-f6a0-4be2-b533-88dd19c8bbae no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:28.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9265" for this suite. 
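The Projected downwardAPI spec above exercises a projected volume that exposes the pod name with a non-default file mode while the container runs as a non-root user with an fsGroup set. A minimal sketch of an equivalent pod, assuming placeholder UID/GID values and a busybox image rather than the suite's own test image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, fsGroup := int64(1000), int64(2000) // assumed non-root IDs, not taken from the run
	mode := int32(0440)                      // defaultMode applied to the projected files

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "metadata-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // placeholder for the suite's agnhost-based image
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname && ls -ln /etc/podinfo"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}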
• [SLOW TEST:6.072 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":24,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:21.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 11 00:03:21.059: INFO: The status of Pod test-hostpath-type-67nbc is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:03:23.063: INFO: The status of Pod test-hostpath-type-67nbc is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:03:25.063: INFO: The status of Pod test-hostpath-type-67nbc is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:03:27.062: INFO: The status of Pod test-hostpath-type-67nbc is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Jun 11 00:03:27.066: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-5159 PodName:test-hostpath-type-67nbc ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:03:27.066: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:30.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-5159" for this suite. 
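The HostPathType case above creates a block device with mknod and then requests it with hostPath type HostPathSocket, so the kubelet's type check is expected to reject the mount. A minimal sketch of a pod that reproduces the mismatch, with the pod name and image as placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// HostPathSocket tells the kubelet the path must already exist and be a
	// UNIX socket; /mnt/test/ablkdev is a block device in the run above, so
	// the check fails and the volume is never mounted.
	hostPathType := corev1.HostPathSocket

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-type-mismatch"},
		Spec: corev1.PodSpec{
			NodeName:      "node2", // pin to the node that owns the host path, as the test does
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "host-path-testing",
				Image:        "busybox", // placeholder image for the sketch
				Command:      []string{"sh", "-c", "ls -l /mnt"},
				VolumeMounts: []corev1.VolumeMount{{Name: "host", MountPath: "/mnt/ablkdev"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "host",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/mnt/test/ablkdev",
						Type: &hostPathType,
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}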
• [SLOW TEST:9.249 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket","total":-1,"completed":4,"skipped":161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:28.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 11 00:03:28.655: INFO: Waiting up to 5m0s for pod "pod-b6827025-b4d2-46eb-8376-0d18170b4d93" in namespace "emptydir-4722" to be "Succeeded or Failed" Jun 11 00:03:28.658: INFO: Pod "pod-b6827025-b4d2-46eb-8376-0d18170b4d93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.342512ms Jun 11 00:03:30.662: INFO: Pod "pod-b6827025-b4d2-46eb-8376-0d18170b4d93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006730631s Jun 11 00:03:32.667: INFO: Pod "pod-b6827025-b4d2-46eb-8376-0d18170b4d93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011865047s STEP: Saw pod success Jun 11 00:03:32.667: INFO: Pod "pod-b6827025-b4d2-46eb-8376-0d18170b4d93" satisfied condition "Succeeded or Failed" Jun 11 00:03:32.671: INFO: Trying to get logs from node node2 pod pod-b6827025-b4d2-46eb-8376-0d18170b4d93 container test-container: STEP: delete the pod Jun 11 00:03:32.690: INFO: Waiting for pod pod-b6827025-b4d2-46eb-8376-0d18170b4d93 to disappear Jun 11 00:03:32.692: INFO: Pod pod-b6827025-b4d2-46eb-8376-0d18170b4d93 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:32.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4722" for this suite. 
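The EmptyDir case above writes files from a non-root container into a tmpfs-backed emptyDir and checks that they come out owned by the pod's fsGroup. A minimal sketch of such a pod, with the UID, GID and image chosen only for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID, fsGroup := int64(1000), int64(123) // assumed IDs; the suite picks its own values

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &nonRootUID,
				FSGroup:   &fsGroup, // new files on the volume inherit this GID
			},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // placeholder for the suite's test image
				Command:      []string{"sh", "-c", "echo hi > /mnt/data/f && stat -c '%u:%g %a' /mnt/data/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/mnt/data"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs, matching the
					// "emptydir 0644 on tmpfs" step in the log above.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}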
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":3,"skipped":28,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:30.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 11 00:03:30.494: INFO: The status of Pod test-hostpath-type-f5s6x is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:03:32.497: INFO: The status of Pod test-hostpath-type-f5s6x is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:03:34.498: INFO: The status of Pod test-hostpath-type-f5s6x is Running (Ready = true) STEP: running on node node1 STEP: Create a block device for further testing Jun 11 00:03:34.501: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-304 PodName:test-hostpath-type-f5s6x ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:03:34.501: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:36.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-304" for this suite. 
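Both HostPathType specs above finish by waiting for the HostPathType error event on the test pod. A small client-go sketch that lists those events, assuming a hypothetical pod name and the kubeconfig path the suite logs:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical namespace and pod name; the suite generates random ones
	// such as host-path-type-block-dev-304 / test-hostpath-type-f5s6x.
	ns, pod := "host-path-type-block-dev-304", "test-hostpath-type-pod"

	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + pod,
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// A hostPath type mismatch typically surfaces as a mount-failure event
		// whose message mentions the unexpected file type.
		fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
	}
}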
• [SLOW TEST:6.172 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev","total":-1,"completed":5,"skipped":246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:37.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W0611 00:02:39.401596 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 11 00:02:39.401: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 11 00:02:39.404: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] token should not be plumbed down when csiServiceAccountTokenEnabled=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 STEP: Building a driver namespace object, basename csi-mock-volumes-5081 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:02:40.681: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5081-9543/csi-attacher Jun 11 00:02:40.685: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5081 Jun 11 00:02:40.685: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5081 Jun 11 00:02:40.689: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5081 Jun 11 00:02:40.692: INFO: creating *v1.Role: csi-mock-volumes-5081-9543/external-attacher-cfg-csi-mock-volumes-5081 Jun 11 00:02:40.695: INFO: creating *v1.RoleBinding: csi-mock-volumes-5081-9543/csi-attacher-role-cfg Jun 11 00:02:40.698: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5081-9543/csi-provisioner Jun 11 00:02:40.700: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5081 Jun 11 00:02:40.701: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5081 Jun 11 00:02:40.704: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5081 Jun 11 00:02:40.706: INFO: creating *v1.Role: csi-mock-volumes-5081-9543/external-provisioner-cfg-csi-mock-volumes-5081 Jun 11 00:02:40.709: INFO: creating *v1.RoleBinding: csi-mock-volumes-5081-9543/csi-provisioner-role-cfg Jun 11 00:02:40.711: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5081-9543/csi-resizer Jun 11 00:02:40.714: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5081 Jun 11 00:02:40.714: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5081 Jun 11 00:02:40.717: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5081 Jun 11 00:02:40.720: INFO: creating 
*v1.Role: csi-mock-volumes-5081-9543/external-resizer-cfg-csi-mock-volumes-5081 Jun 11 00:02:40.722: INFO: creating *v1.RoleBinding: csi-mock-volumes-5081-9543/csi-resizer-role-cfg Jun 11 00:02:40.726: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5081-9543/csi-snapshotter Jun 11 00:02:40.729: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5081 Jun 11 00:02:40.729: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5081 Jun 11 00:02:40.732: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5081 Jun 11 00:02:40.735: INFO: creating *v1.Role: csi-mock-volumes-5081-9543/external-snapshotter-leaderelection-csi-mock-volumes-5081 Jun 11 00:02:40.738: INFO: creating *v1.RoleBinding: csi-mock-volumes-5081-9543/external-snapshotter-leaderelection Jun 11 00:02:40.740: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5081-9543/csi-mock Jun 11 00:02:40.743: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5081 Jun 11 00:02:40.745: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5081 Jun 11 00:02:40.751: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5081 Jun 11 00:02:40.754: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5081 Jun 11 00:02:40.756: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5081 Jun 11 00:02:40.762: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5081 Jun 11 00:02:40.765: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5081 Jun 11 00:02:40.770: INFO: creating *v1.StatefulSet: csi-mock-volumes-5081-9543/csi-mockplugin Jun 11 00:02:40.779: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5081 Jun 11 00:02:40.782: INFO: creating *v1.StatefulSet: csi-mock-volumes-5081-9543/csi-mockplugin-attacher Jun 11 00:02:40.785: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5081" Jun 11 00:02:40.788: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5081 to register on node node1 STEP: Creating pod Jun 11 00:02:57.061: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:02:57.066: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mjvds] to have phase Bound Jun 11 00:02:57.068: INFO: PersistentVolumeClaim pvc-mjvds found but phase is Pending instead of Bound. 
Jun 11 00:02:59.072: INFO: PersistentVolumeClaim pvc-mjvds found and phase=Bound (2.006160433s) STEP: Deleting the previously created pod Jun 11 00:03:07.097: INFO: Deleting pod "pvc-volume-tester-f59dm" in namespace "csi-mock-volumes-5081" Jun 11 00:03:07.102: INFO: Wait up to 5m0s for pod "pvc-volume-tester-f59dm" to be fully deleted STEP: Checking CSI driver logs Jun 11 00:03:17.121: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/eed88bcb-6faa-45d2-9870-c7277dd50668/volumes/kubernetes.io~csi/pvc-86af4496-64d1-4958-b480-99c98d042d16/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-f59dm Jun 11 00:03:17.121: INFO: Deleting pod "pvc-volume-tester-f59dm" in namespace "csi-mock-volumes-5081" STEP: Deleting claim pvc-mjvds Jun 11 00:03:17.130: INFO: Waiting up to 2m0s for PersistentVolume pvc-86af4496-64d1-4958-b480-99c98d042d16 to get deleted Jun 11 00:03:17.133: INFO: PersistentVolume pvc-86af4496-64d1-4958-b480-99c98d042d16 found and phase=Bound (2.450211ms) Jun 11 00:03:19.136: INFO: PersistentVolume pvc-86af4496-64d1-4958-b480-99c98d042d16 found and phase=Released (2.005272431s) Jun 11 00:03:21.139: INFO: PersistentVolume pvc-86af4496-64d1-4958-b480-99c98d042d16 found and phase=Released (4.009029161s) Jun 11 00:03:23.143: INFO: PersistentVolume pvc-86af4496-64d1-4958-b480-99c98d042d16 found and phase=Released (6.013006008s) Jun 11 00:03:25.148: INFO: PersistentVolume pvc-86af4496-64d1-4958-b480-99c98d042d16 was removed STEP: Deleting storageclass csi-mock-volumes-5081-scp5dhw STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5081 STEP: Waiting for namespaces [csi-mock-volumes-5081] to vanish STEP: uninstalling csi mock driver Jun 11 00:03:31.160: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5081-9543/csi-attacher Jun 11 00:03:31.163: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5081 Jun 11 00:03:31.166: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5081 Jun 11 00:03:31.170: INFO: deleting *v1.Role: csi-mock-volumes-5081-9543/external-attacher-cfg-csi-mock-volumes-5081 Jun 11 00:03:31.173: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5081-9543/csi-attacher-role-cfg Jun 11 00:03:31.177: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5081-9543/csi-provisioner Jun 11 00:03:31.180: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5081 Jun 11 00:03:31.183: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5081 Jun 11 00:03:31.186: INFO: deleting *v1.Role: csi-mock-volumes-5081-9543/external-provisioner-cfg-csi-mock-volumes-5081 Jun 11 00:03:31.194: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5081-9543/csi-provisioner-role-cfg Jun 11 00:03:31.201: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5081-9543/csi-resizer Jun 11 00:03:31.208: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5081 Jun 11 00:03:31.215: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5081 Jun 11 00:03:31.218: INFO: deleting *v1.Role: csi-mock-volumes-5081-9543/external-resizer-cfg-csi-mock-volumes-5081 Jun 11 00:03:31.221: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5081-9543/csi-resizer-role-cfg Jun 11 00:03:31.224: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-5081-9543/csi-snapshotter Jun 11 00:03:31.227: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5081 Jun 11 00:03:31.231: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5081 Jun 11 00:03:31.234: INFO: deleting *v1.Role: csi-mock-volumes-5081-9543/external-snapshotter-leaderelection-csi-mock-volumes-5081 Jun 11 00:03:31.237: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5081-9543/external-snapshotter-leaderelection Jun 11 00:03:31.240: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5081-9543/csi-mock Jun 11 00:03:31.244: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5081 Jun 11 00:03:31.247: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5081 Jun 11 00:03:31.250: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5081 Jun 11 00:03:31.254: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5081 Jun 11 00:03:31.257: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5081 Jun 11 00:03:31.259: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5081 Jun 11 00:03:31.263: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5081 Jun 11 00:03:31.267: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5081-9543/csi-mockplugin Jun 11 00:03:31.270: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5081 Jun 11 00:03:31.273: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5081-9543/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5081-9543 STEP: Waiting for namespaces [csi-mock-volumes-5081-9543] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:43.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:65.365 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1496 token should not be plumbed down when csiServiceAccountTokenEnabled=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:32.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-6c90d59a-d0b9-4e68-b03f-9d792224b590" Jun 11 00:03:34.767: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt 
-- sh -c mkdir -p /tmp/local-volume-test-6c90d59a-d0b9-4e68-b03f-9d792224b590 && dd if=/dev/zero of=/tmp/local-volume-test-6c90d59a-d0b9-4e68-b03f-9d792224b590/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6c90d59a-d0b9-4e68-b03f-9d792224b590/file] Namespace:persistent-local-volumes-test-1438 PodName:hostexec-node1-bmv72 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:34.767: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:34.891: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6c90d59a-d0b9-4e68-b03f-9d792224b590/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1438 PodName:hostexec-node1-bmv72 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:34.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:03:34.981: INFO: Creating a PV followed by a PVC Jun 11 00:03:34.988: INFO: Waiting for PV local-pvbcl6w to bind to PVC pvc-4sdvs Jun 11 00:03:34.988: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4sdvs] to have phase Bound Jun 11 00:03:34.990: INFO: PersistentVolumeClaim pvc-4sdvs found but phase is Pending instead of Bound. Jun 11 00:03:36.994: INFO: PersistentVolumeClaim pvc-4sdvs found but phase is Pending instead of Bound. Jun 11 00:03:38.998: INFO: PersistentVolumeClaim pvc-4sdvs found but phase is Pending instead of Bound. Jun 11 00:03:41.002: INFO: PersistentVolumeClaim pvc-4sdvs found but phase is Pending instead of Bound. Jun 11 00:03:43.005: INFO: PersistentVolumeClaim pvc-4sdvs found and phase=Bound (8.016696425s) Jun 11 00:03:43.005: INFO: Waiting up to 3m0s for PersistentVolume local-pvbcl6w to have phase Bound Jun 11 00:03:43.007: INFO: PersistentVolume local-pvbcl6w found and phase=Bound (2.16078ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Jun 11 00:03:43.013: INFO: We don't set fsGroup on block device, skipped. 
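The [Volume type: block] fixture above binds a PVC to a local PV whose source is the loop device created with dd and losetup, and the fsGroup step is skipped because fsGroup is not applied to raw block devices. A minimal sketch of such a PV/PVC pair, assuming a hypothetical StorageClass name, a /dev/loop0 device path, and the v1.21-era Go client types used by this run:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	block := corev1.PersistentVolumeBlock
	sc := "local-storage" // assumed StorageClass name; the suite generates its own

	// The PV points straight at the loop device backing the test file;
	// node affinity pins it to the node that owns the device.
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-block-example"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:         corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("20Mi")},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:       &block,
			StorageClassName: sc,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/dev/loop0"},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node1"},
						}},
					}},
				},
			},
		},
	}

	// The claim must request the same Block volume mode to bind to this PV.
	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pvc-block-example"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:       &block,
			StorageClassName: &sc,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("20Mi")},
			},
		},
	}

	for _, obj := range []interface{}{pv, pvc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}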
[AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:03:43.015: INFO: Deleting PersistentVolumeClaim "pvc-4sdvs" Jun 11 00:03:43.019: INFO: Deleting PersistentVolume "local-pvbcl6w" Jun 11 00:03:43.023: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6c90d59a-d0b9-4e68-b03f-9d792224b590/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1438 PodName:hostexec-node1-bmv72 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:43.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-6c90d59a-d0b9-4e68-b03f-9d792224b590/file Jun 11 00:03:43.112: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1438 PodName:hostexec-node1-bmv72 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:43.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-6c90d59a-d0b9-4e68-b03f-9d792224b590 Jun 11 00:03:43.199: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6c90d59a-d0b9-4e68-b03f-9d792224b590] Namespace:persistent-local-volumes-test-1438 PodName:hostexec-node1-bmv72 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:43.199: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:43.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1438" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [10.579 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":1,"skipped":60,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:43.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:03:43.356: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:43.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9793" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:37.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W0611 00:02:37.871006 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 11 00:02:37.871: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 11 00:02:37.873: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-5348 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:02:37.936: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5348-5186/csi-attacher Jun 11 00:02:37.939: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5348 Jun 11 00:02:37.939: INFO: Define cluster role 
external-attacher-runner-csi-mock-volumes-5348 Jun 11 00:02:37.941: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5348 Jun 11 00:02:37.944: INFO: creating *v1.Role: csi-mock-volumes-5348-5186/external-attacher-cfg-csi-mock-volumes-5348 Jun 11 00:02:37.946: INFO: creating *v1.RoleBinding: csi-mock-volumes-5348-5186/csi-attacher-role-cfg Jun 11 00:02:37.949: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5348-5186/csi-provisioner Jun 11 00:02:37.952: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5348 Jun 11 00:02:37.952: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5348 Jun 11 00:02:37.955: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5348 Jun 11 00:02:37.957: INFO: creating *v1.Role: csi-mock-volumes-5348-5186/external-provisioner-cfg-csi-mock-volumes-5348 Jun 11 00:02:37.960: INFO: creating *v1.RoleBinding: csi-mock-volumes-5348-5186/csi-provisioner-role-cfg Jun 11 00:02:37.963: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5348-5186/csi-resizer Jun 11 00:02:37.966: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5348 Jun 11 00:02:37.966: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5348 Jun 11 00:02:37.969: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5348 Jun 11 00:02:37.971: INFO: creating *v1.Role: csi-mock-volumes-5348-5186/external-resizer-cfg-csi-mock-volumes-5348 Jun 11 00:02:37.974: INFO: creating *v1.RoleBinding: csi-mock-volumes-5348-5186/csi-resizer-role-cfg Jun 11 00:02:37.977: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5348-5186/csi-snapshotter Jun 11 00:02:37.979: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5348 Jun 11 00:02:37.979: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5348 Jun 11 00:02:37.982: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5348 Jun 11 00:02:37.985: INFO: creating *v1.Role: csi-mock-volumes-5348-5186/external-snapshotter-leaderelection-csi-mock-volumes-5348 Jun 11 00:02:37.987: INFO: creating *v1.RoleBinding: csi-mock-volumes-5348-5186/external-snapshotter-leaderelection Jun 11 00:02:37.990: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5348-5186/csi-mock Jun 11 00:02:37.993: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5348 Jun 11 00:02:37.997: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5348 Jun 11 00:02:38.000: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5348 Jun 11 00:02:38.003: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5348 Jun 11 00:02:38.006: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5348 Jun 11 00:02:38.008: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5348 Jun 11 00:02:38.011: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5348 Jun 11 00:02:38.014: INFO: creating *v1.StatefulSet: csi-mock-volumes-5348-5186/csi-mockplugin Jun 11 00:02:38.019: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5348 Jun 11 00:02:38.022: INFO: creating *v1.StatefulSet: csi-mock-volumes-5348-5186/csi-mockplugin-resizer Jun 11 00:02:38.025: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5348" Jun 11 00:02:38.027: INFO: waiting for CSIDriver 
csi-mock-csi-mock-volumes-5348 to register on node node2 STEP: Creating pod Jun 11 00:03:04.423: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:03:04.428: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mjdls] to have phase Bound Jun 11 00:03:04.430: INFO: PersistentVolumeClaim pvc-mjdls found but phase is Pending instead of Bound. Jun 11 00:03:06.432: INFO: PersistentVolumeClaim pvc-mjdls found and phase=Bound (2.004756976s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Jun 11 00:03:12.470: INFO: Deleting pod "pvc-volume-tester-9rc5m" in namespace "csi-mock-volumes-5348" Jun 11 00:03:12.476: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9rc5m" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-9rc5m Jun 11 00:03:28.494: INFO: Deleting pod "pvc-volume-tester-9rc5m" in namespace "csi-mock-volumes-5348" STEP: Deleting pod pvc-volume-tester-xd4t8 Jun 11 00:03:28.496: INFO: Deleting pod "pvc-volume-tester-xd4t8" in namespace "csi-mock-volumes-5348" Jun 11 00:03:28.500: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xd4t8" to be fully deleted STEP: Deleting claim pvc-mjdls Jun 11 00:03:30.510: INFO: Waiting up to 2m0s for PersistentVolume pvc-c3336e9e-f6d6-4063-a94e-c9ebf5d53894 to get deleted Jun 11 00:03:30.512: INFO: PersistentVolume pvc-c3336e9e-f6d6-4063-a94e-c9ebf5d53894 found and phase=Bound (2.015501ms) Jun 11 00:03:32.515: INFO: PersistentVolume pvc-c3336e9e-f6d6-4063-a94e-c9ebf5d53894 was removed STEP: Deleting storageclass csi-mock-volumes-5348-sccgqdx STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5348 STEP: Waiting for namespaces [csi-mock-volumes-5348] to vanish STEP: uninstalling csi mock driver Jun 11 00:03:38.527: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5348-5186/csi-attacher Jun 11 00:03:38.531: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5348 Jun 11 00:03:38.536: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5348 Jun 11 00:03:38.539: INFO: deleting *v1.Role: csi-mock-volumes-5348-5186/external-attacher-cfg-csi-mock-volumes-5348 Jun 11 00:03:38.542: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5348-5186/csi-attacher-role-cfg Jun 11 00:03:38.546: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5348-5186/csi-provisioner Jun 11 00:03:38.549: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5348 Jun 11 00:03:38.552: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5348 Jun 11 00:03:38.556: INFO: deleting *v1.Role: csi-mock-volumes-5348-5186/external-provisioner-cfg-csi-mock-volumes-5348 Jun 11 00:03:38.560: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5348-5186/csi-provisioner-role-cfg Jun 11 00:03:38.564: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5348-5186/csi-resizer Jun 11 00:03:38.567: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5348 Jun 11 00:03:38.571: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5348 Jun 11 00:03:38.575: INFO: deleting *v1.Role: csi-mock-volumes-5348-5186/external-resizer-cfg-csi-mock-volumes-5348 Jun 11 00:03:38.578: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5348-5186/csi-resizer-role-cfg Jun 11 00:03:38.582: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-5348-5186/csi-snapshotter Jun 11 00:03:38.585: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5348 Jun 11 00:03:38.589: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5348 Jun 11 00:03:38.592: INFO: deleting *v1.Role: csi-mock-volumes-5348-5186/external-snapshotter-leaderelection-csi-mock-volumes-5348 Jun 11 00:03:38.596: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5348-5186/external-snapshotter-leaderelection Jun 11 00:03:38.599: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5348-5186/csi-mock Jun 11 00:03:38.604: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5348 Jun 11 00:03:38.608: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5348 Jun 11 00:03:38.611: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5348 Jun 11 00:03:38.615: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5348 Jun 11 00:03:38.619: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5348 Jun 11 00:03:38.624: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5348 Jun 11 00:03:38.627: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5348 Jun 11 00:03:38.631: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5348-5186/csi-mockplugin Jun 11 00:03:38.635: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5348 Jun 11 00:03:38.639: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5348-5186/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-5348-5186 STEP: Waiting for namespaces [csi-mock-volumes-5348-5186] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:44.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:66.823 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":1,"skipped":11,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:41.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-1507 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:02:41.732: INFO: 
creating *v1.ServiceAccount: csi-mock-volumes-1507-5710/csi-attacher Jun 11 00:02:41.735: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1507 Jun 11 00:02:41.735: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1507 Jun 11 00:02:41.738: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1507 Jun 11 00:02:41.741: INFO: creating *v1.Role: csi-mock-volumes-1507-5710/external-attacher-cfg-csi-mock-volumes-1507 Jun 11 00:02:41.743: INFO: creating *v1.RoleBinding: csi-mock-volumes-1507-5710/csi-attacher-role-cfg Jun 11 00:02:41.746: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1507-5710/csi-provisioner Jun 11 00:02:41.748: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1507 Jun 11 00:02:41.748: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1507 Jun 11 00:02:41.751: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1507 Jun 11 00:02:41.753: INFO: creating *v1.Role: csi-mock-volumes-1507-5710/external-provisioner-cfg-csi-mock-volumes-1507 Jun 11 00:02:41.756: INFO: creating *v1.RoleBinding: csi-mock-volumes-1507-5710/csi-provisioner-role-cfg Jun 11 00:02:41.760: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1507-5710/csi-resizer Jun 11 00:02:41.763: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1507 Jun 11 00:02:41.763: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1507 Jun 11 00:02:41.767: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1507 Jun 11 00:02:41.771: INFO: creating *v1.Role: csi-mock-volumes-1507-5710/external-resizer-cfg-csi-mock-volumes-1507 Jun 11 00:02:41.773: INFO: creating *v1.RoleBinding: csi-mock-volumes-1507-5710/csi-resizer-role-cfg Jun 11 00:02:41.776: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1507-5710/csi-snapshotter Jun 11 00:02:41.779: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1507 Jun 11 00:02:41.779: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1507 Jun 11 00:02:41.782: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1507 Jun 11 00:02:41.784: INFO: creating *v1.Role: csi-mock-volumes-1507-5710/external-snapshotter-leaderelection-csi-mock-volumes-1507 Jun 11 00:02:41.787: INFO: creating *v1.RoleBinding: csi-mock-volumes-1507-5710/external-snapshotter-leaderelection Jun 11 00:02:41.789: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1507-5710/csi-mock Jun 11 00:02:41.792: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1507 Jun 11 00:02:41.795: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1507 Jun 11 00:02:41.797: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1507 Jun 11 00:02:41.800: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1507 Jun 11 00:02:41.802: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1507 Jun 11 00:02:41.805: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1507 Jun 11 00:02:41.808: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1507 Jun 11 00:02:41.811: INFO: creating *v1.StatefulSet: csi-mock-volumes-1507-5710/csi-mockplugin Jun 11 00:02:41.816: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1507 Jun 11 00:02:41.819: INFO: creating 
*v1.StatefulSet: csi-mock-volumes-1507-5710/csi-mockplugin-attacher Jun 11 00:02:41.823: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1507" Jun 11 00:02:41.825: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1507 to register on node node1 STEP: Creating pod STEP: checking for CSIInlineVolumes feature Jun 11 00:03:04.122: INFO: Pod inline-volume-d9lq4 has the following logs: Jun 11 00:03:04.124: INFO: Deleting pod "inline-volume-d9lq4" in namespace "csi-mock-volumes-1507" Jun 11 00:03:04.128: INFO: Wait up to 5m0s for pod "inline-volume-d9lq4" to be fully deleted STEP: Deleting the previously created pod Jun 11 00:03:06.135: INFO: Deleting pod "pvc-volume-tester-tvnvb" in namespace "csi-mock-volumes-1507" Jun 11 00:03:06.140: INFO: Wait up to 5m0s for pod "pvc-volume-tester-tvnvb" to be fully deleted STEP: Checking CSI driver logs Jun 11 00:03:18.152: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true Jun 11 00:03:18.152: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-tvnvb Jun 11 00:03:18.152: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-1507 Jun 11 00:03:18.152: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: a4826512-cf7e-4000-8e70-d8de015da5b7 Jun 11 00:03:18.152: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Jun 11 00:03:18.152: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-24d524e141b91ef553f8917826e1fcdfa6c6a4009e0116be6f97426cb3bf2ba8","target_path":"/var/lib/kubelet/pods/a4826512-cf7e-4000-8e70-d8de015da5b7/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-tvnvb Jun 11 00:03:18.152: INFO: Deleting pod "pvc-volume-tester-tvnvb" in namespace "csi-mock-volumes-1507" STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1507 STEP: Waiting for namespaces [csi-mock-volumes-1507] to vanish STEP: uninstalling csi mock driver Jun 11 00:03:24.166: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1507-5710/csi-attacher Jun 11 00:03:24.170: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1507 Jun 11 00:03:24.174: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1507 Jun 11 00:03:24.177: INFO: deleting *v1.Role: csi-mock-volumes-1507-5710/external-attacher-cfg-csi-mock-volumes-1507 Jun 11 00:03:24.185: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1507-5710/csi-attacher-role-cfg Jun 11 00:03:24.188: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1507-5710/csi-provisioner Jun 11 00:03:24.191: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1507 Jun 11 00:03:24.197: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1507 Jun 11 00:03:24.203: INFO: deleting *v1.Role: csi-mock-volumes-1507-5710/external-provisioner-cfg-csi-mock-volumes-1507 Jun 11 00:03:24.209: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1507-5710/csi-provisioner-role-cfg Jun 11 00:03:24.217: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1507-5710/csi-resizer Jun 11 00:03:24.223: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1507 Jun 11 00:03:24.227: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1507 Jun 11 00:03:24.229: INFO: deleting *v1.Role: 
csi-mock-volumes-1507-5710/external-resizer-cfg-csi-mock-volumes-1507 Jun 11 00:03:24.233: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1507-5710/csi-resizer-role-cfg Jun 11 00:03:24.236: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1507-5710/csi-snapshotter Jun 11 00:03:24.239: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1507 Jun 11 00:03:24.243: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1507 Jun 11 00:03:24.248: INFO: deleting *v1.Role: csi-mock-volumes-1507-5710/external-snapshotter-leaderelection-csi-mock-volumes-1507 Jun 11 00:03:24.251: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1507-5710/external-snapshotter-leaderelection Jun 11 00:03:24.254: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1507-5710/csi-mock Jun 11 00:03:24.257: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1507 Jun 11 00:03:24.260: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1507 Jun 11 00:03:24.263: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1507 Jun 11 00:03:24.267: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1507 Jun 11 00:03:24.270: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1507 Jun 11 00:03:24.273: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1507 Jun 11 00:03:24.276: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1507 Jun 11 00:03:24.279: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1507-5710/csi-mockplugin Jun 11 00:03:24.283: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1507 Jun 11 00:03:24.287: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1507-5710/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1507-5710 STEP: Waiting for namespaces [csi-mock-volumes-1507-5710] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:52.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:70.768 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":2,"skipped":49,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:36.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-5574f2ac-6f0d-423b-b870-af3fc393b470" Jun 11 00:03:38.742: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5574f2ac-6f0d-423b-b870-af3fc393b470" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5574f2ac-6f0d-423b-b870-af3fc393b470" "/tmp/local-volume-test-5574f2ac-6f0d-423b-b870-af3fc393b470"] Namespace:persistent-local-volumes-test-4121 PodName:hostexec-node1-g2lcr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:38.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:03:38.825: INFO: Creating a PV followed by a PVC Jun 11 00:03:38.832: INFO: Waiting for PV local-pvsn8ff to bind to PVC pvc-rcg7x Jun 11 00:03:38.832: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-rcg7x] to have phase Bound Jun 11 00:03:38.834: INFO: PersistentVolumeClaim pvc-rcg7x found but phase is Pending instead of Bound. Jun 11 00:03:40.838: INFO: PersistentVolumeClaim pvc-rcg7x found but phase is Pending instead of Bound. Jun 11 00:03:42.842: INFO: PersistentVolumeClaim pvc-rcg7x found and phase=Bound (4.010130508s) Jun 11 00:03:42.842: INFO: Waiting up to 3m0s for PersistentVolume local-pvsn8ff to have phase Bound Jun 11 00:03:42.844: INFO: PersistentVolume local-pvsn8ff found and phase=Bound (1.972727ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 11 00:03:46.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4121 exec pod-2f48e5e5-baf7-4dc6-ad8c-fd00e0a8b990 --namespace=persistent-local-volumes-test-4121 -- stat -c %g /mnt/volume1' Jun 11 00:03:47.159: INFO: stderr: "" Jun 11 00:03:47.159: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 11 00:03:53.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4121 exec pod-05eb4b83-8467-4573-8913-7edebc32d737 --namespace=persistent-local-volumes-test-4121 -- stat -c %g /mnt/volume1' Jun 11 00:03:53.460: INFO: stderr: "" Jun 11 00:03:53.460: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-2f48e5e5-baf7-4dc6-ad8c-fd00e0a8b990 in namespace persistent-local-volumes-test-4121 STEP: Deleting second pod STEP: Deleting pod pod-05eb4b83-8467-4573-8913-7edebc32d737 in namespace persistent-local-volumes-test-4121 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:03:53.469: INFO: Deleting PersistentVolumeClaim "pvc-rcg7x" Jun 11 00:03:53.473: 
INFO: Deleting PersistentVolume "local-pvsn8ff" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-5574f2ac-6f0d-423b-b870-af3fc393b470" Jun 11 00:03:53.476: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5574f2ac-6f0d-423b-b870-af3fc393b470"] Namespace:persistent-local-volumes-test-4121 PodName:hostexec-node1-g2lcr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:53.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:03:53.616: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5574f2ac-6f0d-423b-b870-af3fc393b470] Namespace:persistent-local-volumes-test-4121 PodName:hostexec-node1-g2lcr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:53.616: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:53.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4121" for this suite. • [SLOW TEST:17.102 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":6,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:53.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 11 00:03:57.940: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8922 PodName:hostexec-node1-cxt4n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:57.940: INFO: >>> kubeConfig: /root/.kube/config Jun 11 
00:03:58.044: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 11 00:03:58.044: INFO: exec node1: stdout: "0\n" Jun 11 00:03:58.044: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 11 00:03:58.044: INFO: exec node1: exit code: 0 Jun 11 00:03:58.044: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:58.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8922" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.162 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:20.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Jun 11 00:03:50.221: INFO: Deleting pod "pv-6117"/"pod-ephm-test-projected-sjfm" Jun 11 00:03:50.221: INFO: Deleting pod "pod-ephm-test-projected-sjfm" in namespace "pv-6117" Jun 11 00:03:50.226: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-sjfm" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:58.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6117" for this suite. 
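The "Wait up to 5m0s for pod ... to be fully deleted" entries above come from the e2e framework's own pod-deletion helpers. A minimal client-go sketch of that pattern, not the framework's actual code (the function name waitForPodGone, the 2s poll interval, and the namespace/pod names in main are illustrative assumptions; the kubeconfig path mirrors the log):

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodGone deletes a pod and polls until the API server reports NotFound,
// mirroring the "Wait up to 5m0s for pod ... to be fully deleted" log entries above.
func waitForPodGone(cs kubernetes.Interface, ns, name string) error {
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod object is fully deleted
		}
		return false, err // still terminating (err == nil), or abort on a real error
	})
}

func main() {
	// Kubeconfig path as logged by the suite; namespace and pod name are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForPodGone(cs, "default", "pvc-volume-tester-example"))
}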
• [SLOW TEST:38.055 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":3,"skipped":34,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:43.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 11 00:03:43.362: INFO: The status of Pod test-hostpath-type-nxpwk is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:03:45.366: INFO: The status of Pod test-hostpath-type-nxpwk is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:03:47.366: INFO: The status of Pod test-hostpath-type-nxpwk is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143 [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:59.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-8488" for this suite. 
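The HostPathType spec above relies on two hostPath volume types: HostPathFileOrCreate, which lets the kubelet create 'afile' if it does not exist yet, and HostPathFile, which requires the file to already exist before the pod can start. A minimal sketch of a pod declaring both (the image, object names, and the /tmp path are illustrative assumptions, not what the suite actually deploys):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// HostPathFileOrCreate: kubelet creates the file if it is missing.
	// HostPathFile: the file must already exist, otherwise mounting fails.
	createType := corev1.HostPathFileOrCreate
	fileType := corev1.HostPathFile

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-type-example"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "create-file", MountPath: "/mnt/create"},
					{Name: "existing-file", MountPath: "/mnt/existing"},
				},
			}},
			Volumes: []corev1.Volume{
				{
					Name: "create-file",
					VolumeSource: corev1.VolumeSource{
						// Path is an assumption; the suite manages its own host paths.
						HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/afile", Type: &createType},
					},
				},
				{
					Name: "existing-file",
					VolumeSource: corev1.VolumeSource{
						HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/afile", Type: &fileType},
					},
				},
			},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes)
}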
• [SLOW TEST:16.096 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount file 'afile' successfully when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile","total":-1,"completed":4,"skipped":48,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:59.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Jun 11 00:03:59.478: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:59.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-6129" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:77 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:59.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] deletion should be idempotent /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:557 Jun 11 00:03:59.577: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:03:59.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-7042" for this suite. 
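The tmpfs local-volume spec earlier in this section checks that `stat -c %g /mnt/volume1` prints 1234 in two pods sharing one claim. That group ownership comes from the pod-level fsGroup security context: the kubelet chowns supported volumes so files in them are group-owned by the configured gid. A minimal sketch of the relevant part of such a pod (the gid 1234 and the claim-backed mount mirror the log; the object names and image are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsGroup := int64(1234) // matches the "1234" printed by `stat -c %g /mnt/volume1` above

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-example"}, // illustrative name
		Spec: corev1.PodSpec{
			// With fsGroup set, the kubelet adjusts group ownership of supported
			// volumes so every container in the pod sees files group-owned by 1234.
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:    "write-pod",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %g /mnt/volume1 && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "vol", MountPath: "/mnt/volume1"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "vol",
				VolumeSource: corev1.VolumeSource{
					// Illustrative claim name, not one from the log.
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-example"},
				},
			}},
		},
	}
	fmt.Println(*pod.Spec.SecurityContext.FSGroup)
}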
S [SKIPPING] [0.032 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 deletion should be idempotent [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:557 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:563 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:43.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:03:45.449: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-03303458-dbc0-4db0-ba1b-880bdde5f71f] Namespace:persistent-local-volumes-test-3056 PodName:hostexec-node2-zdj2l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:45.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:03:45.578: INFO: Creating a PV followed by a PVC Jun 11 00:03:45.585: INFO: Waiting for PV local-pvj4kvr to bind to PVC pvc-t75ch Jun 11 00:03:45.585: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-t75ch] to have phase Bound Jun 11 00:03:45.587: INFO: PersistentVolumeClaim pvc-t75ch found but phase is Pending instead of Bound. Jun 11 00:03:47.591: INFO: PersistentVolumeClaim pvc-t75ch found but phase is Pending instead of Bound. Jun 11 00:03:49.595: INFO: PersistentVolumeClaim pvc-t75ch found but phase is Pending instead of Bound. Jun 11 00:03:51.599: INFO: PersistentVolumeClaim pvc-t75ch found but phase is Pending instead of Bound. Jun 11 00:03:53.602: INFO: PersistentVolumeClaim pvc-t75ch found but phase is Pending instead of Bound. Jun 11 00:03:55.608: INFO: PersistentVolumeClaim pvc-t75ch found but phase is Pending instead of Bound. 
Jun 11 00:03:57.611: INFO: PersistentVolumeClaim pvc-t75ch found and phase=Bound (12.026582007s) Jun 11 00:03:57.611: INFO: Waiting up to 3m0s for PersistentVolume local-pvj4kvr to have phase Bound Jun 11 00:03:57.614: INFO: PersistentVolume local-pvj4kvr found and phase=Bound (2.175704ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 11 00:04:01.641: INFO: pod "pod-3d8c9a7c-8081-4cae-a6b8-c6f946df8be1" created on Node "node2" STEP: Writing in pod1 Jun 11 00:04:01.641: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3056 PodName:pod-3d8c9a7c-8081-4cae-a6b8-c6f946df8be1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:01.641: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:01.718: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:04:01.718: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3056 PodName:pod-3d8c9a7c-8081-4cae-a6b8-c6f946df8be1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:01.718: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:01.794: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 11 00:04:07.817: INFO: pod "pod-258b282f-e708-480e-a712-98ab14a8628a" created on Node "node2" Jun 11 00:04:07.817: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3056 PodName:pod-258b282f-e708-480e-a712-98ab14a8628a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:07.817: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:07.896: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 11 00:04:07.896: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-03303458-dbc0-4db0-ba1b-880bdde5f71f > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3056 PodName:pod-258b282f-e708-480e-a712-98ab14a8628a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:07.897: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:07.976: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-03303458-dbc0-4db0-ba1b-880bdde5f71f > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 11 00:04:07.976: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3056 PodName:pod-3d8c9a7c-8081-4cae-a6b8-c6f946df8be1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:07.976: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:08.055: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-03303458-dbc0-4db0-ba1b-880bdde5f71f", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-3d8c9a7c-8081-4cae-a6b8-c6f946df8be1 in namespace persistent-local-volumes-test-3056 STEP: Deleting pod2 STEP: Deleting pod pod-258b282f-e708-480e-a712-98ab14a8628a in namespace persistent-local-volumes-test-3056 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:04:08.067: INFO: Deleting PersistentVolumeClaim "pvc-t75ch" Jun 11 00:04:08.071: INFO: Deleting PersistentVolume "local-pvj4kvr" STEP: Removing the test directory Jun 11 00:04:08.076: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-03303458-dbc0-4db0-ba1b-880bdde5f71f] Namespace:persistent-local-volumes-test-3056 PodName:hostexec-node2-zdj2l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:08.076: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:08.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3056" for this suite. • [SLOW TEST:24.760 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":98,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:59.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:04:05.634: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-f077c12f-0287-4907-85bd-9f301f186552 && mount --bind /tmp/local-volume-test-f077c12f-0287-4907-85bd-9f301f186552 /tmp/local-volume-test-f077c12f-0287-4907-85bd-9f301f186552] Namespace:persistent-local-volumes-test-4925 PodName:hostexec-node1-4zg9v ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:05.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:04:05.768: INFO: Creating a PV followed by a PVC Jun 11 00:04:05.775: INFO: Waiting for PV local-pvlhjgj to bind to PVC pvc-w42xc Jun 11 00:04:05.775: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-w42xc] to have phase Bound Jun 11 00:04:05.777: INFO: PersistentVolumeClaim pvc-w42xc found but phase is Pending instead of Bound. Jun 11 00:04:07.781: INFO: PersistentVolumeClaim pvc-w42xc found but phase is Pending instead of Bound. Jun 11 00:04:09.784: INFO: PersistentVolumeClaim pvc-w42xc found but phase is Pending instead of Bound. Jun 11 00:04:11.789: INFO: PersistentVolumeClaim pvc-w42xc found and phase=Bound (6.013547035s) Jun 11 00:04:11.789: INFO: Waiting up to 3m0s for PersistentVolume local-pvlhjgj to have phase Bound Jun 11 00:04:11.791: INFO: PersistentVolume local-pvlhjgj found and phase=Bound (2.028651ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 11 00:04:11.825: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:04:11.827: INFO: Deleting PersistentVolumeClaim "pvc-w42xc" Jun 11 00:04:11.830: INFO: Deleting PersistentVolume "local-pvlhjgj" STEP: Removing the test directory Jun 11 00:04:11.834: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-f077c12f-0287-4907-85bd-9f301f186552 && rm -r /tmp/local-volume-test-f077c12f-0287-4907-85bd-9f301f186552] Namespace:persistent-local-volumes-test-4925 PodName:hostexec-node1-4zg9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:11.834: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:12.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4925" for this suite. 
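The repeated "found but phase is Pending instead of Bound" entries above are a simple phase poll against the claim. A rough client-go equivalent, not the framework's actual helper (the function name and the 2s interval are assumptions; the 3m timeout and the log wording mirror the output above, and the namespace/claim name in main are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls a PersistentVolumeClaim until its phase is Bound,
// logging intermediate phases much like the framework output above.
func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase == corev1.ClaimBound {
			return true, nil
		}
		fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVCBound(cs, "default", "pvc-example"); err != nil {
		panic(err)
	}
}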
S [SKIPPING] [12.481 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:42.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 STEP: Building a driver namespace object, basename csi-mock-volumes-2091 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:02:42.781: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2091-3785/csi-attacher Jun 11 00:02:42.784: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2091 Jun 11 00:02:42.784: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2091 Jun 11 00:02:42.788: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2091 Jun 11 00:02:42.791: INFO: creating *v1.Role: csi-mock-volumes-2091-3785/external-attacher-cfg-csi-mock-volumes-2091 Jun 11 00:02:42.793: INFO: creating *v1.RoleBinding: csi-mock-volumes-2091-3785/csi-attacher-role-cfg Jun 11 00:02:42.796: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2091-3785/csi-provisioner Jun 11 00:02:42.799: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2091 Jun 11 00:02:42.799: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2091 Jun 11 00:02:42.803: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2091 Jun 11 00:02:42.805: INFO: creating *v1.Role: csi-mock-volumes-2091-3785/external-provisioner-cfg-csi-mock-volumes-2091 Jun 11 00:02:42.808: INFO: creating *v1.RoleBinding: csi-mock-volumes-2091-3785/csi-provisioner-role-cfg Jun 11 00:02:42.810: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2091-3785/csi-resizer Jun 11 00:02:42.812: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2091 Jun 11 00:02:42.812: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2091 Jun 11 00:02:42.815: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2091 Jun 11 00:02:42.818: INFO: creating *v1.Role: csi-mock-volumes-2091-3785/external-resizer-cfg-csi-mock-volumes-2091 Jun 11 00:02:42.821: INFO: creating *v1.RoleBinding: csi-mock-volumes-2091-3785/csi-resizer-role-cfg Jun 11 00:02:42.823: 
INFO: creating *v1.ServiceAccount: csi-mock-volumes-2091-3785/csi-snapshotter Jun 11 00:02:42.826: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2091 Jun 11 00:02:42.826: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2091 Jun 11 00:02:42.829: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2091 Jun 11 00:02:42.831: INFO: creating *v1.Role: csi-mock-volumes-2091-3785/external-snapshotter-leaderelection-csi-mock-volumes-2091 Jun 11 00:02:42.834: INFO: creating *v1.RoleBinding: csi-mock-volumes-2091-3785/external-snapshotter-leaderelection Jun 11 00:02:42.836: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2091-3785/csi-mock Jun 11 00:02:42.839: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2091 Jun 11 00:02:42.841: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2091 Jun 11 00:02:42.843: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2091 Jun 11 00:02:42.846: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2091 Jun 11 00:02:42.848: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2091 Jun 11 00:02:42.850: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2091 Jun 11 00:02:42.852: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2091 Jun 11 00:02:42.855: INFO: creating *v1.StatefulSet: csi-mock-volumes-2091-3785/csi-mockplugin Jun 11 00:02:42.859: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2091 Jun 11 00:02:42.861: INFO: creating *v1.StatefulSet: csi-mock-volumes-2091-3785/csi-mockplugin-attacher Jun 11 00:02:42.864: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2091" Jun 11 00:02:42.866: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2091 to register on node node2 STEP: Creating pod Jun 11 00:03:04.147: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Jun 11 00:03:16.170: INFO: Deleting pod "pvc-volume-tester-sjz4z" in namespace "csi-mock-volumes-2091" Jun 11 00:03:16.175: INFO: Wait up to 5m0s for pod "pvc-volume-tester-sjz4z" to be fully deleted STEP: Deleting pod pvc-volume-tester-sjz4z Jun 11 00:03:22.182: INFO: Deleting pod "pvc-volume-tester-sjz4z" in namespace "csi-mock-volumes-2091" STEP: Deleting claim pvc-jk4fm Jun 11 00:03:22.193: INFO: Waiting up to 2m0s for PersistentVolume pvc-555a2d28-2d1e-4650-a401-c191029a9270 to get deleted Jun 11 00:03:22.195: INFO: PersistentVolume pvc-555a2d28-2d1e-4650-a401-c191029a9270 found and phase=Bound (2.107144ms) Jun 11 00:03:24.204: INFO: PersistentVolume pvc-555a2d28-2d1e-4650-a401-c191029a9270 found and phase=Released (2.011174147s) Jun 11 00:03:26.208: INFO: PersistentVolume pvc-555a2d28-2d1e-4650-a401-c191029a9270 found and phase=Released (4.014993102s) Jun 11 00:03:28.211: INFO: PersistentVolume pvc-555a2d28-2d1e-4650-a401-c191029a9270 found and phase=Released (6.018277153s) Jun 11 00:03:30.217: INFO: PersistentVolume pvc-555a2d28-2d1e-4650-a401-c191029a9270 found and phase=Released (8.024612132s) Jun 11 00:03:32.224: INFO: PersistentVolume pvc-555a2d28-2d1e-4650-a401-c191029a9270 found and phase=Released (10.030814122s) Jun 11 00:03:34.229: INFO: PersistentVolume pvc-555a2d28-2d1e-4650-a401-c191029a9270 found and phase=Released (12.036201925s) Jun 11 00:03:36.232: 
INFO: PersistentVolume pvc-555a2d28-2d1e-4650-a401-c191029a9270 found and phase=Released (14.039523614s) Jun 11 00:03:38.235: INFO: PersistentVolume pvc-555a2d28-2d1e-4650-a401-c191029a9270 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-2091 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2091 STEP: Waiting for namespaces [csi-mock-volumes-2091] to vanish STEP: uninstalling csi mock driver Jun 11 00:03:44.251: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2091-3785/csi-attacher Jun 11 00:03:44.255: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2091 Jun 11 00:03:44.259: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2091 Jun 11 00:03:44.263: INFO: deleting *v1.Role: csi-mock-volumes-2091-3785/external-attacher-cfg-csi-mock-volumes-2091 Jun 11 00:03:44.267: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2091-3785/csi-attacher-role-cfg Jun 11 00:03:44.271: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2091-3785/csi-provisioner Jun 11 00:03:44.275: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2091 Jun 11 00:03:44.279: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2091 Jun 11 00:03:44.282: INFO: deleting *v1.Role: csi-mock-volumes-2091-3785/external-provisioner-cfg-csi-mock-volumes-2091 Jun 11 00:03:44.286: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2091-3785/csi-provisioner-role-cfg Jun 11 00:03:44.290: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2091-3785/csi-resizer Jun 11 00:03:44.294: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2091 Jun 11 00:03:44.298: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2091 Jun 11 00:03:44.301: INFO: deleting *v1.Role: csi-mock-volumes-2091-3785/external-resizer-cfg-csi-mock-volumes-2091 Jun 11 00:03:44.304: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2091-3785/csi-resizer-role-cfg Jun 11 00:03:44.308: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2091-3785/csi-snapshotter Jun 11 00:03:44.311: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2091 Jun 11 00:03:44.314: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2091 Jun 11 00:03:44.318: INFO: deleting *v1.Role: csi-mock-volumes-2091-3785/external-snapshotter-leaderelection-csi-mock-volumes-2091 Jun 11 00:03:44.321: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2091-3785/external-snapshotter-leaderelection Jun 11 00:03:44.324: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2091-3785/csi-mock Jun 11 00:03:44.327: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2091 Jun 11 00:03:44.331: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2091 Jun 11 00:03:44.334: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2091 Jun 11 00:03:44.338: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2091 Jun 11 00:03:44.341: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2091 Jun 11 00:03:44.344: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2091 Jun 11 00:03:44.347: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2091 Jun 11 00:03:44.351: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2091-3785/csi-mockplugin Jun 11 
00:03:44.354: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2091 Jun 11 00:03:44.358: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2091-3785/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-2091-3785 STEP: Waiting for namespaces [csi-mock-volumes-2091-3785] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:12.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:90.037 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256 CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":2,"skipped":24,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:58.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-464fa540-3f77-45f1-b84b-72534ddda68f" Jun 11 00:04:00.121: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-464fa540-3f77-45f1-b84b-72534ddda68f && dd if=/dev/zero of=/tmp/local-volume-test-464fa540-3f77-45f1-b84b-72534ddda68f/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-464fa540-3f77-45f1-b84b-72534ddda68f/file] Namespace:persistent-local-volumes-test-4012 PodName:hostexec-node2-crb8n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.121: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:00.238: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-464fa540-3f77-45f1-b84b-72534ddda68f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4012 PodName:hostexec-node2-crb8n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:04:00.337: INFO: Creating a PV followed by a PVC Jun 11 00:04:00.343: INFO: Waiting for PV local-pvsrv9w to bind to PVC pvc-hlx2j Jun 11 00:04:00.343: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims 
[pvc-hlx2j] to have phase Bound Jun 11 00:04:00.345: INFO: PersistentVolumeClaim pvc-hlx2j found but phase is Pending instead of Bound. Jun 11 00:04:02.350: INFO: PersistentVolumeClaim pvc-hlx2j found but phase is Pending instead of Bound. Jun 11 00:04:04.356: INFO: PersistentVolumeClaim pvc-hlx2j found but phase is Pending instead of Bound. Jun 11 00:04:06.359: INFO: PersistentVolumeClaim pvc-hlx2j found but phase is Pending instead of Bound. Jun 11 00:04:08.363: INFO: PersistentVolumeClaim pvc-hlx2j found but phase is Pending instead of Bound. Jun 11 00:04:10.369: INFO: PersistentVolumeClaim pvc-hlx2j found but phase is Pending instead of Bound. Jun 11 00:04:12.375: INFO: PersistentVolumeClaim pvc-hlx2j found and phase=Bound (12.032268269s) Jun 11 00:04:12.375: INFO: Waiting up to 3m0s for PersistentVolume local-pvsrv9w to have phase Bound Jun 11 00:04:12.377: INFO: PersistentVolume local-pvsrv9w found and phase=Bound (2.183122ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:04:26.402: INFO: pod "pod-2afcb31d-a694-43a9-8d44-1af15db8f188" created on Node "node2" STEP: Writing in pod1 Jun 11 00:04:26.402: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4012 PodName:pod-2afcb31d-a694-43a9-8d44-1af15db8f188 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:26.402: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:26.490: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 11 00:04:26.490: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4012 PodName:pod-2afcb31d-a694-43a9-8d44-1af15db8f188 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:26.490: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:26.569: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 11 00:04:26.569: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4012 PodName:pod-2afcb31d-a694-43a9-8d44-1af15db8f188 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:26.569: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:26.685: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-2afcb31d-a694-43a9-8d44-1af15db8f188 in namespace persistent-local-volumes-test-4012 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 
00:04:26.690: INFO: Deleting PersistentVolumeClaim "pvc-hlx2j" Jun 11 00:04:26.694: INFO: Deleting PersistentVolume "local-pvsrv9w" Jun 11 00:04:26.700: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-464fa540-3f77-45f1-b84b-72534ddda68f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4012 PodName:hostexec-node2-crb8n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:26.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-464fa540-3f77-45f1-b84b-72534ddda68f/file Jun 11 00:04:27.485: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-4012 PodName:hostexec-node2-crb8n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:27.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-464fa540-3f77-45f1-b84b-72534ddda68f Jun 11 00:04:28.287: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-464fa540-3f77-45f1-b84b-72534ddda68f] Namespace:persistent-local-volumes-test-4012 PodName:hostexec-node2-crb8n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:28.287: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:28.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4012" for this suite. 
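
The [Volume type: blockfswithoutformat] fixture above backs the local PV with a loop device: it runs mkdir, dd and losetup inside a hostexec pod (via nsenter into the node's mount namespace), and tears everything down again with losetup -d and rm -r. As a minimal sketch only, the Go program below reproduces those same shell steps directly on a node; the path, helper names and error handling are illustrative and are not the e2e framework's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a shell command line and returns its trimmed combined output.
func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// setupLoopDevice mirrors the "Creating block device" step in the log:
// mkdir + dd (4096-byte blocks * 5120 = 20 MiB backing file) + losetup -f,
// then recovers the loop device name with losetup | grep | awk, as the test does.
func setupLoopDevice(dir string) (string, error) {
	mk := fmt.Sprintf("mkdir -p %s && dd if=/dev/zero of=%s/file bs=4096 count=5120 && losetup -f %s/file", dir, dir, dir)
	if _, err := run(mk); err != nil {
		return "", err
	}
	return run(fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir))
}

// teardownLoopDevice mirrors the cleanup steps: detach the loop device, remove the directory.
func teardownLoopDevice(dir, loopDev string) error {
	_, err := run(fmt.Sprintf("losetup -d %s && rm -r %s", loopDev, dir))
	return err
}

func main() {
	dir := "/tmp/local-volume-test-example" // illustrative scratch path
	dev, err := setupLoopDevice(dir)
	if err != nil {
		panic(err)
	}
	fmt.Println("loop device:", dev)
	if err := teardownLoopDevice(dir, dev); err != nil {
		panic(err)
	}
}
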
• [SLOW TEST:30.511 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":7,"skipped":325,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:37.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W0611 00:02:37.869544 40 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 11 00:02:37.869: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 11 00:02:37.871: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-8920 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 11 00:02:37.922: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8920-9054/csi-attacher Jun 11 00:02:37.924: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8920 Jun 11 00:02:37.924: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8920 Jun 11 00:02:37.927: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8920 Jun 11 00:02:37.929: INFO: creating *v1.Role: csi-mock-volumes-8920-9054/external-attacher-cfg-csi-mock-volumes-8920 Jun 11 00:02:37.932: INFO: creating *v1.RoleBinding: csi-mock-volumes-8920-9054/csi-attacher-role-cfg Jun 11 00:02:37.934: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8920-9054/csi-provisioner Jun 11 00:02:37.936: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8920 Jun 11 00:02:37.936: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8920 Jun 11 00:02:37.939: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8920 Jun 11 00:02:37.942: INFO: creating *v1.Role: csi-mock-volumes-8920-9054/external-provisioner-cfg-csi-mock-volumes-8920 Jun 11 00:02:37.944: INFO: creating *v1.RoleBinding: csi-mock-volumes-8920-9054/csi-provisioner-role-cfg Jun 11 00:02:37.947: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8920-9054/csi-resizer Jun 11 00:02:37.950: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8920 
Jun 11 00:02:37.950: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8920 Jun 11 00:02:37.953: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8920 Jun 11 00:02:37.955: INFO: creating *v1.Role: csi-mock-volumes-8920-9054/external-resizer-cfg-csi-mock-volumes-8920 Jun 11 00:02:37.958: INFO: creating *v1.RoleBinding: csi-mock-volumes-8920-9054/csi-resizer-role-cfg Jun 11 00:02:37.960: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8920-9054/csi-snapshotter Jun 11 00:02:37.962: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8920 Jun 11 00:02:37.962: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8920 Jun 11 00:02:37.965: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8920 Jun 11 00:02:37.968: INFO: creating *v1.Role: csi-mock-volumes-8920-9054/external-snapshotter-leaderelection-csi-mock-volumes-8920 Jun 11 00:02:37.970: INFO: creating *v1.RoleBinding: csi-mock-volumes-8920-9054/external-snapshotter-leaderelection Jun 11 00:02:37.973: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8920-9054/csi-mock Jun 11 00:02:37.976: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8920 Jun 11 00:02:37.979: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8920 Jun 11 00:02:37.982: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8920 Jun 11 00:02:37.985: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8920 Jun 11 00:02:37.988: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8920 Jun 11 00:02:37.990: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8920 Jun 11 00:02:37.993: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8920 Jun 11 00:02:37.998: INFO: creating *v1.StatefulSet: csi-mock-volumes-8920-9054/csi-mockplugin Jun 11 00:02:38.002: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8920 Jun 11 00:02:38.006: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8920" Jun 11 00:02:38.008: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8920 to register on node node2 I0611 00:02:55.132148 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8920","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:02:55.218677 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0611 00:02:55.220460 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8920","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:02:55.222516 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0611 00:02:55.224809 40 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0611 00:02:55.397753 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8920"},"Error":"","FullError":null} STEP: Creating pod Jun 11 00:03:04.403: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:03:04.409: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-c544s] to have phase Bound Jun 11 00:03:04.411: INFO: PersistentVolumeClaim pvc-c544s found but phase is Pending instead of Bound. I0611 00:03:04.417576 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d2090548-6df4-4884-ad39-bd24186d82ad","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d2090548-6df4-4884-ad39-bd24186d82ad"}}},"Error":"","FullError":null} Jun 11 00:03:06.414: INFO: PersistentVolumeClaim pvc-c544s found and phase=Bound (2.005351359s) Jun 11 00:03:06.429: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-c544s] to have phase Bound Jun 11 00:03:06.431: INFO: PersistentVolumeClaim pvc-c544s found and phase=Bound (2.080529ms) STEP: Waiting for expected CSI calls I0611 00:03:06.712254 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:03:06.715199 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d2090548-6df4-4884-ad39-bd24186d82ad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d2090548-6df4-4884-ad39-bd24186d82ad","storage.kubernetes.io/csiProvisionerIdentity":"1654905775224-8081-csi-mock-csi-mock-volumes-8920"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I0611 00:03:07.316622 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:03:07.318943 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d2090548-6df4-4884-ad39-bd24186d82ad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d2090548-6df4-4884-ad39-bd24186d82ad","storage.kubernetes.io/csiProvisionerIdentity":"1654905775224-8081-csi-mock-csi-mock-volumes-8920"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I0611 
00:03:08.330966 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:03:08.333065 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d2090548-6df4-4884-ad39-bd24186d82ad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d2090548-6df4-4884-ad39-bd24186d82ad","storage.kubernetes.io/csiProvisionerIdentity":"1654905775224-8081-csi-mock-csi-mock-volumes-8920"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I0611 00:03:10.411970 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:03:10.413: INFO: >>> kubeConfig: /root/.kube/config I0611 00:03:10.526557 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d2090548-6df4-4884-ad39-bd24186d82ad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d2090548-6df4-4884-ad39-bd24186d82ad","storage.kubernetes.io/csiProvisionerIdentity":"1654905775224-8081-csi-mock-csi-mock-volumes-8920"}},"Response":{},"Error":"","FullError":null} I0611 00:03:10.531981 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:03:10.534: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:10.628: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:10.721: INFO: >>> kubeConfig: /root/.kube/config I0611 00:03:10.912604 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d2090548-6df4-4884-ad39-bd24186d82ad/globalmount","target_path":"/var/lib/kubelet/pods/0818d804-96be-4b29-8896-86dd460a9133/volumes/kubernetes.io~csi/pvc-d2090548-6df4-4884-ad39-bd24186d82ad/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d2090548-6df4-4884-ad39-bd24186d82ad","storage.kubernetes.io/csiProvisionerIdentity":"1654905775224-8081-csi-mock-csi-mock-volumes-8920"}},"Response":{},"Error":"","FullError":null} STEP: Waiting for pod to be running STEP: Deleting the previously created pod Jun 11 00:03:15.440: INFO: Deleting pod "pvc-volume-tester-gz44j" in namespace "csi-mock-volumes-8920" Jun 11 00:03:15.445: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gz44j" to be fully deleted Jun 11 00:03:18.802: INFO: >>> kubeConfig: /root/.kube/config I0611 00:03:18.969896 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/0818d804-96be-4b29-8896-86dd460a9133/volumes/kubernetes.io~csi/pvc-d2090548-6df4-4884-ad39-bd24186d82ad/mount"},"Response":{},"Error":"","FullError":null} I0611 
00:03:19.005163 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:03:19.006926 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d2090548-6df4-4884-ad39-bd24186d82ad/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-gz44j Jun 11 00:03:24.449: INFO: Deleting pod "pvc-volume-tester-gz44j" in namespace "csi-mock-volumes-8920" STEP: Deleting claim pvc-c544s Jun 11 00:03:24.459: INFO: Waiting up to 2m0s for PersistentVolume pvc-d2090548-6df4-4884-ad39-bd24186d82ad to get deleted Jun 11 00:03:24.462: INFO: PersistentVolume pvc-d2090548-6df4-4884-ad39-bd24186d82ad found and phase=Bound (2.314292ms) I0611 00:03:24.474147 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 11 00:03:26.465: INFO: PersistentVolume pvc-d2090548-6df4-4884-ad39-bd24186d82ad was removed STEP: Deleting storageclass csi-mock-volumes-8920-scv8dt7 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8920 STEP: Waiting for namespaces [csi-mock-volumes-8920] to vanish STEP: uninstalling csi mock driver Jun 11 00:03:32.495: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8920-9054/csi-attacher Jun 11 00:03:32.499: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8920 Jun 11 00:03:32.503: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8920 Jun 11 00:03:32.506: INFO: deleting *v1.Role: csi-mock-volumes-8920-9054/external-attacher-cfg-csi-mock-volumes-8920 Jun 11 00:03:32.511: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8920-9054/csi-attacher-role-cfg Jun 11 00:03:32.515: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8920-9054/csi-provisioner Jun 11 00:03:32.520: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8920 Jun 11 00:03:32.524: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8920 Jun 11 00:03:32.527: INFO: deleting *v1.Role: csi-mock-volumes-8920-9054/external-provisioner-cfg-csi-mock-volumes-8920 Jun 11 00:03:32.531: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8920-9054/csi-provisioner-role-cfg Jun 11 00:03:32.534: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8920-9054/csi-resizer Jun 11 00:03:32.537: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8920 Jun 11 00:03:32.540: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8920 Jun 11 00:03:32.544: INFO: deleting *v1.Role: csi-mock-volumes-8920-9054/external-resizer-cfg-csi-mock-volumes-8920 Jun 11 00:03:32.548: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8920-9054/csi-resizer-role-cfg Jun 11 00:03:32.552: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8920-9054/csi-snapshotter Jun 11 00:03:32.555: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8920 Jun 11 00:03:32.559: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8920 Jun 11 00:03:32.562: INFO: deleting *v1.Role: csi-mock-volumes-8920-9054/external-snapshotter-leaderelection-csi-mock-volumes-8920 Jun 11 00:03:32.566: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-8920-9054/external-snapshotter-leaderelection Jun 11 00:03:32.570: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8920-9054/csi-mock Jun 11 00:03:32.574: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8920 Jun 11 00:03:32.578: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8920 Jun 11 00:03:32.581: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8920 Jun 11 00:03:32.585: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8920 Jun 11 00:03:32.589: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8920 Jun 11 00:03:32.593: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8920 Jun 11 00:03:32.596: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8920 Jun 11 00:03:32.599: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8920-9054/csi-mockplugin Jun 11 00:03:32.603: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8920 STEP: deleting the driver namespace: csi-mock-volumes-8920-9054 STEP: Waiting for namespaces [csi-mock-volumes-8920-9054] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:28.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:110.789 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error","total":-1,"completed":1,"skipped":10,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:44.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Jun 11 00:03:48.723: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fd9a7429-1fd3-4929-9646-e44aa49ae8a9] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 
00:03:48.723: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:48.911: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-17a73688-554b-4dce-95ca-48d5f2ae376e] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:48.911: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:49.222: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2155aac8-a9a1-4b1e-98a4-af8453d80a25] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:49.222: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:49.488: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-13e5680c-40e6-4162-ad82-109065ff325f] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:49.488: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:49.725: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3cadeca0-c237-461f-9927-0e8c0849c089] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:49.725: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:03:49.875: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6bb47a1d-8a8d-4eed-b15b-a6a1b7bcffae] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:49.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:03:50.106: INFO: Creating a PV followed by a PVC Jun 11 00:03:50.114: INFO: Creating a PV followed by a PVC Jun 11 00:03:50.119: INFO: Creating a PV followed by a PVC Jun 11 00:03:50.127: INFO: Creating a PV followed by a PVC Jun 11 00:03:50.133: INFO: Creating a PV followed by a PVC Jun 11 00:03:50.138: INFO: Creating a PV followed by a PVC Jun 11 00:04:00.184: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Jun 11 00:04:02.200: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-68996b34-9d58-44a9-8248-c81dd146e5d7] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:02.201: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:02.304: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ecf3e6f9-c26c-48d1-8882-c5e8f85b5957] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:02.304: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:02.598: 
INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1137db61-cd41-4df7-9874-1de192f3a52f] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:02.598: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:02.735: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-894311f6-2205-4494-8c3a-7b7902370ea8] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:02.735: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:02.824: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-38da558e-2309-49a2-9edd-5294c19f3da9] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:02.824: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:03.162: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3b0225c0-6b8a-4fbb-84a6-27de5a9fe1cd] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:03.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:04:03.782: INFO: Creating a PV followed by a PVC Jun 11 00:04:03.788: INFO: Creating a PV followed by a PVC Jun 11 00:04:03.795: INFO: Creating a PV followed by a PVC Jun 11 00:04:03.801: INFO: Creating a PV followed by a PVC Jun 11 00:04:03.806: INFO: Creating a PV followed by a PVC Jun 11 00:04:03.813: INFO: Creating a PV followed by a PVC Jun 11 00:04:13.854: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 STEP: Creating a StatefulSet with pod affinity on nodes Jun 11 00:04:13.861: INFO: Found 0 stateful pods, waiting for 3 Jun 11 00:04:23.866: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Pending - Ready=false Jun 11 00:04:33.865: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Jun 11 00:04:33.865: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Jun 11 00:04:33.865: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Jun 11 00:04:33.868: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Jun 11 00:04:33.871: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (2.686482ms) Jun 11 00:04:33.871: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Jun 11 00:04:33.873: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (2.076882ms) Jun 11 00:04:33.873: INFO: Waiting up to timeout=1s for 
PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Jun 11 00:04:33.875: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (2.056335ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Jun 11 00:04:33.875: INFO: Deleting PersistentVolumeClaim "pvc-xd2d9" Jun 11 00:04:33.879: INFO: Deleting PersistentVolume "local-pvsbmwd" STEP: Cleaning up PVC and PV Jun 11 00:04:33.883: INFO: Deleting PersistentVolumeClaim "pvc-8qzg8" Jun 11 00:04:33.887: INFO: Deleting PersistentVolume "local-pvg4lv2" STEP: Cleaning up PVC and PV Jun 11 00:04:33.890: INFO: Deleting PersistentVolumeClaim "pvc-tv4sz" Jun 11 00:04:33.893: INFO: Deleting PersistentVolume "local-pvm25bc" STEP: Cleaning up PVC and PV Jun 11 00:04:33.897: INFO: Deleting PersistentVolumeClaim "pvc-nmfkb" Jun 11 00:04:33.902: INFO: Deleting PersistentVolume "local-pv4xv78" STEP: Cleaning up PVC and PV Jun 11 00:04:33.905: INFO: Deleting PersistentVolumeClaim "pvc-rbr9l" Jun 11 00:04:33.908: INFO: Deleting PersistentVolume "local-pvh5hvr" STEP: Cleaning up PVC and PV Jun 11 00:04:33.912: INFO: Deleting PersistentVolumeClaim "pvc-7b7c4" Jun 11 00:04:33.915: INFO: Deleting PersistentVolume "local-pvvcjk5" STEP: Removing the test directory Jun 11 00:04:33.919: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fd9a7429-1fd3-4929-9646-e44aa49ae8a9] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:33.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:04:34.027: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-17a73688-554b-4dce-95ca-48d5f2ae376e] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:34.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:04:34.184: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2155aac8-a9a1-4b1e-98a4-af8453d80a25] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:34.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:04:34.356: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-13e5680c-40e6-4162-ad82-109065ff325f] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:34.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:04:34.481: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3cadeca0-c237-461f-9927-0e8c0849c089] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 11 00:04:34.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:04:34.661: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6bb47a1d-8a8d-4eed-b15b-a6a1b7bcffae] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node1-vhkrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:34.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Jun 11 00:04:34.806: INFO: Deleting PersistentVolumeClaim "pvc-ggrkk" Jun 11 00:04:34.809: INFO: Deleting PersistentVolume "local-pv78r6k" STEP: Cleaning up PVC and PV Jun 11 00:04:34.813: INFO: Deleting PersistentVolumeClaim "pvc-22gzh" Jun 11 00:04:34.816: INFO: Deleting PersistentVolume "local-pvh2z2z" STEP: Cleaning up PVC and PV Jun 11 00:04:34.820: INFO: Deleting PersistentVolumeClaim "pvc-lxh4c" Jun 11 00:04:34.823: INFO: Deleting PersistentVolume "local-pvdjn7c" STEP: Cleaning up PVC and PV Jun 11 00:04:34.827: INFO: Deleting PersistentVolumeClaim "pvc-hmqj8" Jun 11 00:04:34.830: INFO: Deleting PersistentVolume "local-pvmkwdh" STEP: Cleaning up PVC and PV Jun 11 00:04:34.834: INFO: Deleting PersistentVolumeClaim "pvc-bvg96" Jun 11 00:04:34.838: INFO: Deleting PersistentVolume "local-pvcddmx" STEP: Cleaning up PVC and PV Jun 11 00:04:34.841: INFO: Deleting PersistentVolumeClaim "pvc-jxtk4" Jun 11 00:04:34.845: INFO: Deleting PersistentVolume "local-pvljxhr" STEP: Removing the test directory Jun 11 00:04:34.849: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-68996b34-9d58-44a9-8248-c81dd146e5d7] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:34.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:04:34.935: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ecf3e6f9-c26c-48d1-8882-c5e8f85b5957] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:34.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:04:35.141: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1137db61-cd41-4df7-9874-1de192f3a52f] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:35.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:04:35.262: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-894311f6-2205-4494-8c3a-7b7902370ea8] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:35.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:04:35.372: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-38da558e-2309-49a2-9edd-5294c19f3da9] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:35.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:04:35.517: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3b0225c0-6b8a-4fbb-84a6-27de5a9fe1cd] Namespace:persistent-local-volumes-test-5253 PodName:hostexec-node2-2q8wt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:35.517: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:35.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5253" for this suite. • [SLOW TEST:50.947 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod management is parallel and pod has affinity","total":-1,"completed":2,"skipped":14,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:08.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:04:18.272: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-77312e53-b06e-4e1b-af13-32fc343665b6] Namespace:persistent-local-volumes-test-4491 PodName:hostexec-node2-bvfsd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:18.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:04:18.600: INFO: Creating a PV followed by a PVC Jun 11 00:04:18.606: INFO: Waiting for PV local-pvg5lzd to bind to PVC pvc-s6lrz Jun 11 00:04:18.607: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-s6lrz] to have phase Bound Jun 11 00:04:18.609: INFO: PersistentVolumeClaim pvc-s6lrz found but phase is Pending instead of Bound. 
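
The run of "found but phase is Pending instead of Bound" entries is the framework polling the claim every couple of seconds until the pre-created local PV and its PVC bind. A minimal client-go sketch of that kind of wait loop is shown below; the kubeconfig path, namespace and claim name are simply taken from this run as placeholders, and the helper is illustrative rather than the framework's own wait function.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls a PVC until it reports phase Bound or the timeout expires.
// Unlike the e2e framework, this sketch aborts on the first API error.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolumeClaim %s phase: %s\n", name, pvc.Status.Phase)
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and claim name are placeholders borrowed from this log.
	if err := waitForPVCBound(cs, "persistent-local-volumes-test-4491", "pvc-s6lrz", 3*time.Minute); err != nil {
		panic(err)
	}
}
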
Jun 11 00:04:20.615: INFO: PersistentVolumeClaim pvc-s6lrz found but phase is Pending instead of Bound. Jun 11 00:04:22.619: INFO: PersistentVolumeClaim pvc-s6lrz found but phase is Pending instead of Bound. Jun 11 00:04:24.622: INFO: PersistentVolumeClaim pvc-s6lrz found but phase is Pending instead of Bound. Jun 11 00:04:26.625: INFO: PersistentVolumeClaim pvc-s6lrz found but phase is Pending instead of Bound. Jun 11 00:04:28.628: INFO: PersistentVolumeClaim pvc-s6lrz found and phase=Bound (10.021252737s) Jun 11 00:04:28.628: INFO: Waiting up to 3m0s for PersistentVolume local-pvg5lzd to have phase Bound Jun 11 00:04:28.630: INFO: PersistentVolume local-pvg5lzd found and phase=Bound (2.584055ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:04:38.659: INFO: pod "pod-77b0f47b-9b4e-411e-aed9-35995eb5a23b" created on Node "node2" STEP: Writing in pod1 Jun 11 00:04:38.659: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4491 PodName:pod-77b0f47b-9b4e-411e-aed9-35995eb5a23b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:38.659: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:38.811: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 11 00:04:38.811: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4491 PodName:pod-77b0f47b-9b4e-411e-aed9-35995eb5a23b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:38.811: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:38.895: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-77b0f47b-9b4e-411e-aed9-35995eb5a23b in namespace persistent-local-volumes-test-4491 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:04:38.902: INFO: Deleting PersistentVolumeClaim "pvc-s6lrz" Jun 11 00:04:38.907: INFO: Deleting PersistentVolume "local-pvg5lzd" STEP: Removing the test directory Jun 11 00:04:38.911: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-77312e53-b06e-4e1b-af13-32fc343665b6] Namespace:persistent-local-volumes-test-4491 PodName:hostexec-node2-bvfsd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:38.911: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:39.002: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4491" for this suite. • [SLOW TEST:30.787 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:39.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Jun 11 00:04:39.132: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Jun 11 00:04:39.137: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:39.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-7714" for this suite. 
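
The PVC Protection spec above is skipped rather than failed: its setup asks for a default StorageClass and this cluster has none marked. Whether a class is the default is carried by an annotation on the StorageClass object; the sketch below shows one way to look for it with client-go. The annotation keys are the standard ones, the rest (kubeconfig path, helper name) is illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// defaultStorageClass returns the name of the StorageClass annotated as the
// cluster default, or an error if none is marked as such.
func defaultStorageClass(cs kubernetes.Interface) (string, error) {
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, sc := range scs.Items {
		// Current and legacy default-class annotations.
		if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" ||
			sc.Annotations["storageclass.beta.kubernetes.io/is-default-class"] == "true" {
			return sc.Name, nil
		}
	}
	return "", fmt.Errorf("no default storage class found")
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name, err := defaultStorageClass(cs)
	if err != nil {
		panic(err)
	}
	fmt.Println("default StorageClass:", name)
}
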
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify "immediate" deletion of a PVC that is not in active use by a pod [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:28.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 11 00:04:28.682: INFO: The status of Pod test-hostpath-type-65fj9 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:30.686: INFO: The status of Pod test-hostpath-type-65fj9 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:32.686: INFO: The status of Pod test-hostpath-type-65fj9 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:34.688: INFO: The status of Pod test-hostpath-type-65fj9 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:36.687: INFO: The status of Pod test-hostpath-type-65fj9 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:38.687: INFO: The status of Pod test-hostpath-type-65fj9 is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Jun 11 00:04:38.690: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-4265 PodName:test-hostpath-type-65fj9 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:38.690: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:50.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-4265" for this suite. 
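
The HostPathType Block Device specs first create a block device inside the test pod (mknod /mnt/test/ablkdev b 89 1) and then mount it through a hostPath volume whose Type is set to BlockDevice, so the kubelet verifies the path really is a block device before the mount succeeds. Below is a minimal sketch of such a volume definition using the k8s.io/api/core/v1 types; the volume name and path mirror the test but are otherwise illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// HostPathBlockDev tells the kubelet to check that the host path is a
	// block device (like the /mnt/test/ablkdev node created with mknod above)
	// before exposing it to the pod.
	blockDev := corev1.HostPathBlockDev

	vol := corev1.Volume{
		Name: "ablkdev",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/mnt/test/ablkdev", // path used by the test pod above
				Type: &blockDev,
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
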
• [SLOW TEST:22.239 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev","total":-1,"completed":2,"skipped":14,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:39.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 11 00:04:39.250: INFO: The status of Pod test-hostpath-type-xr6z6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:41.253: INFO: The status of Pod test-hostpath-type-xr6z6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:43.254: INFO: The status of Pod test-hostpath-type-xr6z6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:45.253: INFO: The status of Pod test-hostpath-type-xr6z6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:47.256: INFO: The status of Pod test-hostpath-type-xr6z6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:49.254: INFO: The status of Pod test-hostpath-type-xr6z6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:04:51.253: INFO: The status of Pod test-hostpath-type-xr6z6 is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Jun 11 00:04:51.255: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-2315 PodName:test-hostpath-type-xr6z6 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:04:51.255: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:53.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-2315" for this suite. 
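
The second HostPathType spec takes the negative path: the same block device, but the hostPath Type is declared as a directory, so the kubelet refuses the mount and the test only watches for the resulting error event ("Checking for HostPathType error event"). A rough client-go sketch of inspecting such events follows; the FailedMount reason string and the namespace are assumptions for illustration, not something this log confirms.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace borrowed from this run; "FailedMount" is the kubelet's usual
	// mount-failure event reason and is assumed here, not taken from the log.
	events, err := cs.CoreV1().Events("host-path-type-block-dev-2315").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "reason=FailedMount"})
	if err != nil {
		panic(err)
	}
	for _, ev := range events.Items {
		fmt.Printf("%s %s/%s: %s\n", ev.Reason, ev.InvolvedObject.Kind, ev.InvolvedObject.Name, ev.Message)
	}
}
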
• [SLOW TEST:14.181 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory","total":-1,"completed":4,"skipped":201,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:53.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 11 00:04:53.478: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:04:53.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-1327" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.062 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:58.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-9868 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:03:58.376: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9868-89/csi-attacher Jun 11 00:03:58.380: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9868 Jun 11 00:03:58.380: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9868 Jun 11 00:03:58.382: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9868 Jun 11 00:03:58.385: INFO: creating *v1.Role: csi-mock-volumes-9868-89/external-attacher-cfg-csi-mock-volumes-9868 Jun 11 00:03:58.389: INFO: 
creating *v1.RoleBinding: csi-mock-volumes-9868-89/csi-attacher-role-cfg Jun 11 00:03:58.391: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9868-89/csi-provisioner Jun 11 00:03:58.394: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9868 Jun 11 00:03:58.394: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9868 Jun 11 00:03:58.397: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9868 Jun 11 00:03:58.400: INFO: creating *v1.Role: csi-mock-volumes-9868-89/external-provisioner-cfg-csi-mock-volumes-9868 Jun 11 00:03:58.403: INFO: creating *v1.RoleBinding: csi-mock-volumes-9868-89/csi-provisioner-role-cfg Jun 11 00:03:58.406: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9868-89/csi-resizer Jun 11 00:03:58.409: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9868 Jun 11 00:03:58.409: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9868 Jun 11 00:03:58.411: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9868 Jun 11 00:03:58.414: INFO: creating *v1.Role: csi-mock-volumes-9868-89/external-resizer-cfg-csi-mock-volumes-9868 Jun 11 00:03:58.416: INFO: creating *v1.RoleBinding: csi-mock-volumes-9868-89/csi-resizer-role-cfg Jun 11 00:03:58.419: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9868-89/csi-snapshotter Jun 11 00:03:58.421: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9868 Jun 11 00:03:58.421: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9868 Jun 11 00:03:58.424: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9868 Jun 11 00:03:58.426: INFO: creating *v1.Role: csi-mock-volumes-9868-89/external-snapshotter-leaderelection-csi-mock-volumes-9868 Jun 11 00:03:58.429: INFO: creating *v1.RoleBinding: csi-mock-volumes-9868-89/external-snapshotter-leaderelection Jun 11 00:03:58.431: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9868-89/csi-mock Jun 11 00:03:58.434: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9868 Jun 11 00:03:58.438: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9868 Jun 11 00:03:58.440: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9868 Jun 11 00:03:58.444: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9868 Jun 11 00:03:58.446: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9868 Jun 11 00:03:58.449: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9868 Jun 11 00:03:58.452: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9868 Jun 11 00:03:58.455: INFO: creating *v1.StatefulSet: csi-mock-volumes-9868-89/csi-mockplugin Jun 11 00:03:58.460: INFO: creating *v1.StatefulSet: csi-mock-volumes-9868-89/csi-mockplugin-attacher Jun 11 00:03:58.464: INFO: creating *v1.StatefulSet: csi-mock-volumes-9868-89/csi-mockplugin-resizer Jun 11 00:03:58.467: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9868 to register on node node1 STEP: Creating pod Jun 11 00:04:14.738: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:04:14.743: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-nkdfc] to have phase Bound Jun 11 00:04:14.745: INFO: PersistentVolumeClaim pvc-nkdfc found but phase is Pending instead of 
Bound. Jun 11 00:04:16.748: INFO: PersistentVolumeClaim pvc-nkdfc found and phase=Bound (2.004907343s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-x7fbx Jun 11 00:04:38.790: INFO: Deleting pod "pvc-volume-tester-x7fbx" in namespace "csi-mock-volumes-9868" Jun 11 00:04:38.797: INFO: Wait up to 5m0s for pod "pvc-volume-tester-x7fbx" to be fully deleted STEP: Deleting claim pvc-nkdfc Jun 11 00:04:46.810: INFO: Waiting up to 2m0s for PersistentVolume pvc-bbbd406d-47ed-4a37-bab9-ec50b103f109 to get deleted Jun 11 00:04:46.813: INFO: PersistentVolume pvc-bbbd406d-47ed-4a37-bab9-ec50b103f109 found and phase=Bound (2.448606ms) Jun 11 00:04:48.816: INFO: PersistentVolume pvc-bbbd406d-47ed-4a37-bab9-ec50b103f109 found and phase=Released (2.005925675s) Jun 11 00:04:50.820: INFO: PersistentVolume pvc-bbbd406d-47ed-4a37-bab9-ec50b103f109 found and phase=Released (4.009499328s) Jun 11 00:04:52.824: INFO: PersistentVolume pvc-bbbd406d-47ed-4a37-bab9-ec50b103f109 found and phase=Released (6.013486867s) Jun 11 00:04:54.828: INFO: PersistentVolume pvc-bbbd406d-47ed-4a37-bab9-ec50b103f109 was removed STEP: Deleting storageclass csi-mock-volumes-9868-sckcfgx STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9868 STEP: Waiting for namespaces [csi-mock-volumes-9868] to vanish STEP: uninstalling csi mock driver Jun 11 00:05:00.843: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9868-89/csi-attacher Jun 11 00:05:00.847: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9868 Jun 11 00:05:00.850: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9868 Jun 11 00:05:00.853: INFO: deleting *v1.Role: csi-mock-volumes-9868-89/external-attacher-cfg-csi-mock-volumes-9868 Jun 11 00:05:00.858: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9868-89/csi-attacher-role-cfg Jun 11 00:05:00.862: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9868-89/csi-provisioner Jun 11 00:05:00.865: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9868 Jun 11 00:05:00.868: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9868 Jun 11 00:05:00.871: INFO: deleting *v1.Role: csi-mock-volumes-9868-89/external-provisioner-cfg-csi-mock-volumes-9868 Jun 11 00:05:00.875: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9868-89/csi-provisioner-role-cfg Jun 11 00:05:00.879: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9868-89/csi-resizer Jun 11 00:05:00.882: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9868 Jun 11 00:05:00.886: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9868 Jun 11 00:05:00.889: INFO: deleting *v1.Role: csi-mock-volumes-9868-89/external-resizer-cfg-csi-mock-volumes-9868 Jun 11 00:05:00.892: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9868-89/csi-resizer-role-cfg Jun 11 00:05:00.896: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9868-89/csi-snapshotter Jun 11 00:05:00.899: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9868 Jun 11 00:05:00.904: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9868 Jun 11 00:05:00.908: INFO: deleting *v1.Role: csi-mock-volumes-9868-89/external-snapshotter-leaderelection-csi-mock-volumes-9868 Jun 11 00:05:00.911: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9868-89/external-snapshotter-leaderelection Jun 11 
00:05:00.914: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9868-89/csi-mock Jun 11 00:05:00.918: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9868 Jun 11 00:05:00.922: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9868 Jun 11 00:05:00.925: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9868 Jun 11 00:05:00.929: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9868 Jun 11 00:05:00.933: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9868 Jun 11 00:05:00.937: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9868 Jun 11 00:05:00.940: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9868 Jun 11 00:05:00.944: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9868-89/csi-mockplugin Jun 11 00:05:00.948: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9868-89/csi-mockplugin-attacher Jun 11 00:05:00.951: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9868-89/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-9868-89 STEP: Waiting for namespaces [csi-mock-volumes-9868-89] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:12.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:74.662 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":4,"skipped":61,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:50.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-d6b3d57c-2045-47f6-b5a0-a4aa805d3184" Jun 11 00:05:00.986: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d6b3d57c-2045-47f6-b5a0-a4aa805d3184 && dd if=/dev/zero of=/tmp/local-volume-test-d6b3d57c-2045-47f6-b5a0-a4aa805d3184/file bs=4096 count=5120 && losetup -f 
/tmp/local-volume-test-d6b3d57c-2045-47f6-b5a0-a4aa805d3184/file] Namespace:persistent-local-volumes-test-9070 PodName:hostexec-node2-msn64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:00.986: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:01.128: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d6b3d57c-2045-47f6-b5a0-a4aa805d3184/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9070 PodName:hostexec-node2-msn64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:01.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:01.349: INFO: Creating a PV followed by a PVC Jun 11 00:05:01.356: INFO: Waiting for PV local-pv65xw4 to bind to PVC pvc-blj9w Jun 11 00:05:01.356: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-blj9w] to have phase Bound Jun 11 00:05:01.358: INFO: PersistentVolumeClaim pvc-blj9w found but phase is Pending instead of Bound. Jun 11 00:05:03.362: INFO: PersistentVolumeClaim pvc-blj9w found and phase=Bound (2.005450078s) Jun 11 00:05:03.362: INFO: Waiting up to 3m0s for PersistentVolume local-pv65xw4 to have phase Bound Jun 11 00:05:03.364: INFO: PersistentVolume local-pv65xw4 found and phase=Bound (1.970438ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 11 00:05:15.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9070 exec pod-56e032de-e239-4fd9-885c-2d3a5f6c3e5a --namespace=persistent-local-volumes-test-9070 -- stat -c %g /mnt/volume1' Jun 11 00:05:15.631: INFO: stderr: "" Jun 11 00:05:15.631: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-56e032de-e239-4fd9-885c-2d3a5f6c3e5a in namespace persistent-local-volumes-test-9070 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:05:15.638: INFO: Deleting PersistentVolumeClaim "pvc-blj9w" Jun 11 00:05:15.642: INFO: Deleting PersistentVolume "local-pv65xw4" Jun 11 00:05:15.645: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d6b3d57c-2045-47f6-b5a0-a4aa805d3184/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9070 PodName:hostexec-node2-msn64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:15.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-d6b3d57c-2045-47f6-b5a0-a4aa805d3184/file Jun 11 00:05:15.737: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9070 PodName:hostexec-node2-msn64 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:15.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-d6b3d57c-2045-47f6-b5a0-a4aa805d3184 Jun 11 00:05:15.853: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d6b3d57c-2045-47f6-b5a0-a4aa805d3184] Namespace:persistent-local-volumes-test-9070 PodName:hostexec-node2-msn64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:15.853: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:15.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9070" for this suite. • [SLOW TEST:25.070 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":3,"skipped":36,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:53.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-453db0ff-e3c4-49d9-965c-c85d17dcef6f" Jun 11 00:05:11.538: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-453db0ff-e3c4-49d9-965c-c85d17dcef6f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-453db0ff-e3c4-49d9-965c-c85d17dcef6f" "/tmp/local-volume-test-453db0ff-e3c4-49d9-965c-c85d17dcef6f"] Namespace:persistent-local-volumes-test-1567 PodName:hostexec-node2-cv679 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:11.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:11.703: INFO: Creating a PV followed by a 
PVC Jun 11 00:05:11.710: INFO: Waiting for PV local-pvkpxqw to bind to PVC pvc-tfmss Jun 11 00:05:11.710: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-tfmss] to have phase Bound Jun 11 00:05:11.713: INFO: PersistentVolumeClaim pvc-tfmss found but phase is Pending instead of Bound. Jun 11 00:05:13.717: INFO: PersistentVolumeClaim pvc-tfmss found and phase=Bound (2.006237478s) Jun 11 00:05:13.717: INFO: Waiting up to 3m0s for PersistentVolume local-pvkpxqw to have phase Bound Jun 11 00:05:13.719: INFO: PersistentVolume local-pvkpxqw found and phase=Bound (2.613966ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 11 00:05:25.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1567 exec pod-26532658-6be6-47c4-a88d-59ec0ff114f5 --namespace=persistent-local-volumes-test-1567 -- stat -c %g /mnt/volume1' Jun 11 00:05:26.032: INFO: stderr: "" Jun 11 00:05:26.032: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-26532658-6be6-47c4-a88d-59ec0ff114f5 in namespace persistent-local-volumes-test-1567 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:05:26.036: INFO: Deleting PersistentVolumeClaim "pvc-tfmss" Jun 11 00:05:26.040: INFO: Deleting PersistentVolume "local-pvkpxqw" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-453db0ff-e3c4-49d9-965c-c85d17dcef6f" Jun 11 00:05:26.044: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-453db0ff-e3c4-49d9-965c-c85d17dcef6f"] Namespace:persistent-local-volumes-test-1567 PodName:hostexec-node2-cv679 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:26.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:26.219: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-453db0ff-e3c4-49d9-965c-c85d17dcef6f] Namespace:persistent-local-volumes-test-1567 PodName:hostexec-node2-cv679 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:26.219: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:26.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1567" for this suite. 
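For reference, the tmpfs volume type exercised by the test above is prepared, checked, and torn down entirely through shell commands run in the hostexec pod, as the ExecWithOptions and kubectl entries show. A minimal sketch of that sequence, assuming a placeholder directory, pod name, and namespace in place of the per-run generated ones:

  # back the local PV with a 10 MiB tmpfs mount (path is a placeholder)
  DIR=/tmp/local-volume-test-example
  mkdir -p "$DIR" && mount -t tmpfs -o size=10m tmpfs-"$DIR" "$DIR"

  # with the PVC bound and the test pod running, the fsGroup check is a single stat;
  # 1234 is the group id the suite expects to see on the mounted volume
  kubectl exec <pod-name> --namespace=<test-namespace> -- stat -c %g /mnt/volume1

  # teardown mirrors the setup
  umount "$DIR" && rm -r "$DIR"

The "1234" in the captured stdout above is that expected group id; any other value would fail the fsGroup assertion.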
• [SLOW TEST:33.052 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":5,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:13.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-c6c3b66d-385a-42d1-bb97-f7b3f3a4dd14" Jun 11 00:05:23.069: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c6c3b66d-385a-42d1-bb97-f7b3f3a4dd14 && dd if=/dev/zero of=/tmp/local-volume-test-c6c3b66d-385a-42d1-bb97-f7b3f3a4dd14/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-c6c3b66d-385a-42d1-bb97-f7b3f3a4dd14/file] Namespace:persistent-local-volumes-test-1463 PodName:hostexec-node2-tp4ld ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:23.069: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:23.449: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-c6c3b66d-385a-42d1-bb97-f7b3f3a4dd14/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1463 PodName:hostexec-node2-tp4ld ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:23.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:23.656: INFO: Creating a PV followed by a PVC Jun 11 00:05:23.663: INFO: Waiting for PV local-pvm6tcl to bind to PVC pvc-sx9c5 Jun 11 00:05:23.663: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-sx9c5] to have phase Bound Jun 11 00:05:23.666: INFO: PersistentVolumeClaim pvc-sx9c5 found but phase is Pending instead of Bound. Jun 11 00:05:25.671: INFO: PersistentVolumeClaim pvc-sx9c5 found but phase is Pending instead of Bound. 
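The "blockfswithoutformat" volume type being initialized here follows the same pattern, except that the local volume is backed by a file exposed as a loop device rather than by a tmpfs mount, and, as the name suggests, no filesystem is created on the device beforehand (there is no mkfs step in this setup). A minimal sketch of the setup and teardown commands the hostexec pod runs, again with a placeholder path standing in for the generated directory:

  # create a 20 MiB backing file and attach it to the first free loop device
  DIR=/tmp/local-volume-test-example
  mkdir -p "$DIR" && dd if=/dev/zero of="$DIR/file" bs=4096 count=5120 && losetup -f "$DIR/file"

  # recover the loop device that losetup assigned to the file (e.g. /dev/loop0)
  LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')

  # teardown: detach the loop device, then remove the backing directory
  losetup -d "$LOOP_DEV"
  rm -r "$DIR"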
Jun 11 00:05:27.675: INFO: PersistentVolumeClaim pvc-sx9c5 found and phase=Bound (4.011879681s) Jun 11 00:05:27.675: INFO: Waiting up to 3m0s for PersistentVolume local-pvm6tcl to have phase Bound Jun 11 00:05:27.678: INFO: PersistentVolume local-pvm6tcl found and phase=Bound (2.279007ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:05:35.710: INFO: pod "pod-1961fab0-ff09-4517-a000-c4ae8e1cf1d6" created on Node "node2" STEP: Writing in pod1 Jun 11 00:05:35.710: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1463 PodName:pod-1961fab0-ff09-4517-a000-c4ae8e1cf1d6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:05:35.710: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:35.806: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 11 00:05:35.806: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1463 PodName:pod-1961fab0-ff09-4517-a000-c4ae8e1cf1d6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:05:35.806: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:35.887: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-1961fab0-ff09-4517-a000-c4ae8e1cf1d6 in namespace persistent-local-volumes-test-1463 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:05:35.892: INFO: Deleting PersistentVolumeClaim "pvc-sx9c5" Jun 11 00:05:35.896: INFO: Deleting PersistentVolume "local-pvm6tcl" Jun 11 00:05:35.900: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-c6c3b66d-385a-42d1-bb97-f7b3f3a4dd14/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1463 PodName:hostexec-node2-tp4ld ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-c6c3b66d-385a-42d1-bb97-f7b3f3a4dd14/file Jun 11 00:05:35.995: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-1463 PodName:hostexec-node2-tp4ld ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory 
/tmp/local-volume-test-c6c3b66d-385a-42d1-bb97-f7b3f3a4dd14 Jun 11 00:05:36.088: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c6c3b66d-385a-42d1-bb97-f7b3f3a4dd14] Namespace:persistent-local-volumes-test-1463 PodName:hostexec-node2-tp4ld ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:36.088: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:36.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1463" for this suite. • [SLOW TEST:23.190 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":84,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:03:52.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:455 STEP: Setting up 10 local volumes on node "node1" STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-26f795e1-f9d8-4b8a-919f-7eb9eb84b56c" Jun 11 00:03:56.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-26f795e1-f9d8-4b8a-919f-7eb9eb84b56c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-26f795e1-f9d8-4b8a-919f-7eb9eb84b56c" "/tmp/local-volume-test-26f795e1-f9d8-4b8a-919f-7eb9eb84b56c"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:56.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-78d54473-5b35-4dc0-9819-bf43e8bf4777" Jun 11 00:03:57.180: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-78d54473-5b35-4dc0-9819-bf43e8bf4777" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-78d54473-5b35-4dc0-9819-bf43e8bf4777" "/tmp/local-volume-test-78d54473-5b35-4dc0-9819-bf43e8bf4777"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:57.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d420a0e7-0111-4c98-a9fd-2597d29c545f" Jun 11 00:03:57.266: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d420a0e7-0111-4c98-a9fd-2597d29c545f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d420a0e7-0111-4c98-a9fd-2597d29c545f" "/tmp/local-volume-test-d420a0e7-0111-4c98-a9fd-2597d29c545f"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:57.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-e83949ea-e1a0-470d-be71-6170bd1da9c5" Jun 11 00:03:57.362: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e83949ea-e1a0-470d-be71-6170bd1da9c5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e83949ea-e1a0-470d-be71-6170bd1da9c5" "/tmp/local-volume-test-e83949ea-e1a0-470d-be71-6170bd1da9c5"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:57.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c50c0242-e291-48dd-b487-f7a959f9ffb6" Jun 11 00:03:57.449: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c50c0242-e291-48dd-b487-f7a959f9ffb6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c50c0242-e291-48dd-b487-f7a959f9ffb6" "/tmp/local-volume-test-c50c0242-e291-48dd-b487-f7a959f9ffb6"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:57.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-25f372fd-fc23-4a00-a413-5d08f47985dd" Jun 11 00:03:57.550: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-25f372fd-fc23-4a00-a413-5d08f47985dd" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-25f372fd-fc23-4a00-a413-5d08f47985dd" "/tmp/local-volume-test-25f372fd-fc23-4a00-a413-5d08f47985dd"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:57.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-bedc1b10-dba6-4f39-96ea-ddfd0ed6cc81" Jun 11 00:03:57.637: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-bedc1b10-dba6-4f39-96ea-ddfd0ed6cc81" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-bedc1b10-dba6-4f39-96ea-ddfd0ed6cc81" "/tmp/local-volume-test-bedc1b10-dba6-4f39-96ea-ddfd0ed6cc81"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:57.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-19f7bcbd-b305-48ef-97bf-24563d0763af" Jun 11 00:03:57.724: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-19f7bcbd-b305-48ef-97bf-24563d0763af" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-19f7bcbd-b305-48ef-97bf-24563d0763af" "/tmp/local-volume-test-19f7bcbd-b305-48ef-97bf-24563d0763af"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:57.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-2c848609-9414-46f5-a268-245331c8a044" Jun 11 00:03:57.814: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2c848609-9414-46f5-a268-245331c8a044" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2c848609-9414-46f5-a268-245331c8a044" "/tmp/local-volume-test-2c848609-9414-46f5-a268-245331c8a044"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:57.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-e4e5a403-2029-46fd-a06a-5021308093e2" Jun 11 00:03:57.928: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e4e5a403-2029-46fd-a06a-5021308093e2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e4e5a403-2029-46fd-a06a-5021308093e2" "/tmp/local-volume-test-e4e5a403-2029-46fd-a06a-5021308093e2"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:03:57.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "node2" STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-255f7fbf-a15d-4896-9864-6d46ff85465b" Jun 11 00:04:00.041: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-255f7fbf-a15d-4896-9864-6d46ff85465b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-255f7fbf-a15d-4896-9864-6d46ff85465b" "/tmp/local-volume-test-255f7fbf-a15d-4896-9864-6d46ff85465b"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-cad9189e-9a8c-45f0-b565-ca347df5b35c" Jun 11 00:04:00.165: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-cad9189e-9a8c-45f0-b565-ca347df5b35c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-cad9189e-9a8c-45f0-b565-ca347df5b35c" "/tmp/local-volume-test-cad9189e-9a8c-45f0-b565-ca347df5b35c"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-5e9892ac-1047-4627-a1b5-7d51b5aa1d0f" Jun 11 00:04:00.284: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5e9892ac-1047-4627-a1b5-7d51b5aa1d0f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5e9892ac-1047-4627-a1b5-7d51b5aa1d0f" "/tmp/local-volume-test-5e9892ac-1047-4627-a1b5-7d51b5aa1d0f"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-420352c9-4c5f-4d24-8750-eaf099423a0f" Jun 11 00:04:00.365: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-420352c9-4c5f-4d24-8750-eaf099423a0f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-420352c9-4c5f-4d24-8750-eaf099423a0f" "/tmp/local-volume-test-420352c9-4c5f-4d24-8750-eaf099423a0f"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-24ed5a40-2c91-4703-b824-d7531d9e6794" Jun 11 00:04:00.449: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-24ed5a40-2c91-4703-b824-d7531d9e6794" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-24ed5a40-2c91-4703-b824-d7531d9e6794" "/tmp/local-volume-test-24ed5a40-2c91-4703-b824-d7531d9e6794"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-94eddc8c-47b4-44a6-bf99-8667d36fc2b9" Jun 11 00:04:00.531: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-94eddc8c-47b4-44a6-bf99-8667d36fc2b9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-94eddc8c-47b4-44a6-bf99-8667d36fc2b9" "/tmp/local-volume-test-94eddc8c-47b4-44a6-bf99-8667d36fc2b9"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-a55730a7-05d9-43e7-bfcf-b1782abcf05f" Jun 11 00:04:00.659: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-a55730a7-05d9-43e7-bfcf-b1782abcf05f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a55730a7-05d9-43e7-bfcf-b1782abcf05f" "/tmp/local-volume-test-a55730a7-05d9-43e7-bfcf-b1782abcf05f"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-9120109d-aadc-454d-ad07-5af24861cfe0" Jun 11 00:04:00.749: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9120109d-aadc-454d-ad07-5af24861cfe0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9120109d-aadc-454d-ad07-5af24861cfe0" "/tmp/local-volume-test-9120109d-aadc-454d-ad07-5af24861cfe0"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ebefec66-2823-47d7-924b-0cf1d8e203e9" Jun 11 00:04:00.850: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ebefec66-2823-47d7-924b-0cf1d8e203e9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ebefec66-2823-47d7-924b-0cf1d8e203e9" "/tmp/local-volume-test-ebefec66-2823-47d7-924b-0cf1d8e203e9"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-cf5ec20a-3e38-4d29-a6cd-66feb2d0d44e" Jun 11 00:04:00.934: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-cf5ec20a-3e38-4d29-a6cd-66feb2d0d44e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-cf5ec20a-3e38-4d29-a6cd-66feb2d0d44e" "/tmp/local-volume-test-cf5ec20a-3e38-4d29-a6cd-66feb2d0d44e"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:00.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully STEP: Delete "local-pvlhjgj" and create a new PV for same local volume storage Jun 11 00:04:12.212: INFO: Deleting pod pod-1abc8efe-afe6-449d-8ef3-6b7f0fca656d Jun 11 00:04:12.219: INFO: Deleting PersistentVolumeClaim "pvc-fbhxs" Jun 11 00:04:12.223: INFO: Deleting PersistentVolumeClaim "pvc-c2sgd" Jun 11 00:04:12.226: INFO: Deleting PersistentVolumeClaim "pvc-gnd2n" Jun 11 00:04:12.230: INFO: 1/28 pods finished STEP: Delete "local-pvhx5fr" and create a new PV for same local volume storage STEP: Delete "local-pvklvnf" and create a new PV for same local volume storage STEP: Delete "local-pvn6v4g" and create a new PV for same 
local volume storage Jun 11 00:04:13.214: INFO: Deleting pod pod-d9487863-ee15-4c41-90d9-912294047df1 Jun 11 00:04:13.220: INFO: Deleting PersistentVolumeClaim "pvc-v9t5h" Jun 11 00:04:13.224: INFO: Deleting PersistentVolumeClaim "pvc-n84s8" Jun 11 00:04:13.227: INFO: Deleting PersistentVolumeClaim "pvc-whfhn" Jun 11 00:04:13.231: INFO: 2/28 pods finished STEP: Delete "local-pvj4kvr" and create a new PV for same local volume storage STEP: Delete "local-pvjgkpg" and create a new PV for same local volume storage STEP: Delete "local-pvjgkpg" and create a new PV for same local volume storage STEP: Delete "local-pvbmdzn" and create a new PV for same local volume storage STEP: Delete "local-pvz9g4q" and create a new PV for same local volume storage Jun 11 00:04:14.212: INFO: Deleting pod pod-957cf826-b243-46c6-ab87-ee721bfdc0ac Jun 11 00:04:14.219: INFO: Deleting PersistentVolumeClaim "pvc-f96tm" Jun 11 00:04:14.224: INFO: Deleting PersistentVolumeClaim "pvc-htblv" Jun 11 00:04:14.227: INFO: Deleting PersistentVolumeClaim "pvc-6sk8f" Jun 11 00:04:14.231: INFO: 3/28 pods finished STEP: Delete "local-pv65rwh" and create a new PV for same local volume storage STEP: Delete "local-pvpthwj" and create a new PV for same local volume storage STEP: Delete "local-pvq6w9r" and create a new PV for same local volume storage Jun 11 00:04:16.211: INFO: Deleting pod pod-7007c8ba-28e0-4122-b6c0-8d8d4a56abf8 Jun 11 00:04:16.217: INFO: Deleting PersistentVolumeClaim "pvc-5j7dc" Jun 11 00:04:16.222: INFO: Deleting PersistentVolumeClaim "pvc-9gl8n" Jun 11 00:04:16.225: INFO: Deleting PersistentVolumeClaim "pvc-hpn72" Jun 11 00:04:16.230: INFO: 4/28 pods finished STEP: Delete "local-pv7rckb" and create a new PV for same local volume storage STEP: Delete "local-pv8q2hg" and create a new PV for same local volume storage STEP: Delete "local-pvd8wkw" and create a new PV for same local volume storage Jun 11 00:04:17.211: INFO: Deleting pod pod-4800d55b-0086-4daa-84fd-3702a94b7f71 Jun 11 00:04:17.217: INFO: Deleting PersistentVolumeClaim "pvc-pv9nw" Jun 11 00:04:17.220: INFO: Deleting PersistentVolumeClaim "pvc-gf5jc" Jun 11 00:04:17.224: INFO: Deleting PersistentVolumeClaim "pvc-knv7r" Jun 11 00:04:17.227: INFO: 5/28 pods finished STEP: Delete "local-pvffj9w" and create a new PV for same local volume storage STEP: Delete "local-pv878fs" and create a new PV for same local volume storage STEP: Delete "local-pv49vvn" and create a new PV for same local volume storage Jun 11 00:04:21.212: INFO: Deleting pod pod-8ac7cf8c-b0a4-4c64-ab51-773c9df9f155 Jun 11 00:04:21.219: INFO: Deleting PersistentVolumeClaim "pvc-q5wh4" Jun 11 00:04:21.223: INFO: Deleting PersistentVolumeClaim "pvc-k5kmb" Jun 11 00:04:21.228: INFO: Deleting PersistentVolumeClaim "pvc-xvpxn" Jun 11 00:04:21.232: INFO: 6/28 pods finished STEP: Delete "local-pvg6mrw" and create a new PV for same local volume storage STEP: Delete "local-pvld6sk" and create a new PV for same local volume storage STEP: Delete "local-pvg5fnc" and create a new PV for same local volume storage Jun 11 00:04:29.212: INFO: Deleting pod pod-ad6bae59-682d-4a66-b319-358be0fed7c1 Jun 11 00:04:29.219: INFO: Deleting PersistentVolumeClaim "pvc-tdpqs" Jun 11 00:04:29.223: INFO: Deleting PersistentVolumeClaim "pvc-wphc4" Jun 11 00:04:29.227: INFO: Deleting PersistentVolumeClaim "pvc-nvfps" Jun 11 00:04:29.231: INFO: 7/28 pods finished Jun 11 00:04:29.231: INFO: Deleting pod pod-b1004545-53fe-4011-8a82-77ce79d30b4b Jun 11 00:04:29.237: INFO: Deleting PersistentVolumeClaim "pvc-ngz2z" STEP: Delete 
"local-pvk4kpv" and create a new PV for same local volume storage Jun 11 00:04:29.241: INFO: Deleting PersistentVolumeClaim "pvc-zcdsq" Jun 11 00:04:29.244: INFO: Deleting PersistentVolumeClaim "pvc-dfwqk" Jun 11 00:04:29.248: INFO: 8/28 pods finished STEP: Delete "local-pvw86lz" and create a new PV for same local volume storage STEP: Delete "local-pvvxbl2" and create a new PV for same local volume storage STEP: Delete "local-pvgmzbz" and create a new PV for same local volume storage STEP: Delete "local-pvvhbbf" and create a new PV for same local volume storage STEP: Delete "local-pvhk7tr" and create a new PV for same local volume storage Jun 11 00:04:31.212: INFO: Deleting pod pod-1df95420-ff49-4618-ac97-0a8f02f14aa0 Jun 11 00:04:31.220: INFO: Deleting PersistentVolumeClaim "pvc-57x4r" Jun 11 00:04:31.223: INFO: Deleting PersistentVolumeClaim "pvc-grbcr" Jun 11 00:04:31.228: INFO: Deleting PersistentVolumeClaim "pvc-r9vgg" Jun 11 00:04:31.231: INFO: 9/28 pods finished STEP: Delete "local-pvxxdp4" and create a new PV for same local volume storage STEP: Delete "local-pvxpw65" and create a new PV for same local volume storage STEP: Delete "local-pvvf6t4" and create a new PV for same local volume storage STEP: Delete "local-pvsrv9w" and create a new PV for same local volume storage Jun 11 00:04:35.212: INFO: Deleting pod pod-2f1fd255-5dd5-4a21-a8a6-dbf5b88a3628 Jun 11 00:04:35.218: INFO: Deleting PersistentVolumeClaim "pvc-22bd8" Jun 11 00:04:35.222: INFO: Deleting PersistentVolumeClaim "pvc-s5jrm" Jun 11 00:04:35.225: INFO: Deleting PersistentVolumeClaim "pvc-wxpjw" Jun 11 00:04:35.229: INFO: 10/28 pods finished STEP: Delete "local-pvjxrfq" and create a new PV for same local volume storage STEP: Delete "local-pvzbbwg" and create a new PV for same local volume storage STEP: Delete "local-pvp9v9f" and create a new PV for same local volume storage Jun 11 00:04:36.211: INFO: Deleting pod pod-9f742c81-a66a-4628-8c6c-9de1c31d7a21 Jun 11 00:04:36.217: INFO: Deleting PersistentVolumeClaim "pvc-l2z98" Jun 11 00:04:36.220: INFO: Deleting PersistentVolumeClaim "pvc-xngrh" Jun 11 00:04:36.224: INFO: Deleting PersistentVolumeClaim "pvc-ptt9k" Jun 11 00:04:36.228: INFO: 11/28 pods finished STEP: Delete "local-pvgl6mb" and create a new PV for same local volume storage STEP: Delete "local-pvwh72v" and create a new PV for same local volume storage STEP: Delete "local-pv9pw74" and create a new PV for same local volume storage Jun 11 00:04:40.213: INFO: Deleting pod pod-d85d76a1-0e7a-4977-b261-21616e0b1e3c Jun 11 00:04:40.220: INFO: Deleting PersistentVolumeClaim "pvc-9zstk" Jun 11 00:04:40.225: INFO: Deleting PersistentVolumeClaim "pvc-7rdlr" Jun 11 00:04:40.228: INFO: Deleting PersistentVolumeClaim "pvc-fgpdp" Jun 11 00:04:40.233: INFO: 12/28 pods finished STEP: Delete "local-pvz9f2m" and create a new PV for same local volume storage STEP: Delete "local-pvz9f2m" and create a new PV for same local volume storage STEP: Delete "local-pvwff5s" and create a new PV for same local volume storage STEP: Delete "local-pvwnkj8" and create a new PV for same local volume storage STEP: Delete "local-pvljxhr" and create a new PV for same local volume storage STEP: Delete "local-pv78r6k" and create a new PV for same local volume storage STEP: Delete "local-pvdjn7c" and create a new PV for same local volume storage STEP: Delete "local-pvg5lzd" and create a new PV for same local volume storage Jun 11 00:04:45.212: INFO: Deleting pod pod-96920f03-7f6f-4ff8-8dd9-2c06b5b20453 Jun 11 00:04:45.219: INFO: Deleting 
PersistentVolumeClaim "pvc-rdh2k" Jun 11 00:04:45.222: INFO: Deleting PersistentVolumeClaim "pvc-ppq6q" Jun 11 00:04:45.226: INFO: Deleting PersistentVolumeClaim "pvc-pqjks" Jun 11 00:04:45.229: INFO: 13/28 pods finished STEP: Delete "local-pvlhjcv" and create a new PV for same local volume storage STEP: Delete "local-pvg7l5j" and create a new PV for same local volume storage STEP: Delete "local-pv5ztj7" and create a new PV for same local volume storage STEP: Delete "pvc-bbbd406d-47ed-4a37-bab9-ec50b103f109" and create a new PV for same local volume storage Jun 11 00:04:50.213: INFO: Deleting pod pod-bbc02331-f1ad-4151-91a8-a9a96afa4d1f Jun 11 00:04:50.219: INFO: Deleting PersistentVolumeClaim "pvc-mzstb" Jun 11 00:04:50.223: INFO: Deleting PersistentVolumeClaim "pvc-jqlq5" Jun 11 00:04:50.227: INFO: Deleting PersistentVolumeClaim "pvc-mlckx" Jun 11 00:04:50.230: INFO: 14/28 pods finished STEP: Delete "local-pvvh6ns" and create a new PV for same local volume storage STEP: Delete "local-pvp8blh" and create a new PV for same local volume storage STEP: Delete "local-pvk5bkb" and create a new PV for same local volume storage Jun 11 00:04:51.212: INFO: Deleting pod pod-ead5c407-86b9-45b4-9be3-3d05e1147aca Jun 11 00:04:51.218: INFO: Deleting PersistentVolumeClaim "pvc-xv4ms" Jun 11 00:04:51.222: INFO: Deleting PersistentVolumeClaim "pvc-4js2z" Jun 11 00:04:51.226: INFO: Deleting PersistentVolumeClaim "pvc-shk9g" Jun 11 00:04:51.230: INFO: 15/28 pods finished STEP: Delete "local-pvknqx2" and create a new PV for same local volume storage STEP: Delete "local-pvlbkfz" and create a new PV for same local volume storage STEP: Delete "local-pvzfkx8" and create a new PV for same local volume storage Jun 11 00:04:53.211: INFO: Deleting pod pod-e2a63fd1-6d5c-4394-be79-6bdd745c10d4 Jun 11 00:04:53.218: INFO: Deleting PersistentVolumeClaim "pvc-dkznm" Jun 11 00:04:53.222: INFO: Deleting PersistentVolumeClaim "pvc-gc8jk" Jun 11 00:04:53.226: INFO: Deleting PersistentVolumeClaim "pvc-z785r" Jun 11 00:04:53.229: INFO: 16/28 pods finished STEP: Delete "local-pvh8q4p" and create a new PV for same local volume storage STEP: Delete "local-pvcvk5r" and create a new PV for same local volume storage STEP: Delete "local-pv725td" and create a new PV for same local volume storage STEP: Delete "pvc-bbbd406d-47ed-4a37-bab9-ec50b103f109" and create a new PV for same local volume storage STEP: Delete "pvc-bbbd406d-47ed-4a37-bab9-ec50b103f109" and create a new PV for same local volume storage Jun 11 00:04:55.213: INFO: Deleting pod pod-11b42992-a476-4f71-8b19-6ee8feef7671 Jun 11 00:04:55.219: INFO: Deleting PersistentVolumeClaim "pvc-6slzf" Jun 11 00:04:55.223: INFO: Deleting PersistentVolumeClaim "pvc-srvhr" Jun 11 00:04:55.226: INFO: Deleting PersistentVolumeClaim "pvc-n6pdd" Jun 11 00:04:55.231: INFO: 17/28 pods finished STEP: Delete "local-pvkf729" and create a new PV for same local volume storage STEP: Delete "local-pvctff2" and create a new PV for same local volume storage STEP: Delete "local-pvwbp9z" and create a new PV for same local volume storage Jun 11 00:04:59.213: INFO: Deleting pod pod-688f8387-f277-42a6-93fa-3481ee5eb0b6 Jun 11 00:04:59.221: INFO: Deleting PersistentVolumeClaim "pvc-jxn6q" Jun 11 00:04:59.225: INFO: Deleting PersistentVolumeClaim "pvc-j8ktw" Jun 11 00:04:59.228: INFO: Deleting PersistentVolumeClaim "pvc-zxd8r" Jun 11 00:04:59.232: INFO: 18/28 pods finished Jun 11 00:04:59.232: INFO: Deleting pod pod-f1b6f0ce-5b5d-4db7-8ff1-14ab5b874d45 Jun 11 00:04:59.238: INFO: Deleting PersistentVolumeClaim 
"pvc-9d2nw" STEP: Delete "local-pvqxgzn" and create a new PV for same local volume storage Jun 11 00:04:59.242: INFO: Deleting PersistentVolumeClaim "pvc-pbxnb" Jun 11 00:04:59.245: INFO: Deleting PersistentVolumeClaim "pvc-qsms7" Jun 11 00:04:59.249: INFO: 19/28 pods finished STEP: Delete "local-pvwz995" and create a new PV for same local volume storage STEP: Delete "local-pvdg9cg" and create a new PV for same local volume storage STEP: Delete "local-pvm7lxg" and create a new PV for same local volume storage STEP: Delete "local-pv4d7br" and create a new PV for same local volume storage STEP: Delete "local-pvq8wpz" and create a new PV for same local volume storage STEP: Delete "pvc-fb637a56-526c-4041-8ac7-84b4e8a1f0e8" and create a new PV for same local volume storage STEP: Delete "pvc-fb637a56-526c-4041-8ac7-84b4e8a1f0e8" and create a new PV for same local volume storage Jun 11 00:05:09.214: INFO: Deleting pod pod-499d0c18-53af-44c3-97a9-ac67e9e84163 Jun 11 00:05:09.223: INFO: Deleting PersistentVolumeClaim "pvc-t4wdw" Jun 11 00:05:09.227: INFO: Deleting PersistentVolumeClaim "pvc-r84c6" Jun 11 00:05:09.231: INFO: Deleting PersistentVolumeClaim "pvc-cgq4m" Jun 11 00:05:09.234: INFO: 20/28 pods finished Jun 11 00:05:09.234: INFO: Deleting pod pod-b99ac35a-1cb2-47d3-a934-daf97aa921bd Jun 11 00:05:09.242: INFO: Deleting PersistentVolumeClaim "pvc-pdtjk" STEP: Delete "local-pvgfb2g" and create a new PV for same local volume storage Jun 11 00:05:09.245: INFO: Deleting PersistentVolumeClaim "pvc-8jfbg" Jun 11 00:05:09.249: INFO: Deleting PersistentVolumeClaim "pvc-xrvfx" Jun 11 00:05:09.253: INFO: 21/28 pods finished STEP: Delete "local-pvwx297" and create a new PV for same local volume storage STEP: Delete "local-pvnzjm5" and create a new PV for same local volume storage STEP: Delete "local-pvcfjbf" and create a new PV for same local volume storage STEP: Delete "local-pvm2ttl" and create a new PV for same local volume storage STEP: Delete "local-pvfzsc7" and create a new PV for same local volume storage Jun 11 00:05:11.212: INFO: Deleting pod pod-c39eb6a2-d053-478c-b168-282622a403ba Jun 11 00:05:11.220: INFO: Deleting PersistentVolumeClaim "pvc-57c58" Jun 11 00:05:11.224: INFO: Deleting PersistentVolumeClaim "pvc-gtnxh" Jun 11 00:05:11.229: INFO: Deleting PersistentVolumeClaim "pvc-s689w" Jun 11 00:05:11.233: INFO: 22/28 pods finished STEP: Delete "local-pvk6889" and create a new PV for same local volume storage STEP: Delete "local-pvb8gqj" and create a new PV for same local volume storage STEP: Delete "local-pvk9b8m" and create a new PV for same local volume storage Jun 11 00:05:13.211: INFO: Deleting pod pod-38d1ade9-db31-4dcc-be54-01110a250c8c Jun 11 00:05:13.218: INFO: Deleting PersistentVolumeClaim "pvc-zr55d" Jun 11 00:05:13.222: INFO: Deleting PersistentVolumeClaim "pvc-7zxqf" Jun 11 00:05:13.226: INFO: Deleting PersistentVolumeClaim "pvc-kn9vv" Jun 11 00:05:13.229: INFO: 23/28 pods finished STEP: Delete "local-pvt6jtc" and create a new PV for same local volume storage STEP: Delete "local-pv8d9rg" and create a new PV for same local volume storage STEP: Delete "local-pvxqrzz" and create a new PV for same local volume storage STEP: Delete "pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260" and create a new PV for same local volume storage STEP: Delete "pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260" and create a new PV for same local volume storage Jun 11 00:05:15.216: INFO: Deleting pod pod-9068c270-1fa5-4d43-a470-3a6659acfd84 Jun 11 00:05:15.222: INFO: Deleting PersistentVolumeClaim "pvc-hnbh6" Jun 11 
00:05:15.225: INFO: Deleting PersistentVolumeClaim "pvc-6j7cb" Jun 11 00:05:15.229: INFO: Deleting PersistentVolumeClaim "pvc-q5cf2" Jun 11 00:05:15.233: INFO: 24/28 pods finished STEP: Delete "local-pvdt4fw" and create a new PV for same local volume storage STEP: Delete "local-pvfv9fg" and create a new PV for same local volume storage STEP: Delete "local-pvh29gr" and create a new PV for same local volume storage Jun 11 00:05:19.213: INFO: Deleting pod pod-b5b187fb-c733-4ebe-bae9-23085785aefc Jun 11 00:05:19.220: INFO: Deleting PersistentVolumeClaim "pvc-pp6wp" Jun 11 00:05:19.224: INFO: Deleting PersistentVolumeClaim "pvc-chwfr" Jun 11 00:05:19.228: INFO: Deleting PersistentVolumeClaim "pvc-r4bwq" Jun 11 00:05:19.232: INFO: 25/28 pods finished STEP: Delete "local-pvl4qf4" and create a new PV for same local volume storage STEP: Delete "local-pvpvhbm" and create a new PV for same local volume storage STEP: Delete "local-pvmzr55" and create a new PV for same local volume storage STEP: Delete "local-pv65xw4" and create a new PV for same local volume storage Jun 11 00:05:29.212: INFO: Deleting pod pod-aa5e3d0c-0d6e-46a3-a945-6102af692b4a Jun 11 00:05:29.219: INFO: Deleting PersistentVolumeClaim "pvc-nnqcc" Jun 11 00:05:29.223: INFO: Deleting PersistentVolumeClaim "pvc-z4hb4" Jun 11 00:05:29.227: INFO: Deleting PersistentVolumeClaim "pvc-lhkgr" Jun 11 00:05:29.231: INFO: 26/28 pods finished STEP: Delete "local-pvdzkh6" and create a new PV for same local volume storage STEP: Delete "local-pvz7pst" and create a new PV for same local volume storage STEP: Delete "local-pvkm6wc" and create a new PV for same local volume storage Jun 11 00:05:31.210: INFO: Deleting pod pod-807321d2-f0aa-4691-953f-521af13e6177 Jun 11 00:05:31.216: INFO: Deleting PersistentVolumeClaim "pvc-4jv5s" Jun 11 00:05:31.220: INFO: Deleting PersistentVolumeClaim "pvc-r5npr" Jun 11 00:05:31.224: INFO: Deleting PersistentVolumeClaim "pvc-4rtjt" Jun 11 00:05:31.227: INFO: 27/28 pods finished STEP: Delete "local-pvq6m65" and create a new PV for same local volume storage STEP: Delete "local-pvtgwsb" and create a new PV for same local volume storage STEP: Delete "local-pvgbwkd" and create a new PV for same local volume storage STEP: Delete "local-pvkpxqw" and create a new PV for same local volume storage Jun 11 00:05:33.211: INFO: Deleting pod pod-dd435f6a-1046-4a81-a3b8-95dcc180d680 Jun 11 00:05:33.218: INFO: Deleting PersistentVolumeClaim "pvc-2pdkl" Jun 11 00:05:33.221: INFO: Deleting PersistentVolumeClaim "pvc-kkkpd" Jun 11 00:05:33.224: INFO: Deleting PersistentVolumeClaim "pvc-wnm8x" Jun 11 00:05:33.228: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:519 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "node1" STEP: Cleaning up PVC and PV Jun 11 00:05:33.228: INFO: pvc is nil Jun 11 00:05:33.228: INFO: Deleting PersistentVolume "local-pv4bx8v" STEP: Cleaning up PVC and PV Jun 11 00:05:33.231: INFO: pvc is nil Jun 11 00:05:33.231: INFO: Deleting PersistentVolume "local-pvvrgf8" STEP: Cleaning up PVC and PV Jun 11 00:05:33.235: INFO: pvc is nil Jun 11 00:05:33.235: INFO: Deleting PersistentVolume "local-pvfsp9d" STEP: Cleaning up PVC and PV Jun 11 00:05:33.238: INFO: pvc is nil Jun 11 00:05:33.238: INFO: Deleting PersistentVolume "local-pvcf82b" STEP: Cleaning up PVC and PV Jun 11 00:05:33.241: INFO: pvc is nil Jun 11 
00:05:33.241: INFO: Deleting PersistentVolume "local-pvgmls6" STEP: Cleaning up PVC and PV Jun 11 00:05:33.245: INFO: pvc is nil Jun 11 00:05:33.245: INFO: Deleting PersistentVolume "local-pvfdsgp" STEP: Cleaning up PVC and PV Jun 11 00:05:33.248: INFO: pvc is nil Jun 11 00:05:33.248: INFO: Deleting PersistentVolume "local-pvt47wh" STEP: Cleaning up PVC and PV Jun 11 00:05:33.252: INFO: pvc is nil Jun 11 00:05:33.252: INFO: Deleting PersistentVolume "local-pv6l9st" STEP: Cleaning up PVC and PV Jun 11 00:05:33.255: INFO: pvc is nil Jun 11 00:05:33.255: INFO: Deleting PersistentVolume "local-pvspzsq" STEP: Cleaning up PVC and PV Jun 11 00:05:33.259: INFO: pvc is nil Jun 11 00:05:33.259: INFO: Deleting PersistentVolume "local-pvsvnh8" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-26f795e1-f9d8-4b8a-919f-7eb9eb84b56c" Jun 11 00:05:33.263: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-26f795e1-f9d8-4b8a-919f-7eb9eb84b56c"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:33.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:33.380: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-26f795e1-f9d8-4b8a-919f-7eb9eb84b56c] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:33.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-78d54473-5b35-4dc0-9819-bf43e8bf4777" Jun 11 00:05:33.474: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-78d54473-5b35-4dc0-9819-bf43e8bf4777"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:33.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:33.571: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-78d54473-5b35-4dc0-9819-bf43e8bf4777] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:33.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d420a0e7-0111-4c98-a9fd-2597d29c545f" Jun 11 00:05:33.658: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d420a0e7-0111-4c98-a9fd-2597d29c545f"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:33.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:33.782: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d420a0e7-0111-4c98-a9fd-2597d29c545f] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:33.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-e83949ea-e1a0-470d-be71-6170bd1da9c5" Jun 11 00:05:33.883: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e83949ea-e1a0-470d-be71-6170bd1da9c5"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:33.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:33.983: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e83949ea-e1a0-470d-be71-6170bd1da9c5] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:33.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c50c0242-e291-48dd-b487-f7a959f9ffb6" Jun 11 00:05:34.086: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c50c0242-e291-48dd-b487-f7a959f9ffb6"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:34.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:34.187: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c50c0242-e291-48dd-b487-f7a959f9ffb6] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:34.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-25f372fd-fc23-4a00-a413-5d08f47985dd" Jun 11 00:05:34.278: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-25f372fd-fc23-4a00-a413-5d08f47985dd"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:34.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:34.381: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-25f372fd-fc23-4a00-a413-5d08f47985dd] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:34.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-bedc1b10-dba6-4f39-96ea-ddfd0ed6cc81" Jun 11 00:05:34.474: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-bedc1b10-dba6-4f39-96ea-ddfd0ed6cc81"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:34.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:34.586: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bedc1b10-dba6-4f39-96ea-ddfd0ed6cc81] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:34.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-19f7bcbd-b305-48ef-97bf-24563d0763af" Jun 11 00:05:34.680: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-19f7bcbd-b305-48ef-97bf-24563d0763af"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:34.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:34.782: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-19f7bcbd-b305-48ef-97bf-24563d0763af] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:34.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-2c848609-9414-46f5-a268-245331c8a044" Jun 11 00:05:34.877: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2c848609-9414-46f5-a268-245331c8a044"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:34.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:34.976: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2c848609-9414-46f5-a268-245331c8a044] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:34.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-e4e5a403-2029-46fd-a06a-5021308093e2" Jun 11 00:05:35.075: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e4e5a403-2029-46fd-a06a-5021308093e2"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:35.174: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e4e5a403-2029-46fd-a06a-5021308093e2] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node1-n5bf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.174: INFO: >>> kubeConfig: /root/.kube/config 
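The unmount/remove pairs logged above all run inside the hostexec pod's agnhost container, entering the host mount namespace with nsenter before calling umount and rm. A minimal sketch of one such pair issued through kubectl exec instead of the framework's ExecWithOptions helper follows; the namespace, pod, container, and path are taken from this run, and the helper function itself is hypothetical.

// Hypothetical helper: re-issues one umount/rm pair from the cleanup above
// through kubectl exec rather than the e2e framework's ExecWithOptions.
package main

import (
    "fmt"
    "os/exec"
)

func cleanupLocalVolume(ns, pod, container, path string) error {
    for _, script := range []string{
        fmt.Sprintf("umount %q", path), // unmount the tmpfs mount point
        fmt.Sprintf("rm -r %s", path),  // then remove the test directory
    } {
        out, err := exec.Command(
            "kubectl", "exec", "-n", ns, pod, "-c", container, "--",
            "nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--", "sh", "-c", script,
        ).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%q failed: %v: %s", script, err, out)
        }
    }
    return nil
}

func main() {
    // Names and path below are the ones from this run.
    if err := cleanupLocalVolume(
        "persistent-local-volumes-test-8899",
        "hostexec-node1-n5bf8",
        "agnhost-container",
        "/tmp/local-volume-test-26f795e1-f9d8-4b8a-919f-7eb9eb84b56c",
    ); err != nil {
        fmt.Println(err)
    }
}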
STEP: Cleaning up 10 local volumes on node "node2" STEP: Cleaning up PVC and PV Jun 11 00:05:35.267: INFO: pvc is nil Jun 11 00:05:35.267: INFO: Deleting PersistentVolume "local-pvlpklm" STEP: Cleaning up PVC and PV Jun 11 00:05:35.273: INFO: pvc is nil Jun 11 00:05:35.273: INFO: Deleting PersistentVolume "local-pv782z8" STEP: Cleaning up PVC and PV Jun 11 00:05:35.277: INFO: pvc is nil Jun 11 00:05:35.277: INFO: Deleting PersistentVolume "local-pvkr557" STEP: Cleaning up PVC and PV Jun 11 00:05:35.281: INFO: pvc is nil Jun 11 00:05:35.281: INFO: Deleting PersistentVolume "local-pv9sdsh" STEP: Cleaning up PVC and PV Jun 11 00:05:35.285: INFO: pvc is nil Jun 11 00:05:35.285: INFO: Deleting PersistentVolume "local-pvx89tt" STEP: Cleaning up PVC and PV Jun 11 00:05:35.288: INFO: pvc is nil Jun 11 00:05:35.288: INFO: Deleting PersistentVolume "local-pv95zqw" STEP: Cleaning up PVC and PV Jun 11 00:05:35.291: INFO: pvc is nil Jun 11 00:05:35.291: INFO: Deleting PersistentVolume "local-pv6lln2" STEP: Cleaning up PVC and PV Jun 11 00:05:35.295: INFO: pvc is nil Jun 11 00:05:35.295: INFO: Deleting PersistentVolume "local-pv5bmjx" STEP: Cleaning up PVC and PV Jun 11 00:05:35.298: INFO: pvc is nil Jun 11 00:05:35.298: INFO: Deleting PersistentVolume "local-pvzlfr8" STEP: Cleaning up PVC and PV Jun 11 00:05:35.302: INFO: pvc is nil Jun 11 00:05:35.302: INFO: Deleting PersistentVolume "local-pvm7k29" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-255f7fbf-a15d-4896-9864-6d46ff85465b" Jun 11 00:05:35.305: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-255f7fbf-a15d-4896-9864-6d46ff85465b"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:35.404: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-255f7fbf-a15d-4896-9864-6d46ff85465b] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-cad9189e-9a8c-45f0-b565-ca347df5b35c" Jun 11 00:05:35.488: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-cad9189e-9a8c-45f0-b565-ca347df5b35c"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:35.586: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cad9189e-9a8c-45f0-b565-ca347df5b35c] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-5e9892ac-1047-4627-a1b5-7d51b5aa1d0f" Jun 11 00:05:35.676: INFO: ExecWithOptions 
{Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5e9892ac-1047-4627-a1b5-7d51b5aa1d0f"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:35.773: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5e9892ac-1047-4627-a1b5-7d51b5aa1d0f] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-420352c9-4c5f-4d24-8750-eaf099423a0f" Jun 11 00:05:35.855: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-420352c9-4c5f-4d24-8750-eaf099423a0f"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:35.962: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-420352c9-4c5f-4d24-8750-eaf099423a0f] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:35.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-24ed5a40-2c91-4703-b824-d7531d9e6794" Jun 11 00:05:36.049: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-24ed5a40-2c91-4703-b824-d7531d9e6794"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:36.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:36.157: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-24ed5a40-2c91-4703-b824-d7531d9e6794] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:36.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-94eddc8c-47b4-44a6-bf99-8667d36fc2b9" Jun 11 00:05:36.247: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-94eddc8c-47b4-44a6-bf99-8667d36fc2b9"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:36.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:36.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-94eddc8c-47b4-44a6-bf99-8667d36fc2b9] 
Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:36.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-a55730a7-05d9-43e7-bfcf-b1782abcf05f" Jun 11 00:05:36.535: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a55730a7-05d9-43e7-bfcf-b1782abcf05f"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:36.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:36.772: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a55730a7-05d9-43e7-bfcf-b1782abcf05f] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:36.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-9120109d-aadc-454d-ad07-5af24861cfe0" Jun 11 00:05:36.954: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9120109d-aadc-454d-ad07-5af24861cfe0"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:36.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.096: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9120109d-aadc-454d-ad07-5af24861cfe0] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ebefec66-2823-47d7-924b-0cf1d8e203e9" Jun 11 00:05:37.196: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ebefec66-2823-47d7-924b-0cf1d8e203e9"] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.284: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ebefec66-2823-47d7-924b-0cf1d8e203e9] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-cf5ec20a-3e38-4d29-a6cd-66feb2d0d44e" Jun 11 00:05:37.363: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-cf5ec20a-3e38-4d29-a6cd-66feb2d0d44e"] Namespace:persistent-local-volumes-test-8899 
PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.454: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cf5ec20a-3e38-4d29-a6cd-66feb2d0d44e] Namespace:persistent-local-volumes-test-8899 PodName:hostexec-node2-djgjz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.454: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:37.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8899" for this suite. • [SLOW TEST:105.241 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":-1,"completed":3,"skipped":52,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:12.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Jun 11 00:04:22.498: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-089223f8-125c-4d20-845f-cb3238e88cbc] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:22.498: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:22.609: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8f58c516-ef50-45d7-a75a-65419b6138ef] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:22.609: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:22.698: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-bba005c7-7e32-4b84-94fa-691a751003b0] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:22.698: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:22.794: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f676ee20-9d42-4161-85d0-8ef2a33303b7] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:22.794: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:22.882: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e2f52722-a7a6-4f40-9c1a-a805e4ac4c11] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:22.882: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:23.301: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3d56438a-7158-4e5e-824e-e96d5c620aa0] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:23.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:04:23.391: INFO: Creating a PV followed by a PVC Jun 11 00:04:23.398: INFO: Creating a PV followed by a PVC Jun 11 00:04:23.404: INFO: Creating a PV followed by a PVC Jun 11 00:04:23.411: INFO: Creating a PV followed by a PVC Jun 11 00:04:23.416: INFO: Creating a PV followed by a PVC Jun 11 00:04:23.422: INFO: Creating a PV followed by a PVC Jun 11 00:04:33.464: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Jun 11 00:04:45.486: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c2169a1a-4f92-4b01-b914-cb0b202ed301] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:45.486: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:45.620: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-bdcc54a7-5778-44d0-b726-8b1b7c72b7f7] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:45.620: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:45.872: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e3bfa7ef-fe45-42d2-afb9-7dc89a0f7068] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:45.872: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:46.056: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-4d8f00ab-8198-41fa-93e3-35ca0496b3a9] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:46.056: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:46.174: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-45dd4509-9b10-4339-b922-7fe184c861e9] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:46.174: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:46.279: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2b5e185a-b417-4dce-a316-b84f58f3bc78] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:04:46.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:04:46.841: INFO: Creating a PV followed by a PVC Jun 11 00:04:46.850: INFO: Creating a PV followed by a PVC Jun 11 00:04:46.855: INFO: Creating a PV followed by a PVC Jun 11 00:04:46.864: INFO: Creating a PV followed by a PVC Jun 11 00:04:46.872: INFO: Creating a PV followed by a PVC Jun 11 00:04:46.877: INFO: Creating a PV followed by a PVC Jun 11 00:04:56.925: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 STEP: Creating a StatefulSet with pod affinity on nodes Jun 11 00:04:56.932: INFO: Found 0 stateful pods, waiting for 3 Jun 11 00:05:06.937: INFO: Found 1 stateful pods, waiting for 3 Jun 11 00:05:16.937: INFO: Found 2 stateful pods, waiting for 3 Jun 11 00:05:26.937: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Jun 11 00:05:26.937: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Jun 11 00:05:26.937: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 11 00:05:36.936: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Jun 11 00:05:36.936: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Jun 11 00:05:36.936: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Jun 11 00:05:36.939: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Jun 11 00:05:36.942: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (2.407725ms) Jun 11 00:05:36.942: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-0] to have phase Bound Jun 11 00:05:36.944: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-0 found and phase=Bound (2.040109ms) Jun 11 00:05:36.944: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-1] to have phase Bound Jun 11 00:05:36.946: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-1 found 
and phase=Bound (1.85677ms) Jun 11 00:05:36.946: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Jun 11 00:05:36.948: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (2.080649ms) Jun 11 00:05:36.948: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Jun 11 00:05:36.950: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (1.877772ms) Jun 11 00:05:36.950: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-2] to have phase Bound Jun 11 00:05:36.952: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-2 found and phase=Bound (2.145585ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Jun 11 00:05:36.952: INFO: Deleting PersistentVolumeClaim "pvc-6nx6l" Jun 11 00:05:36.957: INFO: Deleting PersistentVolume "local-pvtlldb" STEP: Cleaning up PVC and PV Jun 11 00:05:36.961: INFO: Deleting PersistentVolumeClaim "pvc-94zsg" Jun 11 00:05:36.965: INFO: Deleting PersistentVolume "local-pvdlpl5" STEP: Cleaning up PVC and PV Jun 11 00:05:36.968: INFO: Deleting PersistentVolumeClaim "pvc-dt6cg" Jun 11 00:05:36.974: INFO: Deleting PersistentVolume "local-pv4v9nc" STEP: Cleaning up PVC and PV Jun 11 00:05:36.977: INFO: Deleting PersistentVolumeClaim "pvc-s5qmx" Jun 11 00:05:36.981: INFO: Deleting PersistentVolume "local-pv6ch7t" STEP: Cleaning up PVC and PV Jun 11 00:05:36.985: INFO: Deleting PersistentVolumeClaim "pvc-z6ds6" Jun 11 00:05:36.988: INFO: Deleting PersistentVolume "local-pv52w4j" STEP: Cleaning up PVC and PV Jun 11 00:05:36.992: INFO: Deleting PersistentVolumeClaim "pvc-rfxmd" Jun 11 00:05:36.996: INFO: Deleting PersistentVolume "local-pvpxlb8" STEP: Removing the test directory Jun 11 00:05:36.999: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-089223f8-125c-4d20-845f-cb3238e88cbc] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:36.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.094: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8f58c516-ef50-45d7-a75a-65419b6138ef] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.193: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bba005c7-7e32-4b84-94fa-691a751003b0] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.282: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f676ee20-9d42-4161-85d0-8ef2a33303b7] 
Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e2f52722-a7a6-4f40-9c1a-a805e4ac4c11] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.461: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3d56438a-7158-4e5e-824e-e96d5c620aa0] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node1-n47g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Jun 11 00:05:37.552: INFO: Deleting PersistentVolumeClaim "pvc-qtctb" Jun 11 00:05:37.557: INFO: Deleting PersistentVolume "local-pvnkl4v" STEP: Cleaning up PVC and PV Jun 11 00:05:37.562: INFO: Deleting PersistentVolumeClaim "pvc-pxk78" Jun 11 00:05:37.566: INFO: Deleting PersistentVolume "local-pvzb7sq" STEP: Cleaning up PVC and PV Jun 11 00:05:37.570: INFO: Deleting PersistentVolumeClaim "pvc-jb6jg" Jun 11 00:05:37.573: INFO: Deleting PersistentVolume "local-pvqn4w7" STEP: Cleaning up PVC and PV Jun 11 00:05:37.578: INFO: Deleting PersistentVolumeClaim "pvc-6nl76" Jun 11 00:05:37.582: INFO: Deleting PersistentVolume "local-pv7rd7f" STEP: Cleaning up PVC and PV Jun 11 00:05:37.586: INFO: Deleting PersistentVolumeClaim "pvc-n8gzc" Jun 11 00:05:37.590: INFO: Deleting PersistentVolume "local-pvlsbj4" STEP: Cleaning up PVC and PV Jun 11 00:05:37.594: INFO: Deleting PersistentVolumeClaim "pvc-tdvt5" Jun 11 00:05:37.597: INFO: Deleting PersistentVolume "local-pvchhdt" STEP: Removing the test directory Jun 11 00:05:37.601: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c2169a1a-4f92-4b01-b914-cb0b202ed301] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.690: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bdcc54a7-5778-44d0-b726-8b1b7c72b7f7] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.772: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e3bfa7ef-fe45-42d2-afb9-7dc89a0f7068] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 
00:05:37.852: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4d8f00ab-8198-41fa-93e3-35ca0496b3a9] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:37.963: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-45dd4509-9b10-4339-b922-7fe184c861e9] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:37.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:38.088: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2b5e185a-b417-4dce-a316-b84f58f3bc78] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-node2-2kcvn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:38.088: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:38.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5202" for this suite. • [SLOW TEST:85.746 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod has affinity","total":-1,"completed":3,"skipped":53,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:38.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 11 00:05:38.258: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:38.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-6323" for this suite. 
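The StatefulSet-with-pod-affinity spec above waits for the three local-volume-statefulset pods to become Running and then confirms each vol1-/vol2- claim is Bound. A rough sketch of the same checks with kubectl follows; it assumes the StatefulSet is named local-volume-statefulset (inferred from the pod names) and does not use the framework's own helpers.

// Rough sketch, not the framework helpers: wait for the StatefulSet rollout,
// then print each claim's phase (expected Bound, as logged above).
package main

import (
    "fmt"
    "os/exec"
)

func kubectl(args ...string) (string, error) {
    out, err := exec.Command("kubectl", args...).CombinedOutput()
    return string(out), err
}

func main() {
    ns := "persistent-local-volumes-test-5202"
    // Assumed StatefulSet name, inferred from the pod names in the log above.
    if out, err := kubectl("rollout", "status", "statefulset/local-volume-statefulset", "-n", ns); err != nil {
        fmt.Println(out, err)
        return
    }
    for i := 0; i < 3; i++ {
        for _, vol := range []string{"vol1", "vol2"} {
            pvc := fmt.Sprintf("%s-local-volume-statefulset-%d", vol, i)
            phase, _ := kubectl("get", "pvc", pvc, "-n", ns, "-o", "jsonpath={.status.phase}")
            fmt.Printf("%s: %s\n", pvc, phase) // expect Bound for all six claims
        }
    }
}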
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be able to delete a non-existent PD without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:37.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 11 00:05:39.636: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5135 PodName:hostexec-node2-vntz4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:39.636: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:39.721: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 11 00:05:39.721: INFO: exec node2: stdout: "0\n" Jun 11 00:05:39.721: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 11 00:05:39.721: INFO: exec node2: exit code: 0 Jun 11 00:05:39.721: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:39.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5135" for this suite. 
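The gce-localssd-scsi-fs setup above skips because the command "ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l" reports 0 entries on node2 (the pipeline still exits 0, since wc succeeds on empty input). A minimal sketch of that gating check run directly on a node follows; the framework routes it through a hostexec pod, which is omitted here.

// Minimal sketch of the gating check: count entries under the GCE local-SSD
// scsi-fs by-uuid directory and skip when there are none, as logged above.
package main

import (
    "fmt"
    "os/exec"
    "strconv"
    "strings"
)

func scsiFsLocalSSDCount() (int, error) {
    out, err := exec.Command("sh", "-c",
        "ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l").Output()
    if err != nil {
        return 0, err
    }
    return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
    if n, err := scsiFsLocalSSDCount(); err != nil || n < 1 {
        fmt.Println("skip: requires at least 1 scsi fs localSSD") // mirrors the skip reason above
    } else {
        fmt.Printf("found %d scsi fs localSSDs\n", n)
    }
}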
S [SKIPPING] in Spec Setup (BeforeEach) [2.143 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:12.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeStage after NodeUnstage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 STEP: Building a driver namespace object, basename csi-mock-volumes-2506 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:04:12.189: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2506-1673/csi-attacher Jun 11 00:04:12.192: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2506 Jun 11 00:04:12.192: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2506 Jun 11 00:04:12.194: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2506 Jun 11 00:04:12.197: INFO: creating *v1.Role: csi-mock-volumes-2506-1673/external-attacher-cfg-csi-mock-volumes-2506 Jun 11 00:04:12.200: INFO: creating *v1.RoleBinding: csi-mock-volumes-2506-1673/csi-attacher-role-cfg Jun 11 00:04:12.202: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2506-1673/csi-provisioner Jun 11 00:04:12.205: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2506 Jun 11 00:04:12.205: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2506 Jun 11 00:04:12.208: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2506 Jun 11 00:04:12.210: INFO: creating *v1.Role: csi-mock-volumes-2506-1673/external-provisioner-cfg-csi-mock-volumes-2506 Jun 11 00:04:12.213: INFO: creating *v1.RoleBinding: csi-mock-volumes-2506-1673/csi-provisioner-role-cfg Jun 11 00:04:12.216: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2506-1673/csi-resizer Jun 11 00:04:12.219: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2506 Jun 11 00:04:12.219: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2506 Jun 11 00:04:12.222: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2506 Jun 11 00:04:12.224: INFO: creating *v1.Role: csi-mock-volumes-2506-1673/external-resizer-cfg-csi-mock-volumes-2506 Jun 11 00:04:12.227: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-2506-1673/csi-resizer-role-cfg Jun 11 00:04:12.230: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2506-1673/csi-snapshotter Jun 11 00:04:12.232: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2506 Jun 11 00:04:12.233: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2506 Jun 11 00:04:12.235: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2506 Jun 11 00:04:12.237: INFO: creating *v1.Role: csi-mock-volumes-2506-1673/external-snapshotter-leaderelection-csi-mock-volumes-2506 Jun 11 00:04:12.240: INFO: creating *v1.RoleBinding: csi-mock-volumes-2506-1673/external-snapshotter-leaderelection Jun 11 00:04:12.244: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2506-1673/csi-mock Jun 11 00:04:12.246: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2506 Jun 11 00:04:12.249: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2506 Jun 11 00:04:12.251: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2506 Jun 11 00:04:12.255: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2506 Jun 11 00:04:12.258: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2506 Jun 11 00:04:12.260: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2506 Jun 11 00:04:12.263: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2506 Jun 11 00:04:12.266: INFO: creating *v1.StatefulSet: csi-mock-volumes-2506-1673/csi-mockplugin Jun 11 00:04:12.271: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2506 Jun 11 00:04:12.274: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2506" Jun 11 00:04:12.276: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2506 to register on node node1 STEP: Creating pod Jun 11 00:04:21.792: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:04:21.797: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-f2jt2] to have phase Bound Jun 11 00:04:21.799: INFO: PersistentVolumeClaim pvc-f2jt2 found but phase is Pending instead of Bound. 
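The CSI mock spec above provisions a claim and then polls it until it reports phase Bound. A minimal polling sketch using kubectl's jsonpath output, rather than the framework's own wait helper, follows; the claim name and namespace are from this run, and the interval and timeout are illustrative.

// Minimal polling sketch (not the framework's wait helper): poll the claim's
// phase via kubectl jsonpath until it reports Bound or the timeout elapses.
package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func waitForPVCBound(ns, pvc string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        out, err := exec.Command("kubectl", "get", "pvc", pvc, "-n", ns,
            "-o", "jsonpath={.status.phase}").Output()
        if err == nil && strings.TrimSpace(string(out)) == "Bound" {
            return nil
        }
        time.Sleep(2 * time.Second) // the log above polls on roughly a 2s interval
    }
    return fmt.Errorf("claim %s/%s did not reach phase Bound within %s", ns, pvc, timeout)
}

func main() {
    // Claim name and namespace are the ones from this run; timeout is illustrative.
    if err := waitForPVCBound("csi-mock-volumes-2506", "pvc-f2jt2", 5*time.Minute); err != nil {
        fmt.Println(err)
    }
}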
Jun 11 00:04:23.806: INFO: PersistentVolumeClaim pvc-f2jt2 found and phase=Bound (2.008735278s) Jun 11 00:04:23.822: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-f2jt2] to have phase Bound Jun 11 00:04:23.824: INFO: PersistentVolumeClaim pvc-f2jt2 found and phase=Bound (1.85856ms) Jun 11 00:04:35.829: INFO: Deleting pod "pvc-volume-tester-vpvxt" in namespace "csi-mock-volumes-2506" Jun 11 00:04:35.834: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vpvxt" to be fully deleted Jun 11 00:04:57.858: INFO: Deleting pod "pvc-volume-tester-7q4v7" in namespace "csi-mock-volumes-2506" Jun 11 00:04:57.863: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7q4v7" to be fully deleted STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-vpvxt Jun 11 00:05:04.884: INFO: Deleting pod "pvc-volume-tester-vpvxt" in namespace "csi-mock-volumes-2506" STEP: Deleting pod pvc-volume-tester-7q4v7 Jun 11 00:05:04.886: INFO: Deleting pod "pvc-volume-tester-7q4v7" in namespace "csi-mock-volumes-2506" STEP: Deleting claim pvc-f2jt2 Jun 11 00:05:04.894: INFO: Waiting up to 2m0s for PersistentVolume pvc-fb637a56-526c-4041-8ac7-84b4e8a1f0e8 to get deleted Jun 11 00:05:04.896: INFO: PersistentVolume pvc-fb637a56-526c-4041-8ac7-84b4e8a1f0e8 found and phase=Bound (1.934519ms) Jun 11 00:05:06.899: INFO: PersistentVolume pvc-fb637a56-526c-4041-8ac7-84b4e8a1f0e8 was removed STEP: Deleting storageclass csi-mock-volumes-2506-scv4qtb STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2506 STEP: Waiting for namespaces [csi-mock-volumes-2506] to vanish STEP: uninstalling csi mock driver Jun 11 00:05:12.910: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2506-1673/csi-attacher Jun 11 00:05:12.914: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2506 Jun 11 00:05:12.917: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2506 Jun 11 00:05:12.921: INFO: deleting *v1.Role: csi-mock-volumes-2506-1673/external-attacher-cfg-csi-mock-volumes-2506 Jun 11 00:05:12.925: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2506-1673/csi-attacher-role-cfg Jun 11 00:05:12.929: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2506-1673/csi-provisioner Jun 11 00:05:12.932: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2506 Jun 11 00:05:12.937: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2506 Jun 11 00:05:12.942: INFO: deleting *v1.Role: csi-mock-volumes-2506-1673/external-provisioner-cfg-csi-mock-volumes-2506 Jun 11 00:05:12.950: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2506-1673/csi-provisioner-role-cfg Jun 11 00:05:12.958: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2506-1673/csi-resizer Jun 11 00:05:12.964: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2506 Jun 11 00:05:12.968: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2506 Jun 11 00:05:12.971: INFO: deleting *v1.Role: csi-mock-volumes-2506-1673/external-resizer-cfg-csi-mock-volumes-2506 Jun 11 00:05:12.974: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2506-1673/csi-resizer-role-cfg Jun 11 00:05:12.978: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2506-1673/csi-snapshotter Jun 11 00:05:12.982: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2506 Jun 11 00:05:12.985: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2506 Jun 11 00:05:12.989: INFO: deleting 
*v1.Role: csi-mock-volumes-2506-1673/external-snapshotter-leaderelection-csi-mock-volumes-2506 Jun 11 00:05:12.993: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2506-1673/external-snapshotter-leaderelection Jun 11 00:05:12.997: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2506-1673/csi-mock Jun 11 00:05:13.000: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2506 Jun 11 00:05:13.005: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2506 Jun 11 00:05:13.008: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2506 Jun 11 00:05:13.011: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2506 Jun 11 00:05:13.014: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2506 Jun 11 00:05:13.018: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2506 Jun 11 00:05:13.022: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2506 Jun 11 00:05:13.025: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2506-1673/csi-mockplugin Jun 11 00:05:13.030: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2506 STEP: deleting the driver namespace: csi-mock-volumes-2506-1673 STEP: Waiting for namespaces [csi-mock-volumes-2506-1673] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:41.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:88.928 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeUnstage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 should call NodeStage after NodeUnstage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success","total":-1,"completed":5,"skipped":115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:39.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 STEP: Creating a pod to test downward API volume plugin Jun 11 00:05:39.850: INFO: Waiting up to 5m0s for pod "metadata-volume-906af7e8-5fb1-402d-9974-da37ceced3e2" in namespace "downward-api-8801" to be "Succeeded or Failed" Jun 11 00:05:39.853: INFO: Pod "metadata-volume-906af7e8-5fb1-402d-9974-da37ceced3e2": 
Phase="Pending", Reason="", readiness=false. Elapsed: 2.948858ms Jun 11 00:05:41.856: INFO: Pod "metadata-volume-906af7e8-5fb1-402d-9974-da37ceced3e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00611494s Jun 11 00:05:43.859: INFO: Pod "metadata-volume-906af7e8-5fb1-402d-9974-da37ceced3e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009579331s STEP: Saw pod success Jun 11 00:05:43.859: INFO: Pod "metadata-volume-906af7e8-5fb1-402d-9974-da37ceced3e2" satisfied condition "Succeeded or Failed" Jun 11 00:05:43.861: INFO: Trying to get logs from node node2 pod metadata-volume-906af7e8-5fb1-402d-9974-da37ceced3e2 container client-container: STEP: delete the pod Jun 11 00:05:43.877: INFO: Waiting for pod metadata-volume-906af7e8-5fb1-402d-9974-da37ceced3e2 to disappear Jun 11 00:05:43.879: INFO: Pod metadata-volume-906af7e8-5fb1-402d-9974-da37ceced3e2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:43.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8801" for this suite. • ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:38.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Jun 11 00:05:40.339: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-69e6f2f1-4a66-4417-884d-302a986a1422] Namespace:persistent-local-volumes-test-3848 PodName:hostexec-node1-cxm8z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:40.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:40.432: INFO: Creating a PV followed by a PVC Jun 11 00:05:40.438: INFO: Waiting for PV local-pv8k4fs to bind to PVC pvc-d4zdh Jun 11 00:05:40.438: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-d4zdh] to have phase Bound Jun 11 00:05:40.441: INFO: PersistentVolumeClaim pvc-d4zdh found but phase is Pending instead of Bound. 
Jun 11 00:05:42.445: INFO: PersistentVolumeClaim pvc-d4zdh found and phase=Bound (2.007225906s) Jun 11 00:05:42.445: INFO: Waiting up to 3m0s for PersistentVolume local-pv8k4fs to have phase Bound Jun 11 00:05:42.448: INFO: PersistentVolume local-pv8k4fs found and phase=Bound (2.561016ms) [It] should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 STEP: local-volume-type: dir Jun 11 00:05:42.463: INFO: Waiting up to 5m0s for pod "pod-19ecc898-bc00-4303-b868-b3132aa23d97" in namespace "persistent-local-volumes-test-3848" to be "Unschedulable" Jun 11 00:05:42.465: INFO: Pod "pod-19ecc898-bc00-4303-b868-b3132aa23d97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038639ms Jun 11 00:05:44.469: INFO: Pod "pod-19ecc898-bc00-4303-b868-b3132aa23d97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006208584s Jun 11 00:05:44.469: INFO: Pod "pod-19ecc898-bc00-4303-b868-b3132aa23d97" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Jun 11 00:05:44.469: INFO: Deleting PersistentVolumeClaim "pvc-d4zdh" Jun 11 00:05:44.473: INFO: Deleting PersistentVolume "local-pv8k4fs" STEP: Removing the test directory Jun 11 00:05:44.477: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-69e6f2f1-4a66-4417-884d-302a986a1422] Namespace:persistent-local-volumes-test-3848 PodName:hostexec-node1-cxm8z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:44.477: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:44.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3848" for this suite. 
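For context on the spec above: it binds a local PV whose nodeAffinity selects one node and then expects a pod placed on a different node to stay "Unschedulable". Below is a minimal sketch, assuming the k8s.io/api and k8s.io/apimachinery modules, of such a node-pinned local PersistentVolume; the object name, path, size, and storage class are illustrative, not taken from the suite.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-demo"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:    corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-demo"},
			},
			StorageClassName: "local-storage",
			// Consumers of this PV may only be scheduled onto node1; a pod
			// forced onto another node remains Pending, as seen in the log.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node1"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pv, "", "  ")
	fmt.Println(string(out))
}
```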
• [SLOW TEST:6.288 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":4,"skipped":78,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:44.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Jun 11 00:05:44.631: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Jun 11 00:05:44.636: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:44.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-4410" for this suite. 
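The PVC Protection spec that follows is skipped with "No default storage class found": provisioning a PVC without an explicit storageClassName needs a default StorageClass. A minimal sketch of that check, assuming client-go and a reachable cluster, using the well-known default-class annotation; the kubeconfig handling is illustrative.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sc := range scs.Items {
		// A StorageClass marked with this annotation is used for PVCs that
		// omit spec.storageClassName.
		if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
			fmt.Println("default StorageClass:", sc.Name)
			return
		}
	}
	fmt.Println("no default storage class found (the spec below skips in this case)")
}
```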
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PVC in active use by a pod is not removed immediately [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":101,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:43.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 11 00:05:45.936: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7570 PodName:hostexec-node1-f87hh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:45.936: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:46.039: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 11 00:05:46.039: INFO: exec node1: stdout: "0\n" Jun 11 00:05:46.039: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 11 00:05:46.039: INFO: exec node1: exit code: 0 Jun 11 00:05:46.039: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:46.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7570" for this suite. 
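The gce-localssd-scsi-fs specs below are skipped because the node has no SCSI fs local SSDs. The suite counts them by running "ls -1 ... | wc -l" through a hostexec pod; a minimal sketch of the same skip rule, assuming it is run directly on the node rather than via nsenter, is:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const ssdDir = "/mnt/disks/by-uuid/google-local-ssds-scsi-fs"

	entries, err := os.ReadDir(ssdDir)
	if err != nil || len(entries) == 0 {
		// Matches the log: a missing directory counts as zero local SSDs.
		fmt.Println("SKIP: requires at least 1 scsi fs localSSD")
		return
	}
	fmt.Printf("found %d scsi fs local SSD(s), test can proceed\n", len(entries))
}
```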
S [SKIPPING] in Spec Setup (BeforeEach) [2.159 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:44.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 11 00:05:46.723: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5389 PodName:hostexec-node1-ccxbc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:46.723: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:46.817: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 11 00:05:46.817: INFO: exec node1: stdout: "0\n" Jun 11 00:05:46.817: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 11 00:05:46.817: INFO: exec node1: exit code: 0 Jun 11 00:05:46.817: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:46.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5389" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.154 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:16.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701" Jun 11 00:05:26.139: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701 && dd if=/dev/zero of=/tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701/file] Namespace:persistent-local-volumes-test-4927 PodName:hostexec-node2-cv8bs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:26.139: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:26.473: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4927 PodName:hostexec-node2-cv8bs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:26.473: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:26.629: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701 && chmod o+rwx /tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701] Namespace:persistent-local-volumes-test-4927 PodName:hostexec-node2-cv8bs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:26.629: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:28.329: INFO: Creating a PV followed by a PVC Jun 11 00:05:28.335: INFO: Waiting for PV local-pvbn5xc to bind to PVC pvc-ns8mh Jun 11 00:05:28.336: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ns8mh] to have phase Bound Jun 11 00:05:28.338: INFO: PersistentVolumeClaim pvc-ns8mh found but phase is Pending instead of Bound. Jun 11 00:05:30.343: INFO: PersistentVolumeClaim pvc-ns8mh found but phase is Pending instead of Bound. Jun 11 00:05:32.346: INFO: PersistentVolumeClaim pvc-ns8mh found but phase is Pending instead of Bound. Jun 11 00:05:34.352: INFO: PersistentVolumeClaim pvc-ns8mh found but phase is Pending instead of Bound. Jun 11 00:05:36.355: INFO: PersistentVolumeClaim pvc-ns8mh found but phase is Pending instead of Bound. Jun 11 00:05:38.358: INFO: PersistentVolumeClaim pvc-ns8mh found but phase is Pending instead of Bound. Jun 11 00:05:40.361: INFO: PersistentVolumeClaim pvc-ns8mh found but phase is Pending instead of Bound. Jun 11 00:05:42.365: INFO: PersistentVolumeClaim pvc-ns8mh found and phase=Bound (14.029257342s) Jun 11 00:05:42.365: INFO: Waiting up to 3m0s for PersistentVolume local-pvbn5xc to have phase Bound Jun 11 00:05:42.367: INFO: PersistentVolume local-pvbn5xc found and phase=Bound (2.399892ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:05:48.394: INFO: pod "pod-4d6d69a4-08a6-47b6-a47c-732cb42a913d" created on Node "node2" STEP: Writing in pod1 Jun 11 00:05:48.395: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4927 PodName:pod-4d6d69a4-08a6-47b6-a47c-732cb42a913d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:05:48.395: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:48.904: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 11 00:05:48.904: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4927 PodName:pod-4d6d69a4-08a6-47b6-a47c-732cb42a913d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:05:48.904: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:49.289: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-4d6d69a4-08a6-47b6-a47c-732cb42a913d in namespace persistent-local-volumes-test-4927 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:05:49.295: INFO: Deleting PersistentVolumeClaim "pvc-ns8mh" Jun 11 00:05:49.299: INFO: Deleting PersistentVolume "local-pvbn5xc" Jun 11 00:05:49.303: 
INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701] Namespace:persistent-local-volumes-test-4927 PodName:hostexec-node2-cv8bs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:49.303: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:49.457: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4927 PodName:hostexec-node2-cv8bs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:49.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701/file Jun 11 00:05:49.553: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-4927 PodName:hostexec-node2-cv8bs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:49.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701 Jun 11 00:05:49.663: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7787009f-917c-4441-89cb-95e64fb20701] Namespace:persistent-local-volumes-test-4927 PodName:hostexec-node2-cv8bs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:49.663: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:49.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4927" for this suite. 
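The "blockfswithformat" volume above is a file-backed loop device formatted as ext4 and mounted under a temp directory, set up and torn down via hostexec commands. A minimal sketch of that sequence, assuming Linux, root privileges, and losetup/mkfs/mount on the PATH; the path and sizes are illustrative.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// run executes a command and aborts on failure, returning trimmed output.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	dir := "/tmp/local-volume-demo" // illustrative, not the suite's path
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	backing := filepath.Join(dir, "file")

	// Equivalent of: dd if=/dev/zero of=$dir/file bs=4096 count=5120 (20 MiB).
	f, err := os.Create(backing)
	if err != nil {
		log.Fatal(err)
	}
	if err := f.Truncate(20 << 20); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// losetup -f --show attaches the file and prints the device, e.g. /dev/loop0.
	loopDev := run("losetup", "-f", "--show", backing)
	run("mkfs", "-t", "ext4", loopDev)
	run("mount", "-t", "ext4", loopDev, dir)
	fmt.Println("mounted", loopDev, "on", dir)

	// Teardown, mirroring the AfterEach in the log:
	//   umount $dir && losetup -d $loopDev && rm -r $dir
}
```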
• [SLOW TEST:33.715 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":72,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:46.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 STEP: Creating configMap with name configmap-test-volume-3e64a029-c005-4d2a-8544-a75edf2eaeb0 STEP: Creating a pod to test consume configMaps Jun 11 00:05:46.142: INFO: Waiting up to 5m0s for pod "pod-configmaps-2207e65a-0a1e-4237-a6df-b004fe14b09f" in namespace "configmap-3883" to be "Succeeded or Failed" Jun 11 00:05:46.147: INFO: Pod "pod-configmaps-2207e65a-0a1e-4237-a6df-b004fe14b09f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.031554ms Jun 11 00:05:48.150: INFO: Pod "pod-configmaps-2207e65a-0a1e-4237-a6df-b004fe14b09f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00806609s Jun 11 00:05:50.154: INFO: Pod "pod-configmaps-2207e65a-0a1e-4237-a6df-b004fe14b09f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012039369s STEP: Saw pod success Jun 11 00:05:50.154: INFO: Pod "pod-configmaps-2207e65a-0a1e-4237-a6df-b004fe14b09f" satisfied condition "Succeeded or Failed" Jun 11 00:05:50.157: INFO: Trying to get logs from node node2 pod pod-configmaps-2207e65a-0a1e-4237-a6df-b004fe14b09f container agnhost-container: STEP: delete the pod Jun 11 00:05:50.171: INFO: Waiting for pod pod-configmaps-2207e65a-0a1e-4237-a6df-b004fe14b09f to disappear Jun 11 00:05:50.172: INFO: Pod pod-configmaps-2207e65a-0a1e-4237-a6df-b004fe14b09f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:50.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3883" for this suite. 
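The ConfigMap spec below mounts a ConfigMap volume with defaultMode 0644 into a pod running as non-root with an fsGroup. A minimal sketch of that pod shape, assuming the k8s.io/api types; the names, UID/GID values, and image tag are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0o644)
	uid, fsGroup := int64(1000), int64(2000)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"},
		Spec: corev1.PodSpec{
			// Run as a non-root user; fsGroup controls group ownership of the volume.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "registry.k8s.io/e2e-test-images/agnhost:2.32", // illustrative tag
				Command:      []string{"sh", "-c", "ls -ln /etc/demo && cat /etc/demo/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/demo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
						DefaultMode:          &mode, // 0644 shows up as 420 decimal in JSON
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```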
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":127,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:46.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 11 00:05:46.961: INFO: The status of Pod test-hostpath-type-j2f75 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:05:48.967: INFO: The status of Pod test-hostpath-type-j2f75 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:05:50.964: INFO: The status of Pod test-hostpath-type-j2f75 is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 11 00:05:50.966: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-9200 PodName:test-hostpath-type-j2f75 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:05:50.966: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:53.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-9200" for this suite. 
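The HostPathType spec below creates a character device with "mknod /mnt/test/achardev c 89 1" and then mounts it declaring type BlockDevice, so the kubelet's hostPath type check rejects it and the test waits for that error event. A minimal sketch of the mismatched volume, assuming the k8s.io/api types; pod name and image are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	wrongType := corev1.HostPathBlockDev // the path actually holds a char device

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-type-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "tester",
				Image:        "busybox", // illustrative image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "dev", MountPath: "/mnt/dev"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "dev",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/mnt/test/achardev",
						Type: &wrongType, // corev1.HostPathCharDev would match and mount cleanly
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```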
• [SLOW TEST:6.203 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev","total":-1,"completed":5,"skipped":150,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:53.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:05:53.165: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:53.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9089" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:26.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Jun 11 00:05:28.652: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0325be5d-e937-43a4-8f97-2d7242c9da79] 
Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:28.652: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:28.772: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-dee0b8ae-adae-4d63-8b1a-f168566df5fe] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:28.772: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:29.055: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1d25723b-4ae5-4c7f-8028-e5f2a7025b58] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:29.055: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:29.154: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ad1e79f7-69d1-4932-a57d-3bb8d0df76a7] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:29.154: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:29.249: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-94e73f9a-e205-4fb2-8eed-e93d19d578a5] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:29.249: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:29.348: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-551735a9-1937-4602-a8da-498285bda9e7] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:29.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:29.448: INFO: Creating a PV followed by a PVC Jun 11 00:05:29.455: INFO: Creating a PV followed by a PVC Jun 11 00:05:29.461: INFO: Creating a PV followed by a PVC Jun 11 00:05:29.467: INFO: Creating a PV followed by a PVC Jun 11 00:05:29.472: INFO: Creating a PV followed by a PVC Jun 11 00:05:29.478: INFO: Creating a PV followed by a PVC Jun 11 00:05:39.522: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Jun 11 00:05:41.539: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-bece94c5-5424-4f6b-ab24-51e3944b7c4d] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:41.539: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:41.745: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-54f4cbc9-e830-45f8-a4af-a812dd99058a] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:41.745: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:41.837: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-43a4204c-5068-4409-a997-85ec833589b4] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:41.837: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:41.962: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0aa21300-b789-4fa7-ab11-9d2c4b3ac97c] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:41.962: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:42.063: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-553d91fd-df5b-4aaa-8acf-64679afcff1b] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:42.063: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:42.222: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-78e43100-fd47-4c12-9ff6-88689986aede] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:42.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:42.384: INFO: Creating a PV followed by a PVC Jun 11 00:05:42.391: INFO: Creating a PV followed by a PVC Jun 11 00:05:42.396: INFO: Creating a PV followed by a PVC Jun 11 00:05:42.401: INFO: Creating a PV followed by a PVC Jun 11 00:05:42.407: INFO: Creating a PV followed by a PVC Jun 11 00:05:42.413: INFO: Creating a PV followed by a PVC Jun 11 00:05:52.472: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod has anti-affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Jun 11 00:05:52.472: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Jun 11 00:05:52.474: INFO: Deleting PersistentVolumeClaim "pvc-6tb42" Jun 11 00:05:52.479: INFO: Deleting PersistentVolume "local-pvzszdl" STEP: Cleaning up PVC and PV Jun 11 00:05:52.484: INFO: Deleting PersistentVolumeClaim "pvc-pjkpt" Jun 11 00:05:52.488: INFO: Deleting PersistentVolume "local-pv2tjnw" STEP: Cleaning up PVC and PV Jun 11 00:05:52.491: INFO: Deleting PersistentVolumeClaim "pvc-jzzjp" Jun 11 00:05:52.495: INFO: Deleting PersistentVolume "local-pv87ncq" STEP: Cleaning up PVC and PV Jun 11 00:05:52.498: INFO: Deleting PersistentVolumeClaim "pvc-ssg5f" Jun 11 00:05:52.501: INFO: Deleting PersistentVolume "local-pvxqk54" STEP: Cleaning up PVC and PV Jun 11 00:05:52.505: INFO: Deleting PersistentVolumeClaim "pvc-pq2hm" Jun 11 00:05:52.508: INFO: 
Deleting PersistentVolume "local-pvfjh8b" STEP: Cleaning up PVC and PV Jun 11 00:05:52.512: INFO: Deleting PersistentVolumeClaim "pvc-9kpgz" Jun 11 00:05:52.515: INFO: Deleting PersistentVolume "local-pv8vjn7" STEP: Removing the test directory Jun 11 00:05:52.519: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0325be5d-e937-43a4-8f97-2d7242c9da79] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:52.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:52.607: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dee0b8ae-adae-4d63-8b1a-f168566df5fe] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:52.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:52.693: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1d25723b-4ae5-4c7f-8028-e5f2a7025b58] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:52.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:52.779: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ad1e79f7-69d1-4932-a57d-3bb8d0df76a7] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:52.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:52.863: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-94e73f9a-e205-4fb2-8eed-e93d19d578a5] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:52.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:52.965: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-551735a9-1937-4602-a8da-498285bda9e7] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node1-hwxmz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:52.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Jun 11 00:05:53.056: INFO: Deleting PersistentVolumeClaim "pvc-72mzj" Jun 11 00:05:53.060: INFO: Deleting PersistentVolume "local-pvwwscx" STEP: Cleaning up PVC and PV Jun 11 00:05:53.064: INFO: Deleting PersistentVolumeClaim "pvc-7xmhq" Jun 11 00:05:53.068: INFO: Deleting PersistentVolume "local-pvc2hdt" STEP: Cleaning up PVC and PV Jun 11 00:05:53.071: INFO: Deleting PersistentVolumeClaim "pvc-n5cjf" Jun 11 00:05:53.075: INFO: Deleting PersistentVolume "local-pvkzxhm" STEP: Cleaning up PVC and PV Jun 11 00:05:53.078: INFO: Deleting PersistentVolumeClaim "pvc-lbl27" Jun 11 00:05:53.083: INFO: Deleting 
PersistentVolume "local-pvxjm6x" STEP: Cleaning up PVC and PV Jun 11 00:05:53.086: INFO: Deleting PersistentVolumeClaim "pvc-z9m5l" Jun 11 00:05:53.091: INFO: Deleting PersistentVolume "local-pvw7g6n" STEP: Cleaning up PVC and PV Jun 11 00:05:53.095: INFO: Deleting PersistentVolumeClaim "pvc-2rgww" Jun 11 00:05:53.098: INFO: Deleting PersistentVolume "local-pvmhl6s" STEP: Removing the test directory Jun 11 00:05:53.101: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bece94c5-5424-4f6b-ab24-51e3944b7c4d] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:53.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:53.184: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-54f4cbc9-e830-45f8-a4af-a812dd99058a] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:53.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:53.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-43a4204c-5068-4409-a997-85ec833589b4] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:53.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:53.366: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0aa21300-b789-4fa7-ab11-9d2c4b3ac97c] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:53.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:53.447: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-553d91fd-df5b-4aaa-8acf-64679afcff1b] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:53.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:05:53.531: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-78e43100-fd47-4c12-9ff6-88689986aede] Namespace:persistent-local-volumes-test-5140 PodName:hostexec-node2-2lttv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:53.531: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:53.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5140" for this suite. 
S [SKIPPING] [27.033 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:412 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:49.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-e9210ad3-ebac-409e-abb8-3736e07ef681" Jun 11 00:05:51.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e9210ad3-ebac-409e-abb8-3736e07ef681 && dd if=/dev/zero of=/tmp/local-volume-test-e9210ad3-ebac-409e-abb8-3736e07ef681/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-e9210ad3-ebac-409e-abb8-3736e07ef681/file] Namespace:persistent-local-volumes-test-4853 PodName:hostexec-node1-9qm97 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:51.873: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:51.987: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e9210ad3-ebac-409e-abb8-3736e07ef681/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4853 PodName:hostexec-node1-9qm97 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:51.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:52.227: INFO: Creating a PV followed by a PVC Jun 11 00:05:52.234: INFO: Waiting for PV local-pvb8rd6 to bind to PVC pvc-llz49 Jun 11 00:05:52.234: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-llz49] to have phase Bound Jun 11 00:05:52.236: INFO: PersistentVolumeClaim pvc-llz49 found but phase is Pending instead of Bound. Jun 11 00:05:54.241: INFO: PersistentVolumeClaim pvc-llz49 found but phase is Pending instead of Bound. Jun 11 00:05:56.244: INFO: PersistentVolumeClaim pvc-llz49 found but phase is Pending instead of Bound. 
Jun 11 00:05:58.248: INFO: PersistentVolumeClaim pvc-llz49 found and phase=Bound (6.013757324s) Jun 11 00:05:58.248: INFO: Waiting up to 3m0s for PersistentVolume local-pvb8rd6 to have phase Bound Jun 11 00:05:58.250: INFO: PersistentVolume local-pvb8rd6 found and phase=Bound (2.369076ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Jun 11 00:05:58.254: INFO: We don't set fsGroup on block device, skipped. [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:05:58.256: INFO: Deleting PersistentVolumeClaim "pvc-llz49" Jun 11 00:05:58.260: INFO: Deleting PersistentVolume "local-pvb8rd6" Jun 11 00:05:58.263: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e9210ad3-ebac-409e-abb8-3736e07ef681/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4853 PodName:hostexec-node1-9qm97 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:58.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-e9210ad3-ebac-409e-abb8-3736e07ef681/file Jun 11 00:05:58.364: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-4853 PodName:hostexec-node1-9qm97 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:58.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-e9210ad3-ebac-409e-abb8-3736e07ef681 Jun 11 00:05:58.570: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e9210ad3-ebac-409e-abb8-3736e07ef681] Namespace:persistent-local-volumes-test-4853 PodName:hostexec-node1-9qm97 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:58.570: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:05:58.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4853" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [8.899 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:58.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 11 00:05:58.861: INFO: Waiting up to 5m0s for pod "pod-80683011-6f4f-4d64-88de-0a5a509fa841" in namespace "emptydir-2000" to be "Succeeded or Failed" Jun 11 00:05:58.868: INFO: Pod "pod-80683011-6f4f-4d64-88de-0a5a509fa841": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422567ms Jun 11 00:06:00.873: INFO: Pod "pod-80683011-6f4f-4d64-88de-0a5a509fa841": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011454193s Jun 11 00:06:02.876: INFO: Pod "pod-80683011-6f4f-4d64-88de-0a5a509fa841": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014378391s STEP: Saw pod success Jun 11 00:06:02.876: INFO: Pod "pod-80683011-6f4f-4d64-88de-0a5a509fa841" satisfied condition "Succeeded or Failed" Jun 11 00:06:02.879: INFO: Trying to get logs from node node1 pod pod-80683011-6f4f-4d64-88de-0a5a509fa841 container test-container: STEP: delete the pod Jun 11 00:06:02.897: INFO: Waiting for pod pod-80683011-6f4f-4d64-88de-0a5a509fa841 to disappear Jun 11 00:06:02.900: INFO: Pod pod-80683011-6f4f-4d64-88de-0a5a509fa841 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:02.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2000" for this suite. 
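The EmptyDir spec above verifies that files written by a root container land with the pod's fsGroup as their group when the volume is tmpfs-backed. A minimal sketch of that pod shape, assuming the k8s.io/api types; the fsGroup value, names, and image are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsGroup := int64(123)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:    "writer",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "echo hi > /mnt/vol/new && ls -ln /mnt/vol"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "vol", MountPath: "/mnt/vol",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "vol",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```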
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":5,"skipped":129,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:28.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 STEP: Building a driver namespace object, basename csi-mock-volumes-7608 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 11 00:04:28.841: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7608-6515/csi-attacher Jun 11 00:04:28.844: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7608 Jun 11 00:04:28.844: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7608 Jun 11 00:04:28.847: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7608 Jun 11 00:04:28.850: INFO: creating *v1.Role: csi-mock-volumes-7608-6515/external-attacher-cfg-csi-mock-volumes-7608 Jun 11 00:04:28.853: INFO: creating *v1.RoleBinding: csi-mock-volumes-7608-6515/csi-attacher-role-cfg Jun 11 00:04:28.855: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7608-6515/csi-provisioner Jun 11 00:04:28.858: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7608 Jun 11 00:04:28.858: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7608 Jun 11 00:04:28.861: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7608 Jun 11 00:04:28.863: INFO: creating *v1.Role: csi-mock-volumes-7608-6515/external-provisioner-cfg-csi-mock-volumes-7608 Jun 11 00:04:28.867: INFO: creating *v1.RoleBinding: csi-mock-volumes-7608-6515/csi-provisioner-role-cfg Jun 11 00:04:28.870: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7608-6515/csi-resizer Jun 11 00:04:28.872: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7608 Jun 11 00:04:28.872: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7608 Jun 11 00:04:28.875: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7608 Jun 11 00:04:28.878: INFO: creating *v1.Role: csi-mock-volumes-7608-6515/external-resizer-cfg-csi-mock-volumes-7608 Jun 11 00:04:28.881: INFO: creating *v1.RoleBinding: csi-mock-volumes-7608-6515/csi-resizer-role-cfg Jun 11 00:04:28.883: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7608-6515/csi-snapshotter Jun 11 00:04:28.886: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7608 Jun 11 00:04:28.886: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7608 Jun 11 00:04:28.888: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7608 Jun 11 00:04:28.891: INFO: creating *v1.Role: csi-mock-volumes-7608-6515/external-snapshotter-leaderelection-csi-mock-volumes-7608 Jun 11 00:04:28.893: INFO: creating *v1.RoleBinding: csi-mock-volumes-7608-6515/external-snapshotter-leaderelection Jun 11 00:04:28.896: INFO: 
creating *v1.ServiceAccount: csi-mock-volumes-7608-6515/csi-mock Jun 11 00:04:28.899: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7608 Jun 11 00:04:28.901: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7608 Jun 11 00:04:28.904: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7608 Jun 11 00:04:28.906: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7608 Jun 11 00:04:28.908: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7608 Jun 11 00:04:28.911: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7608 Jun 11 00:04:28.913: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7608 Jun 11 00:04:28.916: INFO: creating *v1.StatefulSet: csi-mock-volumes-7608-6515/csi-mockplugin Jun 11 00:04:28.920: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7608 Jun 11 00:04:28.923: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7608" Jun 11 00:04:28.926: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7608 to register on node node1 I0611 00:04:41.073644 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7608","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:04:41.105596 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0611 00:04:41.107754 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7608","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:04:41.109799 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0611 00:04:41.150353 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0611 00:04:41.602156 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7608"},"Error":"","FullError":null} STEP: Creating pod Jun 11 00:04:45.196: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0611 00:04:45.227867 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0611 00:04:45.232415 29 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260"}}},"Error":"","FullError":null} I0611 00:04:51.711221 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:04:51.713: INFO: >>> kubeConfig: /root/.kube/config I0611 00:04:51.817106 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260","storage.kubernetes.io/csiProvisionerIdentity":"1654905881153-8081-csi-mock-csi-mock-volumes-7608"}},"Response":{},"Error":"","FullError":null} I0611 00:04:52.395888 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:04:52.474: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:52.557: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:04:52.649: INFO: >>> kubeConfig: /root/.kube/config I0611 00:04:52.741646 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260/globalmount","target_path":"/var/lib/kubelet/pods/01c4b6ef-8ac6-4582-8c3e-e1156f847050/volumes/kubernetes.io~csi/pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260","storage.kubernetes.io/csiProvisionerIdentity":"1654905881153-8081-csi-mock-csi-mock-volumes-7608"}},"Response":{},"Error":"","FullError":null} I0611 00:04:53.417493 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:04:53.420283 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/01c4b6ef-8ac6-4582-8c3e-e1156f847050/volumes/kubernetes.io~csi/pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} Jun 11 00:05:01.217: INFO: Deleting pod "pvc-volume-tester-rs5dl" in namespace "csi-mock-volumes-7608" Jun 11 00:05:01.221: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rs5dl" to be fully deleted Jun 11 00:05:06.375: INFO: >>> kubeConfig: /root/.kube/config I0611 00:05:06.457863 29 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/01c4b6ef-8ac6-4582-8c3e-e1156f847050/volumes/kubernetes.io~csi/pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260/mount"},"Response":{},"Error":"","FullError":null} I0611 00:05:06.475695 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:05:06.477285 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260/globalmount"},"Response":{},"Error":"","FullError":null} I0611 00:05:13.271741 29 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Jun 11 00:05:14.231: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6xvfg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7608", SelfLink:"", UID:"c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", ResourceVersion:"91530", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502685, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00328dc20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00328dc38)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00432f7e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00432f7f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:05:14.231: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6xvfg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7608", SelfLink:"", UID:"c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", ResourceVersion:"91533", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502685, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00328dc98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00328dcb0)}, 
v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00328dcc8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00328dce0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00432f830), VolumeMode:(*v1.PersistentVolumeMode)(0xc00432f840), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:05:14.231: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6xvfg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7608", SelfLink:"", UID:"c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", ResourceVersion:"91535", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502685, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7608", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428a68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428a80)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428a98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428ab0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428ae0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428af8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002c98860), VolumeMode:(*v1.PersistentVolumeMode)(0xc002c98870), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:05:14.231: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6xvfg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7608", SelfLink:"", UID:"c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", ResourceVersion:"91556", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502685, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7608", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428b28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428b40)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428b58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428b70)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428b88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428ba0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", StorageClassName:(*string)(0xc002c988a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002c988b0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:05:14.232: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6xvfg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7608", SelfLink:"", UID:"c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", ResourceVersion:"91557", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502685, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7608", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428bd0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428be8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428c00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428c18)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428c30), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428c48)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, 
VolumeName:"pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", StorageClassName:(*string)(0xc002c988e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002c988f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:05:14.232: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6xvfg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7608", SelfLink:"", UID:"c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", ResourceVersion:"92583", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502685, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc0028f01c8), DeletionGracePeriodSeconds:(*int64)(0xc00461c3c8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7608", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028f01e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028f01f8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028f0210), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028f0228)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028f0240), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028f0258)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", StorageClassName:(*string)(0xc004bee110), VolumeMode:(*v1.PersistentVolumeMode)(0xc004bee120), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:05:14.232: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6xvfg", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7608", SelfLink:"", UID:"c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", ResourceVersion:"92593", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502685, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc004af0240), DeletionGracePeriodSeconds:(*int64)(0xc0031aa128), Labels:map[string]string(nil), 
Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7608", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004af0258), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004af0270)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004af0288), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004af02a0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004af02b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004af02d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c80ffa01-ba34-4ebd-ab4b-9dbb49b10260", StorageClassName:(*string)(0xc004c06110), VolumeMode:(*v1.PersistentVolumeMode)(0xc004c06120), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-rs5dl Jun 11 00:05:14.232: INFO: Deleting pod "pvc-volume-tester-rs5dl" in namespace "csi-mock-volumes-7608" STEP: Deleting claim pvc-6xvfg STEP: Deleting storageclass csi-mock-volumes-7608-scr82lb STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7608 STEP: Waiting for namespaces [csi-mock-volumes-7608] to vanish STEP: uninstalling csi mock driver Jun 11 00:05:20.734: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7608-6515/csi-attacher Jun 11 00:05:20.738: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7608 Jun 11 00:05:20.742: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7608 Jun 11 00:05:20.746: INFO: deleting *v1.Role: csi-mock-volumes-7608-6515/external-attacher-cfg-csi-mock-volumes-7608 Jun 11 00:05:20.749: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7608-6515/csi-attacher-role-cfg Jun 11 00:05:20.753: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7608-6515/csi-provisioner Jun 11 00:05:20.757: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7608 Jun 11 00:05:20.762: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7608 Jun 11 00:05:20.769: INFO: deleting *v1.Role: csi-mock-volumes-7608-6515/external-provisioner-cfg-csi-mock-volumes-7608 Jun 11 00:05:20.777: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7608-6515/csi-provisioner-role-cfg Jun 11 00:05:20.784: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7608-6515/csi-resizer Jun 11 00:05:20.791: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7608 Jun 11 00:05:20.794: INFO: deleting 
*v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7608 Jun 11 00:05:20.797: INFO: deleting *v1.Role: csi-mock-volumes-7608-6515/external-resizer-cfg-csi-mock-volumes-7608 Jun 11 00:05:20.801: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7608-6515/csi-resizer-role-cfg Jun 11 00:05:20.805: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7608-6515/csi-snapshotter Jun 11 00:05:20.808: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7608 Jun 11 00:05:20.811: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7608 Jun 11 00:05:20.815: INFO: deleting *v1.Role: csi-mock-volumes-7608-6515/external-snapshotter-leaderelection-csi-mock-volumes-7608 Jun 11 00:05:20.819: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7608-6515/external-snapshotter-leaderelection Jun 11 00:05:20.822: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7608-6515/csi-mock Jun 11 00:05:20.826: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7608 Jun 11 00:05:20.829: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7608 Jun 11 00:05:20.833: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7608 Jun 11 00:05:20.836: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7608 Jun 11 00:05:20.839: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7608 Jun 11 00:05:20.843: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7608 Jun 11 00:05:20.846: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7608 Jun 11 00:05:20.849: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7608-6515/csi-mockplugin Jun 11 00:05:20.853: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7608 STEP: deleting the driver namespace: csi-mock-volumes-7608-6515 STEP: Waiting for namespaces [csi-mock-volumes-7608-6515] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:04.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:96.111 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1022 exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":8,"skipped":401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:50.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 
[BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:05:54.251: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-a4180fd0-623a-4a02-acb7-cca993a611f2 && mount --bind /tmp/local-volume-test-a4180fd0-623a-4a02-acb7-cca993a611f2 /tmp/local-volume-test-a4180fd0-623a-4a02-acb7-cca993a611f2] Namespace:persistent-local-volumes-test-8395 PodName:hostexec-node1-pmnww ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:54.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:54.352: INFO: Creating a PV followed by a PVC Jun 11 00:05:54.359: INFO: Waiting for PV local-pv4q9j5 to bind to PVC pvc-h54r5 Jun 11 00:05:54.359: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-h54r5] to have phase Bound Jun 11 00:05:54.361: INFO: PersistentVolumeClaim pvc-h54r5 found but phase is Pending instead of Bound. Jun 11 00:05:56.365: INFO: PersistentVolumeClaim pvc-h54r5 found but phase is Pending instead of Bound. Jun 11 00:05:58.369: INFO: PersistentVolumeClaim pvc-h54r5 found and phase=Bound (4.009550448s) Jun 11 00:05:58.369: INFO: Waiting up to 3m0s for PersistentVolume local-pv4q9j5 to have phase Bound Jun 11 00:05:58.371: INFO: PersistentVolume local-pv4q9j5 found and phase=Bound (2.057433ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:06:04.397: INFO: pod "pod-093daedf-2295-4a2c-834d-7f137a606ce5" created on Node "node1" STEP: Writing in pod1 Jun 11 00:06:04.397: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8395 PodName:pod-093daedf-2295-4a2c-834d-7f137a606ce5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:06:04.397: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:04.757: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 11 00:06:04.757: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8395 PodName:pod-093daedf-2295-4a2c-834d-7f137a606ce5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:06:04.757: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:04.843: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 11 00:06:04.843: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-a4180fd0-623a-4a02-acb7-cca993a611f2 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8395 PodName:pod-093daedf-2295-4a2c-834d-7f137a606ce5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:06:04.843: INFO: >>> kubeConfig: /root/.kube/config Jun 11 
00:06:05.034: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-a4180fd0-623a-4a02-acb7-cca993a611f2 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-093daedf-2295-4a2c-834d-7f137a606ce5 in namespace persistent-local-volumes-test-8395 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:06:05.038: INFO: Deleting PersistentVolumeClaim "pvc-h54r5" Jun 11 00:06:05.042: INFO: Deleting PersistentVolume "local-pv4q9j5" STEP: Removing the test directory Jun 11 00:06:05.046: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-a4180fd0-623a-4a02-acb7-cca993a611f2 && rm -r /tmp/local-volume-test-a4180fd0-623a-4a02-acb7-cca993a611f2] Namespace:persistent-local-volumes-test-8395 PodName:hostexec-node1-pmnww ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:05.046: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:05.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8395" for this suite. • [SLOW TEST:14.988 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":134,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:53.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:05:55.727: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir 
/tmp/local-volume-test-4a0e16dc-84fb-4457-bc93-a0d19311bf74-backend && mount --bind /tmp/local-volume-test-4a0e16dc-84fb-4457-bc93-a0d19311bf74-backend /tmp/local-volume-test-4a0e16dc-84fb-4457-bc93-a0d19311bf74-backend && ln -s /tmp/local-volume-test-4a0e16dc-84fb-4457-bc93-a0d19311bf74-backend /tmp/local-volume-test-4a0e16dc-84fb-4457-bc93-a0d19311bf74] Namespace:persistent-local-volumes-test-8953 PodName:hostexec-node1-p2n2j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:55.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:55.826: INFO: Creating a PV followed by a PVC Jun 11 00:05:55.833: INFO: Waiting for PV local-pvx98qr to bind to PVC pvc-cl2wc Jun 11 00:05:55.833: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cl2wc] to have phase Bound Jun 11 00:05:55.835: INFO: PersistentVolumeClaim pvc-cl2wc found but phase is Pending instead of Bound. Jun 11 00:05:57.839: INFO: PersistentVolumeClaim pvc-cl2wc found and phase=Bound (2.005596083s) Jun 11 00:05:57.839: INFO: Waiting up to 3m0s for PersistentVolume local-pvx98qr to have phase Bound Jun 11 00:05:57.841: INFO: PersistentVolume local-pvx98qr found and phase=Bound (2.059962ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:06:01.869: INFO: pod "pod-6109f2f0-f699-434b-b18d-1f49c846f202" created on Node "node1" STEP: Writing in pod1 Jun 11 00:06:01.869: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8953 PodName:pod-6109f2f0-f699-434b-b18d-1f49c846f202 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:06:01.869: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:01.951: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:06:01.951: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8953 PodName:pod-6109f2f0-f699-434b-b18d-1f49c846f202 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:06:01.951: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:02.086: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-6109f2f0-f699-434b-b18d-1f49c846f202 in namespace persistent-local-volumes-test-8953 STEP: Creating pod2 STEP: Creating a pod Jun 11 00:06:06.115: INFO: pod "pod-3bd08790-f514-49bf-85dc-1ce8e7a6e71a" created on Node "node1" STEP: Reading in pod2 Jun 11 00:06:06.115: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8953 PodName:pod-3bd08790-f514-49bf-85dc-1ce8e7a6e71a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:06:06.115: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:06.294: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-3bd08790-f514-49bf-85dc-1ce8e7a6e71a in namespace persistent-local-volumes-test-8953 [AfterEach] 
[Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:06:06.299: INFO: Deleting PersistentVolumeClaim "pvc-cl2wc" Jun 11 00:06:06.304: INFO: Deleting PersistentVolume "local-pvx98qr" STEP: Removing the test directory Jun 11 00:06:06.315: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-4a0e16dc-84fb-4457-bc93-a0d19311bf74 && umount /tmp/local-volume-test-4a0e16dc-84fb-4457-bc93-a0d19311bf74-backend && rm -r /tmp/local-volume-test-4a0e16dc-84fb-4457-bc93-a0d19311bf74-backend] Namespace:persistent-local-volumes-test-8953 PodName:hostexec-node1-p2n2j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:06.315: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:06.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8953" for this suite. • [SLOW TEST:12.844 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":246,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:41.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-0d08a39c-e985-4dc5-b6f8-aae85ad778b6" Jun 11 00:05:45.273: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0d08a39c-e985-4dc5-b6f8-aae85ad778b6 && dd if=/dev/zero of=/tmp/local-volume-test-0d08a39c-e985-4dc5-b6f8-aae85ad778b6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-0d08a39c-e985-4dc5-b6f8-aae85ad778b6/file] 
Namespace:persistent-local-volumes-test-4667 PodName:hostexec-node2-64pn2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:45.273: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:05:45.484: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0d08a39c-e985-4dc5-b6f8-aae85ad778b6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4667 PodName:hostexec-node2-64pn2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:05:45.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:05:45.983: INFO: Creating a PV followed by a PVC Jun 11 00:05:45.989: INFO: Waiting for PV local-pvgqgth to bind to PVC pvc-jk7vw Jun 11 00:05:45.989: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jk7vw] to have phase Bound Jun 11 00:05:45.992: INFO: PersistentVolumeClaim pvc-jk7vw found but phase is Pending instead of Bound. Jun 11 00:05:47.996: INFO: PersistentVolumeClaim pvc-jk7vw found but phase is Pending instead of Bound. Jun 11 00:05:49.999: INFO: PersistentVolumeClaim pvc-jk7vw found but phase is Pending instead of Bound. Jun 11 00:05:52.003: INFO: PersistentVolumeClaim pvc-jk7vw found but phase is Pending instead of Bound. Jun 11 00:05:54.008: INFO: PersistentVolumeClaim pvc-jk7vw found but phase is Pending instead of Bound. Jun 11 00:05:56.012: INFO: PersistentVolumeClaim pvc-jk7vw found but phase is Pending instead of Bound. Jun 11 00:05:58.015: INFO: PersistentVolumeClaim pvc-jk7vw found and phase=Bound (12.026036315s) Jun 11 00:05:58.015: INFO: Waiting up to 3m0s for PersistentVolume local-pvgqgth to have phase Bound Jun 11 00:05:58.017: INFO: PersistentVolume local-pvgqgth found and phase=Bound (2.030116ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 11 00:06:02.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4667 exec pod-5feaa6c4-82f8-4784-8fae-2b4d414582a5 --namespace=persistent-local-volumes-test-4667 -- stat -c %g /mnt/volume1' Jun 11 00:06:02.325: INFO: stderr: "" Jun 11 00:06:02.325: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 11 00:06:08.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4667 exec pod-fd582e4c-52da-4f0f-bf0d-4bd4ff0dccb1 --namespace=persistent-local-volumes-test-4667 -- stat -c %g /mnt/volume1' Jun 11 00:06:08.588: INFO: stderr: "" Jun 11 00:06:08.588: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-5feaa6c4-82f8-4784-8fae-2b4d414582a5 in namespace persistent-local-volumes-test-4667 STEP: Deleting second pod STEP: Deleting pod pod-fd582e4c-52da-4f0f-bf0d-4bd4ff0dccb1 in namespace persistent-local-volumes-test-4667 [AfterEach] [Volume type: blockfswithoutformat] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:06:08.598: INFO: Deleting PersistentVolumeClaim "pvc-jk7vw" Jun 11 00:06:08.601: INFO: Deleting PersistentVolume "local-pvgqgth" Jun 11 00:06:08.605: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0d08a39c-e985-4dc5-b6f8-aae85ad778b6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4667 PodName:hostexec-node2-64pn2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:08.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-0d08a39c-e985-4dc5-b6f8-aae85ad778b6/file Jun 11 00:06:08.696: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-4667 PodName:hostexec-node2-64pn2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:08.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-0d08a39c-e985-4dc5-b6f8-aae85ad778b6 Jun 11 00:06:08.812: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0d08a39c-e985-4dc5-b6f8-aae85ad778b6] Namespace:persistent-local-volumes-test-4667 PodName:hostexec-node2-64pn2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:08.812: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:08.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4667" for this suite. 
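For the blockfswithoutformat volume type, the hostexec pod above provisions the backing store by creating a 20 MiB file of zeros, attaching it to a free loop device with losetup -f, recovering the device name via losetup | grep | awk, and later detaching the device and removing the directory. Below is a minimal Go sketch of that provisioning and teardown sequence, assuming a Linux host with root privileges; the directory path is illustrative and, unlike the suite, the commands run locally rather than through nsenter inside a hostexec pod.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	dir := "/tmp/local-volume-demo" // illustrative path

	// Back a block device with a 20 MiB file of zeros and attach it to a free loop device.
	if _, err := run(fmt.Sprintf(
		"mkdir -p %[1]s && dd if=/dev/zero of=%[1]s/file bs=4096 count=5120 && losetup -f %[1]s/file", dir)); err != nil {
		panic(err)
	}

	// Recover which loop device the file landed on, as the test does with losetup|grep|awk.
	dev, err := run(fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir))
	if err != nil || dev == "" {
		panic(fmt.Sprintf("could not find loop device: %v", err))
	}
	fmt.Println("loop device:", dev)

	// Teardown mirrors the AfterEach above: detach the loop device, then remove the directory.
	if _, err := run(fmt.Sprintf("losetup -d %s && rm -r %s", dev, dir)); err != nil {
		panic(err)
	}
}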
• [SLOW TEST:27.689 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":6,"skipped":202,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:04.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8" Jun 11 00:06:08.981: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8 && dd if=/dev/zero of=/tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8/file] Namespace:persistent-local-volumes-test-6240 PodName:hostexec-node2-vl2db ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:08.981: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:09.113: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6240 PodName:hostexec-node2-vl2db ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:09.113: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:09.303: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8 && chmod o+rwx /tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8] Namespace:persistent-local-volumes-test-6240 PodName:hostexec-node2-vl2db ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:09.303: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Creating local PVCs and PVs Jun 11 00:06:09.763: INFO: Creating a PV followed by a PVC Jun 11 00:06:09.771: INFO: Waiting for PV local-pvvnhjf to bind to PVC pvc-drpbx Jun 11 00:06:09.771: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-drpbx] to have phase Bound Jun 11 00:06:09.773: INFO: PersistentVolumeClaim pvc-drpbx found but phase is Pending instead of Bound. Jun 11 00:06:11.777: INFO: PersistentVolumeClaim pvc-drpbx found but phase is Pending instead of Bound. Jun 11 00:06:13.781: INFO: PersistentVolumeClaim pvc-drpbx found and phase=Bound (4.010079025s) Jun 11 00:06:13.781: INFO: Waiting up to 3m0s for PersistentVolume local-pvvnhjf to have phase Bound Jun 11 00:06:13.783: INFO: PersistentVolume local-pvvnhjf found and phase=Bound (2.301496ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 11 00:06:13.789: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:06:13.791: INFO: Deleting PersistentVolumeClaim "pvc-drpbx" Jun 11 00:06:13.796: INFO: Deleting PersistentVolume "local-pvvnhjf" Jun 11 00:06:13.799: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8] Namespace:persistent-local-volumes-test-6240 PodName:hostexec-node2-vl2db ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:13.799: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:13.964: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6240 PodName:hostexec-node2-vl2db ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:13.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8/file Jun 11 00:06:14.231: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6240 PodName:hostexec-node2-vl2db ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:14.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8 Jun 11 00:06:14.332: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-02e4dc4a-2105-47de-9d00-9ef15b6beba8] Namespace:persistent-local-volumes-test-6240 PodName:hostexec-node2-vl2db ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:14.332: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:14.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6240" for this suite. S [SKIPPING] [9.586 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:06.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:06:12.557: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ed36120d-3232-4662-af71-f0a5731e7249-backend && mount --bind /tmp/local-volume-test-ed36120d-3232-4662-af71-f0a5731e7249-backend /tmp/local-volume-test-ed36120d-3232-4662-af71-f0a5731e7249-backend && ln -s /tmp/local-volume-test-ed36120d-3232-4662-af71-f0a5731e7249-backend /tmp/local-volume-test-ed36120d-3232-4662-af71-f0a5731e7249] Namespace:persistent-local-volumes-test-8525 PodName:hostexec-node2-clxvk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:12.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:06:12.644: INFO: Creating a PV followed by a PVC Jun 11 00:06:12.651: INFO: Waiting for PV local-pv65jgq to bind to PVC pvc-4mcrj Jun 11 00:06:12.651: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4mcrj] to have phase Bound Jun 11 00:06:12.654: INFO: PersistentVolumeClaim pvc-4mcrj found but phase is Pending instead of Bound. Jun 11 00:06:14.657: INFO: PersistentVolumeClaim pvc-4mcrj found but phase is Pending instead of Bound. Jun 11 00:06:16.661: INFO: PersistentVolumeClaim pvc-4mcrj found but phase is Pending instead of Bound. Jun 11 00:06:18.664: INFO: PersistentVolumeClaim pvc-4mcrj found but phase is Pending instead of Bound. 
Jun 11 00:06:20.669: INFO: PersistentVolumeClaim pvc-4mcrj found but phase is Pending instead of Bound. Jun 11 00:06:22.672: INFO: PersistentVolumeClaim pvc-4mcrj found but phase is Pending instead of Bound. Jun 11 00:06:24.676: INFO: PersistentVolumeClaim pvc-4mcrj found but phase is Pending instead of Bound. Jun 11 00:06:26.680: INFO: PersistentVolumeClaim pvc-4mcrj found but phase is Pending instead of Bound. Jun 11 00:06:28.683: INFO: PersistentVolumeClaim pvc-4mcrj found and phase=Bound (16.032223548s) Jun 11 00:06:28.683: INFO: Waiting up to 3m0s for PersistentVolume local-pv65jgq to have phase Bound Jun 11 00:06:28.686: INFO: PersistentVolume local-pv65jgq found and phase=Bound (2.324788ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 11 00:06:32.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8525 exec pod-4a3d2d9c-c2ba-4078-bd2d-6b90e14099cd --namespace=persistent-local-volumes-test-8525 -- stat -c %g /mnt/volume1' Jun 11 00:06:32.979: INFO: stderr: "" Jun 11 00:06:32.979: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-4a3d2d9c-c2ba-4078-bd2d-6b90e14099cd in namespace persistent-local-volumes-test-8525 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:06:32.984: INFO: Deleting PersistentVolumeClaim "pvc-4mcrj" Jun 11 00:06:32.988: INFO: Deleting PersistentVolume "local-pv65jgq" STEP: Removing the test directory Jun 11 00:06:32.992: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-ed36120d-3232-4662-af71-f0a5731e7249 && umount /tmp/local-volume-test-ed36120d-3232-4662-af71-f0a5731e7249-backend && rm -r /tmp/local-volume-test-ed36120d-3232-4662-af71-f0a5731e7249-backend] Namespace:persistent-local-volumes-test-8525 PodName:hostexec-node2-clxvk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:32.992: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:33.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8525" for this suite. 
• [SLOW TEST:26.774 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":7,"skipped":247,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:33.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename flexvolume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:169 Jun 11 00:06:33.344: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:33.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "flexvolume-5510" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be mountable when non-attachable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:173 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:14.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Jun 11 00:06:16.680: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-740453a3-ac4a-4005-9e45-397597a7eb11] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:16.680: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:16.773: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3babb0c9-c2d2-49b2-ba39-8025c7798614] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:16.773: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:16.864: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a3712d78-9906-4004-8576-259a8acb9a19] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:16.864: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:16.951: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-28f3f76f-8301-46b4-9304-fa37e2443054] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:16.951: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:17.060: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-056484e9-7600-4a39-ac1a-50f76201eddc] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 11 00:06:17.060: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:17.161: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-be4b1f8f-0e09-453c-b180-f4d7341cd6a2] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:17.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:06:17.243: INFO: Creating a PV followed by a PVC Jun 11 00:06:17.251: INFO: Creating a PV followed by a PVC Jun 11 00:06:17.256: INFO: Creating a PV followed by a PVC Jun 11 00:06:17.262: INFO: Creating a PV followed by a PVC Jun 11 00:06:17.268: INFO: Creating a PV followed by a PVC Jun 11 00:06:17.273: INFO: Creating a PV followed by a PVC Jun 11 00:06:27.323: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Jun 11 00:06:31.345: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e3e55cba-543f-4483-a51b-3424f24d0473] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:31.345: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:31.436: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-feec1709-91d0-4173-b408-d37e5398d7ff] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:31.436: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:31.523: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5fdf3b58-44fd-48b7-b400-23a8cac67242] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:31.523: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:31.619: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d9470af4-0e26-4731-a0ab-f6bc2b9af367] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:31.619: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:31.706: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-85f8f035-7040-4d57-9bfb-f3176d4ae79e] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:31.706: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:31.794: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-134aacee-9a4e-4be4-8e73-776467d3f996] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:31.794: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:06:31.912: INFO: Creating a PV followed by a PVC Jun 11 00:06:31.919: INFO: Creating a PV followed by a PVC Jun 11 00:06:31.925: INFO: Creating a PV followed by a PVC Jun 11 00:06:31.930: INFO: Creating a PV followed by a PVC Jun 11 00:06:31.936: INFO: Creating a PV followed by a PVC Jun 11 00:06:31.942: INFO: Creating a PV followed by a PVC Jun 11 00:06:41.988: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod management is parallel and pod has anti-affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425 Jun 11 00:06:41.988: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Jun 11 00:06:41.990: INFO: Deleting PersistentVolumeClaim "pvc-s2vnl" Jun 11 00:06:41.995: INFO: Deleting PersistentVolume "local-pvxfnhx" STEP: Cleaning up PVC and PV Jun 11 00:06:42.000: INFO: Deleting PersistentVolumeClaim "pvc-hw79h" Jun 11 00:06:42.003: INFO: Deleting PersistentVolume "local-pvc742c" STEP: Cleaning up PVC and PV Jun 11 00:06:42.007: INFO: Deleting PersistentVolumeClaim "pvc-jhxjw" Jun 11 00:06:42.011: INFO: Deleting PersistentVolume "local-pvsjpxq" STEP: Cleaning up PVC and PV Jun 11 00:06:42.014: INFO: Deleting PersistentVolumeClaim "pvc-kbw5h" Jun 11 00:06:42.018: INFO: Deleting PersistentVolume "local-pvmjkpx" STEP: Cleaning up PVC and PV Jun 11 00:06:42.021: INFO: Deleting PersistentVolumeClaim "pvc-p4zq7" Jun 11 00:06:42.025: INFO: Deleting PersistentVolume "local-pvxc742" STEP: Cleaning up PVC and PV Jun 11 00:06:42.030: INFO: Deleting PersistentVolumeClaim "pvc-nrj5m" Jun 11 00:06:42.033: INFO: Deleting PersistentVolume "local-pv59q2p" STEP: Removing the test directory Jun 11 00:06:42.036: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-740453a3-ac4a-4005-9e45-397597a7eb11] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:42.130: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3babb0c9-c2d2-49b2-ba39-8025c7798614] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:42.220: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a3712d78-9906-4004-8576-259a8acb9a19] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:42.320: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-28f3f76f-8301-46b4-9304-fa37e2443054] Namespace:persistent-local-volumes-test-777 
PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:42.419: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-056484e9-7600-4a39-ac1a-50f76201eddc] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:42.509: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-be4b1f8f-0e09-453c-b180-f4d7341cd6a2] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node1-jfwbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Jun 11 00:06:42.601: INFO: Deleting PersistentVolumeClaim "pvc-gqn2r" Jun 11 00:06:42.606: INFO: Deleting PersistentVolume "local-pvntfbc" STEP: Cleaning up PVC and PV Jun 11 00:06:42.611: INFO: Deleting PersistentVolumeClaim "pvc-8k8lp" Jun 11 00:06:42.615: INFO: Deleting PersistentVolume "local-pvn2cwk" STEP: Cleaning up PVC and PV Jun 11 00:06:42.619: INFO: Deleting PersistentVolumeClaim "pvc-4rwb8" Jun 11 00:06:42.622: INFO: Deleting PersistentVolume "local-pvqw994" STEP: Cleaning up PVC and PV Jun 11 00:06:42.625: INFO: Deleting PersistentVolumeClaim "pvc-chwtj" Jun 11 00:06:42.628: INFO: Deleting PersistentVolume "local-pvcj7zs" STEP: Cleaning up PVC and PV Jun 11 00:06:42.632: INFO: Deleting PersistentVolumeClaim "pvc-z57zd" Jun 11 00:06:42.636: INFO: Deleting PersistentVolume "local-pvldt9k" STEP: Cleaning up PVC and PV Jun 11 00:06:42.639: INFO: Deleting PersistentVolumeClaim "pvc-92fh7" Jun 11 00:06:42.643: INFO: Deleting PersistentVolume "local-pv85n8l" STEP: Removing the test directory Jun 11 00:06:42.646: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e3e55cba-543f-4483-a51b-3424f24d0473] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:42.728: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-feec1709-91d0-4173-b408-d37e5398d7ff] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:42.818: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5fdf3b58-44fd-48b7-b400-23a8cac67242] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:42.903: INFO: ExecWithOptions 
{Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d9470af4-0e26-4731-a0ab-f6bc2b9af367] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:42.988: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-85f8f035-7040-4d57-9bfb-f3176d4ae79e] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:42.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:43.105: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-134aacee-9a4e-4be4-8e73-776467d3f996] Namespace:persistent-local-volumes-test-777 PodName:hostexec-node2-q4wp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:43.106: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:43.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-777" for this suite. S [SKIPPING] [28.571 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod management is parallel and pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:33.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-7ecbc796-fe89-4af4-b8c9-466bb495a1f4" Jun 11 00:06:37.456: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7ecbc796-fe89-4af4-b8c9-466bb495a1f4" && mount -t tmpfs -o size=10m 
tmpfs-"/tmp/local-volume-test-7ecbc796-fe89-4af4-b8c9-466bb495a1f4" "/tmp/local-volume-test-7ecbc796-fe89-4af4-b8c9-466bb495a1f4"] Namespace:persistent-local-volumes-test-2819 PodName:hostexec-node2-nbxkw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:37.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:06:37.562: INFO: Creating a PV followed by a PVC Jun 11 00:06:37.568: INFO: Waiting for PV local-pvqxks6 to bind to PVC pvc-kbzh2 Jun 11 00:06:37.568: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-kbzh2] to have phase Bound Jun 11 00:06:37.571: INFO: PersistentVolumeClaim pvc-kbzh2 found but phase is Pending instead of Bound. Jun 11 00:06:39.577: INFO: PersistentVolumeClaim pvc-kbzh2 found but phase is Pending instead of Bound. Jun 11 00:06:41.581: INFO: PersistentVolumeClaim pvc-kbzh2 found but phase is Pending instead of Bound. Jun 11 00:06:43.587: INFO: PersistentVolumeClaim pvc-kbzh2 found and phase=Bound (6.018212115s) Jun 11 00:06:43.587: INFO: Waiting up to 3m0s for PersistentVolume local-pvqxks6 to have phase Bound Jun 11 00:06:43.589: INFO: PersistentVolume local-pvqxks6 found and phase=Bound (2.524116ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:06:47.617: INFO: pod "pod-729b52d4-ca8f-4754-ac9d-890092d0619b" created on Node "node2" STEP: Writing in pod1 Jun 11 00:06:47.617: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2819 PodName:pod-729b52d4-ca8f-4754-ac9d-890092d0619b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:06:47.617: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:47.697: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:06:47.697: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2819 PodName:pod-729b52d4-ca8f-4754-ac9d-890092d0619b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:06:47.697: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:47.772: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-729b52d4-ca8f-4754-ac9d-890092d0619b in namespace persistent-local-volumes-test-2819 STEP: Creating pod2 STEP: Creating a pod Jun 11 00:06:51.801: INFO: pod "pod-05b15312-34bf-4872-aaf8-6d424243fe1f" created on Node "node2" STEP: Reading in pod2 Jun 11 00:06:51.801: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2819 PodName:pod-05b15312-34bf-4872-aaf8-6d424243fe1f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:06:51.801: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:51.881: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-05b15312-34bf-4872-aaf8-6d424243fe1f in namespace persistent-local-volumes-test-2819 [AfterEach] [Volume type: tmpfs] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:06:51.888: INFO: Deleting PersistentVolumeClaim "pvc-kbzh2" Jun 11 00:06:51.893: INFO: Deleting PersistentVolume "local-pvqxks6" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-7ecbc796-fe89-4af4-b8c9-466bb495a1f4" Jun 11 00:06:51.897: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7ecbc796-fe89-4af4-b8c9-466bb495a1f4"] Namespace:persistent-local-volumes-test-2819 PodName:hostexec-node2-nbxkw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:51.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:06:51.993: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7ecbc796-fe89-4af4-b8c9-466bb495a1f4] Namespace:persistent-local-volumes-test-2819 PodName:hostexec-node2-nbxkw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:51.993: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:52.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2819" for this suite. • [SLOW TEST:18.677 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":291,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:52.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 STEP: Creating configMap with name configmap-test-volume-map-489b1b2a-7a4a-4a78-9386-3661ebd4b1a9 STEP: Creating a pod to test consume configMaps Jun 11 00:06:52.146: INFO: Waiting up to 5m0s for pod "pod-configmaps-a0e8ecb6-87b8-419d-8c0a-73e92a7aad6e" in namespace "configmap-3445" to be "Succeeded or Failed" Jun 11 00:06:52.149: 
INFO: Pod "pod-configmaps-a0e8ecb6-87b8-419d-8c0a-73e92a7aad6e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.606466ms Jun 11 00:06:54.153: INFO: Pod "pod-configmaps-a0e8ecb6-87b8-419d-8c0a-73e92a7aad6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007665423s Jun 11 00:06:56.156: INFO: Pod "pod-configmaps-a0e8ecb6-87b8-419d-8c0a-73e92a7aad6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010640829s STEP: Saw pod success Jun 11 00:06:56.156: INFO: Pod "pod-configmaps-a0e8ecb6-87b8-419d-8c0a-73e92a7aad6e" satisfied condition "Succeeded or Failed" Jun 11 00:06:56.159: INFO: Trying to get logs from node node2 pod pod-configmaps-a0e8ecb6-87b8-419d-8c0a-73e92a7aad6e container agnhost-container: STEP: delete the pod Jun 11 00:06:56.175: INFO: Waiting for pod pod-configmaps-a0e8ecb6-87b8-419d-8c0a-73e92a7aad6e to disappear Jun 11 00:06:56.176: INFO: Pod pod-configmaps-a0e8ecb6-87b8-419d-8c0a-73e92a7aad6e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:06:56.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3445" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":297,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:43.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178" Jun 11 00:06:47.258: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178 && dd if=/dev/zero of=/tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178/file] Namespace:persistent-local-volumes-test-1072 PodName:hostexec-node1-rwfqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:47.258: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:47.381: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1072 PodName:hostexec-node1-rwfqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:47.381: INFO: >>> kubeConfig: 
/root/.kube/config Jun 11 00:06:47.472: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178 && chmod o+rwx /tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178] Namespace:persistent-local-volumes-test-1072 PodName:hostexec-node1-rwfqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:06:47.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:06:47.745: INFO: Creating a PV followed by a PVC Jun 11 00:06:47.752: INFO: Waiting for PV local-pvjhjpj to bind to PVC pvc-58k5h Jun 11 00:06:47.752: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-58k5h] to have phase Bound Jun 11 00:06:47.754: INFO: PersistentVolumeClaim pvc-58k5h found but phase is Pending instead of Bound. Jun 11 00:06:49.758: INFO: PersistentVolumeClaim pvc-58k5h found but phase is Pending instead of Bound. Jun 11 00:06:51.761: INFO: PersistentVolumeClaim pvc-58k5h found but phase is Pending instead of Bound. Jun 11 00:06:53.766: INFO: PersistentVolumeClaim pvc-58k5h found but phase is Pending instead of Bound. Jun 11 00:06:55.770: INFO: PersistentVolumeClaim pvc-58k5h found but phase is Pending instead of Bound. Jun 11 00:06:57.774: INFO: PersistentVolumeClaim pvc-58k5h found and phase=Bound (10.022127539s) Jun 11 00:06:57.774: INFO: Waiting up to 3m0s for PersistentVolume local-pvjhjpj to have phase Bound Jun 11 00:06:57.777: INFO: PersistentVolume local-pvjhjpj found and phase=Bound (2.694796ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:07:01.803: INFO: pod "pod-ec267cfe-e04d-4c9e-8e2a-3be7a3aa9f20" created on Node "node1" STEP: Writing in pod1 Jun 11 00:07:01.803: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1072 PodName:pod-ec267cfe-e04d-4c9e-8e2a-3be7a3aa9f20 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:07:01.803: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:07:01.894: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:07:01.894: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1072 PodName:pod-ec267cfe-e04d-4c9e-8e2a-3be7a3aa9f20 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:07:01.894: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:07:01.986: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-ec267cfe-e04d-4c9e-8e2a-3be7a3aa9f20 in namespace persistent-local-volumes-test-1072 STEP: Creating pod2 STEP: Creating a pod Jun 11 00:07:06.011: INFO: pod "pod-68933d93-e9c0-424f-a98e-2b58c656701f" created on Node "node1" STEP: Reading in pod2 Jun 11 00:07:06.011: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1072 PodName:pod-68933d93-e9c0-424f-a98e-2b58c656701f ContainerName:write-pod Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:07:06.011: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:07:06.089: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-68933d93-e9c0-424f-a98e-2b58c656701f in namespace persistent-local-volumes-test-1072 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:07:06.094: INFO: Deleting PersistentVolumeClaim "pvc-58k5h" Jun 11 00:07:06.098: INFO: Deleting PersistentVolume "local-pvjhjpj" Jun 11 00:07:06.102: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178] Namespace:persistent-local-volumes-test-1072 PodName:hostexec-node1-rwfqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:06.102: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:07:06.203: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1072 PodName:hostexec-node1-rwfqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:06.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178/file Jun 11 00:07:06.291: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1072 PodName:hostexec-node1-rwfqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:06.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178 Jun 11 00:07:06.388: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bb5e24f7-a7f0-4f03-881d-32c804ed0178] Namespace:persistent-local-volumes-test-1072 PodName:hostexec-node1-rwfqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:06.388: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:06.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1072" for this suite. 
• [SLOW TEST:23.295 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:53.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should report attach limit when limit is bigger than 0 [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529 STEP: Building a driver namespace object, basename csi-mock-volumes-1236 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:05:53.234: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1236-5356/csi-attacher Jun 11 00:05:53.236: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1236 Jun 11 00:05:53.236: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1236 Jun 11 00:05:53.239: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1236 Jun 11 00:05:53.242: INFO: creating *v1.Role: csi-mock-volumes-1236-5356/external-attacher-cfg-csi-mock-volumes-1236 Jun 11 00:05:53.245: INFO: creating *v1.RoleBinding: csi-mock-volumes-1236-5356/csi-attacher-role-cfg Jun 11 00:05:53.247: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1236-5356/csi-provisioner Jun 11 00:05:53.250: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1236 Jun 11 00:05:53.250: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1236 Jun 11 00:05:53.253: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1236 Jun 11 00:05:53.255: INFO: creating *v1.Role: csi-mock-volumes-1236-5356/external-provisioner-cfg-csi-mock-volumes-1236 Jun 11 00:05:53.258: INFO: creating *v1.RoleBinding: csi-mock-volumes-1236-5356/csi-provisioner-role-cfg Jun 11 00:05:53.262: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1236-5356/csi-resizer Jun 11 00:05:53.264: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1236 Jun 11 00:05:53.264: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1236 Jun 11 00:05:53.267: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1236 Jun 11 00:05:53.269: INFO: creating *v1.Role: 
csi-mock-volumes-1236-5356/external-resizer-cfg-csi-mock-volumes-1236 Jun 11 00:05:53.272: INFO: creating *v1.RoleBinding: csi-mock-volumes-1236-5356/csi-resizer-role-cfg Jun 11 00:05:53.275: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1236-5356/csi-snapshotter Jun 11 00:05:53.278: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1236 Jun 11 00:05:53.278: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1236 Jun 11 00:05:53.281: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1236 Jun 11 00:05:53.284: INFO: creating *v1.Role: csi-mock-volumes-1236-5356/external-snapshotter-leaderelection-csi-mock-volumes-1236 Jun 11 00:05:53.287: INFO: creating *v1.RoleBinding: csi-mock-volumes-1236-5356/external-snapshotter-leaderelection Jun 11 00:05:53.290: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1236-5356/csi-mock Jun 11 00:05:53.294: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1236 Jun 11 00:05:53.296: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1236 Jun 11 00:05:53.299: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1236 Jun 11 00:05:53.301: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1236 Jun 11 00:05:53.303: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1236 Jun 11 00:05:53.306: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1236 Jun 11 00:05:53.309: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1236 Jun 11 00:05:53.311: INFO: creating *v1.StatefulSet: csi-mock-volumes-1236-5356/csi-mockplugin Jun 11 00:05:53.315: INFO: creating *v1.StatefulSet: csi-mock-volumes-1236-5356/csi-mockplugin-attacher Jun 11 00:05:53.318: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1236 to register on node node2 STEP: Creating pod Jun 11 00:05:58.330: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:05:58.334: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-m4qhf] to have phase Bound Jun 11 00:05:58.336: INFO: PersistentVolumeClaim pvc-m4qhf found but phase is Pending instead of Bound. Jun 11 00:06:00.340: INFO: PersistentVolumeClaim pvc-m4qhf found and phase=Bound (2.006424338s) STEP: Creating pod Jun 11 00:06:06.364: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:06:06.369: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-4mkrs] to have phase Bound Jun 11 00:06:06.371: INFO: PersistentVolumeClaim pvc-4mkrs found but phase is Pending instead of Bound. Jun 11 00:06:08.374: INFO: PersistentVolumeClaim pvc-4mkrs found and phase=Bound (2.005636611s) STEP: Creating pod Jun 11 00:06:16.403: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:06:16.406: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-vl9b9] to have phase Bound Jun 11 00:06:16.408: INFO: PersistentVolumeClaim pvc-vl9b9 found but phase is Pending instead of Bound. 
Jun 11 00:06:18.413: INFO: PersistentVolumeClaim pvc-vl9b9 found and phase=Bound (2.00724216s) STEP: Deleting pod pvc-volume-tester-rsrmv Jun 11 00:06:28.435: INFO: Deleting pod "pvc-volume-tester-rsrmv" in namespace "csi-mock-volumes-1236" Jun 11 00:06:28.440: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rsrmv" to be fully deleted STEP: Deleting pod pvc-volume-tester-wk9rj Jun 11 00:06:32.445: INFO: Deleting pod "pvc-volume-tester-wk9rj" in namespace "csi-mock-volumes-1236" Jun 11 00:06:32.450: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wk9rj" to be fully deleted STEP: Deleting pod pvc-volume-tester-v249f Jun 11 00:06:38.456: INFO: Deleting pod "pvc-volume-tester-v249f" in namespace "csi-mock-volumes-1236" Jun 11 00:06:38.460: INFO: Wait up to 5m0s for pod "pvc-volume-tester-v249f" to be fully deleted STEP: Deleting claim pvc-m4qhf Jun 11 00:06:48.471: INFO: Waiting up to 2m0s for PersistentVolume pvc-543e3b66-9b4e-462f-88e3-4796945d4751 to get deleted Jun 11 00:06:48.473: INFO: PersistentVolume pvc-543e3b66-9b4e-462f-88e3-4796945d4751 found and phase=Bound (2.013682ms) Jun 11 00:06:50.477: INFO: PersistentVolume pvc-543e3b66-9b4e-462f-88e3-4796945d4751 was removed STEP: Deleting claim pvc-4mkrs Jun 11 00:06:50.485: INFO: Waiting up to 2m0s for PersistentVolume pvc-5b0f739a-cbdf-4d42-bfdc-ef8ea4854a54 to get deleted Jun 11 00:06:50.487: INFO: PersistentVolume pvc-5b0f739a-cbdf-4d42-bfdc-ef8ea4854a54 found and phase=Bound (2.411189ms) Jun 11 00:06:52.493: INFO: PersistentVolume pvc-5b0f739a-cbdf-4d42-bfdc-ef8ea4854a54 was removed STEP: Deleting claim pvc-vl9b9 Jun 11 00:06:52.500: INFO: Waiting up to 2m0s for PersistentVolume pvc-6829ddd1-f29c-4133-93e8-845c506de1ab to get deleted Jun 11 00:06:52.502: INFO: PersistentVolume pvc-6829ddd1-f29c-4133-93e8-845c506de1ab found and phase=Bound (2.205072ms) Jun 11 00:06:54.508: INFO: PersistentVolume pvc-6829ddd1-f29c-4133-93e8-845c506de1ab was removed STEP: Deleting storageclass csi-mock-volumes-1236-scms2fj STEP: Deleting storageclass csi-mock-volumes-1236-scxp9sk STEP: Deleting storageclass csi-mock-volumes-1236-sczj6kl STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1236 STEP: Waiting for namespaces [csi-mock-volumes-1236] to vanish STEP: uninstalling csi mock driver Jun 11 00:07:00.531: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1236-5356/csi-attacher Jun 11 00:07:00.535: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1236 Jun 11 00:07:00.539: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1236 Jun 11 00:07:00.542: INFO: deleting *v1.Role: csi-mock-volumes-1236-5356/external-attacher-cfg-csi-mock-volumes-1236 Jun 11 00:07:00.546: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1236-5356/csi-attacher-role-cfg Jun 11 00:07:00.549: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1236-5356/csi-provisioner Jun 11 00:07:00.553: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1236 Jun 11 00:07:00.556: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1236 Jun 11 00:07:00.560: INFO: deleting *v1.Role: csi-mock-volumes-1236-5356/external-provisioner-cfg-csi-mock-volumes-1236 Jun 11 00:07:00.567: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1236-5356/csi-provisioner-role-cfg Jun 11 00:07:00.576: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1236-5356/csi-resizer Jun 11 00:07:00.585: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1236 Jun 11 00:07:00.590: 
INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1236 Jun 11 00:07:00.594: INFO: deleting *v1.Role: csi-mock-volumes-1236-5356/external-resizer-cfg-csi-mock-volumes-1236 Jun 11 00:07:00.598: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1236-5356/csi-resizer-role-cfg Jun 11 00:07:00.602: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1236-5356/csi-snapshotter Jun 11 00:07:00.605: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1236 Jun 11 00:07:00.608: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1236 Jun 11 00:07:00.611: INFO: deleting *v1.Role: csi-mock-volumes-1236-5356/external-snapshotter-leaderelection-csi-mock-volumes-1236 Jun 11 00:07:00.614: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1236-5356/external-snapshotter-leaderelection Jun 11 00:07:00.618: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1236-5356/csi-mock Jun 11 00:07:00.621: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1236 Jun 11 00:07:00.624: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1236 Jun 11 00:07:00.628: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1236 Jun 11 00:07:00.631: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1236 Jun 11 00:07:00.635: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1236 Jun 11 00:07:00.638: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1236 Jun 11 00:07:00.641: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1236 Jun 11 00:07:00.645: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1236-5356/csi-mockplugin Jun 11 00:07:00.648: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1236-5356/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1236-5356 STEP: Waiting for namespaces [csi-mock-volumes-1236-5356] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:12.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:79.486 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI volume limit information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:528 should report attach limit when limit is bigger than 0 [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]","total":-1,"completed":6,"skipped":158,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:07:12.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 11 00:07:16.739: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8735 PodName:hostexec-node1-ssd4q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:16.739: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:07:16.828: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 11 00:07:16.828: INFO: exec node1: stdout: "0\n" Jun 11 00:07:16.828: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 11 00:07:16.828: INFO: exec node1: exit code: 0 Jun 11 00:07:16.828: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:16.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8735" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.149 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:08.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 STEP: Building a driver namespace object, basename csi-mock-volumes-990 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 11 00:06:08.995: INFO: creating *v1.ServiceAccount: csi-mock-volumes-990-8057/csi-attacher Jun 11 00:06:08.997: INFO: creating *v1.ClusterRole: 
external-attacher-runner-csi-mock-volumes-990 Jun 11 00:06:08.997: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-990 Jun 11 00:06:09.002: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-990 Jun 11 00:06:09.005: INFO: creating *v1.Role: csi-mock-volumes-990-8057/external-attacher-cfg-csi-mock-volumes-990 Jun 11 00:06:09.007: INFO: creating *v1.RoleBinding: csi-mock-volumes-990-8057/csi-attacher-role-cfg Jun 11 00:06:09.010: INFO: creating *v1.ServiceAccount: csi-mock-volumes-990-8057/csi-provisioner Jun 11 00:06:09.013: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-990 Jun 11 00:06:09.013: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-990 Jun 11 00:06:09.015: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-990 Jun 11 00:06:09.020: INFO: creating *v1.Role: csi-mock-volumes-990-8057/external-provisioner-cfg-csi-mock-volumes-990 Jun 11 00:06:09.023: INFO: creating *v1.RoleBinding: csi-mock-volumes-990-8057/csi-provisioner-role-cfg Jun 11 00:06:09.026: INFO: creating *v1.ServiceAccount: csi-mock-volumes-990-8057/csi-resizer Jun 11 00:06:09.030: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-990 Jun 11 00:06:09.030: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-990 Jun 11 00:06:09.032: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-990 Jun 11 00:06:09.035: INFO: creating *v1.Role: csi-mock-volumes-990-8057/external-resizer-cfg-csi-mock-volumes-990 Jun 11 00:06:09.038: INFO: creating *v1.RoleBinding: csi-mock-volumes-990-8057/csi-resizer-role-cfg Jun 11 00:06:09.041: INFO: creating *v1.ServiceAccount: csi-mock-volumes-990-8057/csi-snapshotter Jun 11 00:06:09.043: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-990 Jun 11 00:06:09.043: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-990 Jun 11 00:06:09.046: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-990 Jun 11 00:06:09.055: INFO: creating *v1.Role: csi-mock-volumes-990-8057/external-snapshotter-leaderelection-csi-mock-volumes-990 Jun 11 00:06:09.059: INFO: creating *v1.RoleBinding: csi-mock-volumes-990-8057/external-snapshotter-leaderelection Jun 11 00:06:09.064: INFO: creating *v1.ServiceAccount: csi-mock-volumes-990-8057/csi-mock Jun 11 00:06:09.070: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-990 Jun 11 00:06:09.074: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-990 Jun 11 00:06:09.077: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-990 Jun 11 00:06:09.079: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-990 Jun 11 00:06:09.083: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-990 Jun 11 00:06:09.085: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-990 Jun 11 00:06:09.088: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-990 Jun 11 00:06:09.091: INFO: creating *v1.StatefulSet: csi-mock-volumes-990-8057/csi-mockplugin Jun 11 00:06:09.096: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-990 Jun 11 00:06:09.099: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-990" Jun 11 00:06:09.101: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-990 to register on node node1 
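The deployment above ends by waiting up to 4m0s for the CSIDriver object "csi-mock-csi-mock-volumes-990" and then for the driver to register on node1. A minimal client-go sketch of the first of those waits, assuming a pre-built clientset and a fixed 2-second poll interval (both assumptions; this is not the framework's own helper):

package e2esketch

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForCSIDriver polls until the named CSIDriver API object exists or the
// timeout expires. The suite's own helper may check more than this (for
// example per-node registration); this sketch covers only the API-object
// half that the "waiting up to 4m0s" message refers to.
func waitForCSIDriver(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		_, err := cs.StorageV1().CSIDrivers().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			return nil
		}
		if !apierrors.IsNotFound(err) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("CSIDriver %q did not appear within %v", name, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}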
I0611 00:06:14.148628 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-990","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:06:14.244478 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0611 00:06:14.246405 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-990","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:06:14.248018 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0611 00:06:14.289357 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0611 00:06:14.749369 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-990"},"Error":"","FullError":null} STEP: Creating pod Jun 11 00:06:18.624: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:06:18.630: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-k7pt2] to have phase Bound Jun 11 00:06:18.631: INFO: PersistentVolumeClaim pvc-k7pt2 found but phase is Pending instead of Bound. 
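The claim pvc-k7pt2 created above is simply re-read until it leaves Pending; in this "exhausted, immediate binding" case the first CreateVolume attempt below is answered with a fake ResourceExhausted error, the call is retried, and the claim still reaches Bound about two seconds later. A rough client-go equivalent of that wait loop (function name and interval are illustrative):

package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound re-reads the claim until its status phase is Bound,
// mirroring the "found but phase is Pending instead of Bound" messages.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pvc.Status.Phase == v1.ClaimBound {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("PVC %s/%s still %s after %v", ns, name, pvc.Status.Phase, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}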
I0611 00:06:18.636764 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d97f9169-7819-4809-9786-fb681044cd71","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0611 00:06:18.638395 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d97f9169-7819-4809-9786-fb681044cd71","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d97f9169-7819-4809-9786-fb681044cd71"}}},"Error":"","FullError":null} Jun 11 00:06:20.635: INFO: PersistentVolumeClaim pvc-k7pt2 found and phase=Bound (2.005660742s) I0611 00:06:20.877774 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:06:20.880: INFO: >>> kubeConfig: /root/.kube/config I0611 00:06:20.976336 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d97f9169-7819-4809-9786-fb681044cd71/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d97f9169-7819-4809-9786-fb681044cd71","storage.kubernetes.io/csiProvisionerIdentity":"1654905974330-8081-csi-mock-csi-mock-volumes-990"}},"Response":{},"Error":"","FullError":null} I0611 00:06:20.990080 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:06:20.992: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:21.101: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:06:21.183: INFO: >>> kubeConfig: /root/.kube/config I0611 00:06:21.266302 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d97f9169-7819-4809-9786-fb681044cd71/globalmount","target_path":"/var/lib/kubelet/pods/9c95956f-e4df-4044-b9f0-d84c5fa71240/volumes/kubernetes.io~csi/pvc-d97f9169-7819-4809-9786-fb681044cd71/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d97f9169-7819-4809-9786-fb681044cd71","storage.kubernetes.io/csiProvisionerIdentity":"1654905974330-8081-csi-mock-csi-mock-volumes-990"}},"Response":{},"Error":"","FullError":null} Jun 11 00:06:24.657: INFO: Deleting pod "pvc-volume-tester-brd48" in namespace "csi-mock-volumes-990" Jun 11 00:06:24.663: INFO: Wait up to 5m0s for pod "pvc-volume-tester-brd48" to be fully deleted Jun 11 00:06:29.192: INFO: >>> kubeConfig: /root/.kube/config I0611 00:06:29.282900 39 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9c95956f-e4df-4044-b9f0-d84c5fa71240/volumes/kubernetes.io~csi/pvc-d97f9169-7819-4809-9786-fb681044cd71/mount"},"Response":{},"Error":"","FullError":null} I0611 00:06:29.297019 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:06:29.384029 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d97f9169-7819-4809-9786-fb681044cd71/globalmount"},"Response":{},"Error":"","FullError":null} I0611 00:06:38.686438 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Jun 11 00:06:39.676: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k7pt2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-990", SelfLink:"", UID:"d97f9169-7819-4809-9786-fb681044cd71", ResourceVersion:"94844", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502778, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003d9e438), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003d9e450)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004669f20), VolumeMode:(*v1.PersistentVolumeMode)(0xc004669f30), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:06:39.676: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k7pt2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-990", SelfLink:"", UID:"d97f9169-7819-4809-9786-fb681044cd71", ResourceVersion:"94845", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502778, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-990"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003736c30), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc003736c48)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003736c60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003736c78)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0046a8530), VolumeMode:(*v1.PersistentVolumeMode)(0xc0046a8550), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:06:39.676: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k7pt2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-990", SelfLink:"", UID:"d97f9169-7819-4809-9786-fb681044cd71", ResourceVersion:"94851", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502778, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-990"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004b8ef00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004b8ef18)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004b8ef30), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004b8ef48)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d97f9169-7819-4809-9786-fb681044cd71", StorageClassName:(*string)(0xc00481a410), VolumeMode:(*v1.PersistentVolumeMode)(0xc00481a420), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:06:39.677: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k7pt2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-990", SelfLink:"", UID:"d97f9169-7819-4809-9786-fb681044cd71", ResourceVersion:"94854", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502778, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", 
"pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-990"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004b8ef78), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004b8ef90)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004b8efa8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004b8efc0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d97f9169-7819-4809-9786-fb681044cd71", StorageClassName:(*string)(0xc00481a450), VolumeMode:(*v1.PersistentVolumeMode)(0xc00481a460), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:06:39.677: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k7pt2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-990", SelfLink:"", UID:"d97f9169-7819-4809-9786-fb681044cd71", ResourceVersion:"95080", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502778, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc0049c42d0), DeletionGracePeriodSeconds:(*int64)(0xc00107ae78), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-990"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0049c42e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0049c4300)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0049c4318), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0049c4330)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d97f9169-7819-4809-9786-fb681044cd71", StorageClassName:(*string)(0xc004701a20), VolumeMode:(*v1.PersistentVolumeMode)(0xc004701a30), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:06:39.677: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k7pt2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-990", SelfLink:"", UID:"d97f9169-7819-4809-9786-fb681044cd71", ResourceVersion:"95081", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502778, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc0049c4360), DeletionGracePeriodSeconds:(*int64)(0xc00107af48), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-990"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0049c4378), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0049c4390)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0049c43a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0049c43c0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d97f9169-7819-4809-9786-fb681044cd71", StorageClassName:(*string)(0xc004701a70), VolumeMode:(*v1.PersistentVolumeMode)(0xc004701a80), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-brd48 Jun 11 00:06:39.677: INFO: Deleting pod "pvc-volume-tester-brd48" in namespace "csi-mock-volumes-990" STEP: Deleting claim pvc-k7pt2 STEP: Deleting storageclass csi-mock-volumes-990-sctnv6k STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-990 STEP: Waiting for namespaces [csi-mock-volumes-990] to vanish STEP: uninstalling csi mock driver Jun 11 00:06:45.709: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-990-8057/csi-attacher Jun 11 00:06:45.713: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-990 Jun 11 00:06:45.716: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-990 Jun 11 00:06:45.721: INFO: deleting *v1.Role: csi-mock-volumes-990-8057/external-attacher-cfg-csi-mock-volumes-990 Jun 11 00:06:45.724: INFO: deleting *v1.RoleBinding: csi-mock-volumes-990-8057/csi-attacher-role-cfg Jun 11 00:06:45.728: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-990-8057/csi-provisioner Jun 11 00:06:45.733: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-990 Jun 11 00:06:45.736: 
INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-990 Jun 11 00:06:45.740: INFO: deleting *v1.Role: csi-mock-volumes-990-8057/external-provisioner-cfg-csi-mock-volumes-990 Jun 11 00:06:45.744: INFO: deleting *v1.RoleBinding: csi-mock-volumes-990-8057/csi-provisioner-role-cfg Jun 11 00:06:45.748: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-990-8057/csi-resizer Jun 11 00:06:45.751: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-990 Jun 11 00:06:45.755: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-990 Jun 11 00:06:45.758: INFO: deleting *v1.Role: csi-mock-volumes-990-8057/external-resizer-cfg-csi-mock-volumes-990 Jun 11 00:06:45.762: INFO: deleting *v1.RoleBinding: csi-mock-volumes-990-8057/csi-resizer-role-cfg Jun 11 00:06:45.767: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-990-8057/csi-snapshotter Jun 11 00:06:45.771: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-990 Jun 11 00:06:45.776: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-990 Jun 11 00:06:45.780: INFO: deleting *v1.Role: csi-mock-volumes-990-8057/external-snapshotter-leaderelection-csi-mock-volumes-990 Jun 11 00:06:45.784: INFO: deleting *v1.RoleBinding: csi-mock-volumes-990-8057/external-snapshotter-leaderelection Jun 11 00:06:45.787: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-990-8057/csi-mock Jun 11 00:06:45.791: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-990 Jun 11 00:06:45.795: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-990 Jun 11 00:06:45.798: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-990 Jun 11 00:06:45.802: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-990 Jun 11 00:06:45.806: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-990 Jun 11 00:06:45.809: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-990 Jun 11 00:06:45.812: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-990 Jun 11 00:06:45.816: INFO: deleting *v1.StatefulSet: csi-mock-volumes-990-8057/csi-mockplugin Jun 11 00:06:45.820: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-990 STEP: deleting the driver namespace: csi-mock-volumes-990-8057 STEP: Waiting for namespaces [csi-mock-volumes-990-8057] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:29.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:80.908 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1022 exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":7,"skipped":210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:07:16.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-aeb16e0e-223b-44c2-af55-d4dd0f95c9c8" Jun 11 00:07:20.893: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-aeb16e0e-223b-44c2-af55-d4dd0f95c9c8 && dd if=/dev/zero of=/tmp/local-volume-test-aeb16e0e-223b-44c2-af55-d4dd0f95c9c8/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-aeb16e0e-223b-44c2-af55-d4dd0f95c9c8/file] Namespace:persistent-local-volumes-test-5349 PodName:hostexec-node2-h4nbx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:20.893: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:07:21.115: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-aeb16e0e-223b-44c2-af55-d4dd0f95c9c8/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5349 PodName:hostexec-node2-h4nbx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:21.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:07:21.228: INFO: Creating a PV followed by a PVC Jun 11 00:07:21.236: INFO: Waiting for PV local-pv55v4t to bind to PVC pvc-dr2sj Jun 11 00:07:21.236: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dr2sj] to have phase Bound Jun 11 00:07:21.238: INFO: PersistentVolumeClaim pvc-dr2sj found but phase is Pending instead of Bound. Jun 11 00:07:23.242: INFO: PersistentVolumeClaim pvc-dr2sj found but phase is Pending instead of Bound. Jun 11 00:07:25.247: INFO: PersistentVolumeClaim pvc-dr2sj found but phase is Pending instead of Bound. 
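The block volume type above is backed by a loop device: the hostexec pod creates a 20 MiB file with dd (4096-byte blocks × 5120), attaches it with losetup -f, and later recovers the device name by grepping the losetup listing. A standalone sketch of those host-side steps, run directly with os/exec instead of through nsenter and the hostexec pod (the path is a placeholder and root privileges are assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	dir := "/tmp/local-volume-test-example" // placeholder path

	// Create the backing file (4096 * 5120 bytes = 20 MiB) and attach it to
	// the first free loop device, as the hostexec pod does in the log.
	setup := fmt.Sprintf(
		"mkdir -p %[1]s && dd if=/dev/zero of=%[1]s/file bs=4096 count=5120 && losetup -f %[1]s/file", dir)
	if out, err := exec.Command("sh", "-c", setup).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("setup failed: %v\n%s", err, out))
	}

	// Recover the loop device name the same way the test does: grep the
	// losetup listing for the backing file and keep the first column.
	find := fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir)
	out, err := exec.Command("sh", "-c", find).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("loop device:", strings.TrimSpace(string(out)))
}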
Jun 11 00:07:27.254: INFO: PersistentVolumeClaim pvc-dr2sj found and phase=Bound (6.018775688s) Jun 11 00:07:27.254: INFO: Waiting up to 3m0s for PersistentVolume local-pv55v4t to have phase Bound Jun 11 00:07:27.258: INFO: PersistentVolume local-pv55v4t found and phase=Bound (3.133668ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:07:31.286: INFO: pod "pod-4f1e7246-858d-4087-90d1-078daa46e0d9" created on Node "node2" STEP: Writing in pod1 Jun 11 00:07:31.286: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5349 PodName:pod-4f1e7246-858d-4087-90d1-078daa46e0d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:07:31.286: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:07:31.367: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000161 seconds, 109.2KB/s", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 11 00:07:31.367: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-5349 PodName:pod-4f1e7246-858d-4087-90d1-078daa46e0d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:07:31.367: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:07:31.446: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod1 Jun 11 00:07:31.446: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5349 PodName:pod-4f1e7246-858d-4087-90d1-078daa46e0d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:07:31.446: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:07:31.558: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000039 seconds, 275.4KB/s", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting 
pod pod-4f1e7246-858d-4087-90d1-078daa46e0d9 in namespace persistent-local-volumes-test-5349 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:07:31.563: INFO: Deleting PersistentVolumeClaim "pvc-dr2sj" Jun 11 00:07:31.567: INFO: Deleting PersistentVolume "local-pv55v4t" Jun 11 00:07:31.571: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-aeb16e0e-223b-44c2-af55-d4dd0f95c9c8/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5349 PodName:hostexec-node2-h4nbx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:31.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-aeb16e0e-223b-44c2-af55-d4dd0f95c9c8/file Jun 11 00:07:31.680: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5349 PodName:hostexec-node2-h4nbx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:31.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-aeb16e0e-223b-44c2-af55-d4dd0f95c9c8 Jun 11 00:07:31.765: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-aeb16e0e-223b-44c2-af55-d4dd0f95c9c8] Namespace:persistent-local-volumes-test-5349 PodName:hostexec-node2-h4nbx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:31.765: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:31.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5349" for this suite. 
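Cleanup for that case runs in a fixed order: delete the claim, delete the pre-created PV, detach the loop device, remove the test directory. The API half of that sequence, sketched with client-go (error handling trimmed; not the framework's own cleanup helper):

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupLocalVolume mirrors the "Cleaning up PVC and PV" step shown in the
// log: the claim is removed first, then the pre-created PersistentVolume.
func cleanupLocalVolume(ctx context.Context, cs kubernetes.Interface, ns, pvcName, pvName string) error {
	if err := cs.CoreV1().PersistentVolumeClaims(ns).Delete(ctx, pvcName, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().PersistentVolumes().Delete(ctx, pvName, metav1.DeleteOptions{})
}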
• [SLOW TEST:15.026 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":7,"skipped":171,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:37.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets W0611 00:02:37.814721 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 11 00:02:37.814: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 11 00:02:37.823: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440 STEP: Creating secret with name s-test-opt-create-2471953e-fabe-4d69-85cd-730daafb9fda STEP: Creating the pod [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:37.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-540" for this suite. 
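The Secrets case above creates a secret and then a pod whose volume projects a key the secret does not contain, with optional left false, so the pod is expected never to start and the spec runs for the full five-minute wait. One way to declare such a volume with the core/v1 types, with made-up secret, key and image names rather than the test's:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nonOptionalSecretPod() *v1.Pod {
	optional := false // non-optional: a missing key keeps the volume, and so the pod, from starting
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-missing-key-pod"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:         "app",
				Image:        "registry.k8s.io/pause:3.6", // placeholder image
				VolumeMounts: []v1.VolumeMount{{Name: "secret-vol", MountPath: "/etc/secret"}},
			}},
			Volumes: []v1.Volume{{
				Name: "secret-vol",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{
						SecretName: "s-test-opt-create-example",
						// The secret exists but has no key named "missing",
						// so the kubelet cannot populate the volume.
						Items:    []v1.KeyToPath{{Key: "missing", Path: "data"}},
						Optional: &optional,
					},
				},
			}},
		},
	}
}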
• [SLOW TEST:300.081 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440 ------------------------------ {"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":1,"skipped":4,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:07:37.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:07:37.995: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:37.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2737" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:07:29.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:07:33.982: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3ca49b9b-c673-40de-b899-205c944853f3] Namespace:persistent-local-volumes-test-7695 PodName:hostexec-node1-9t4nb ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:33.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:07:34.166: INFO: Creating a PV followed by a PVC Jun 11 00:07:34.173: INFO: Waiting for PV local-pvjl6n7 to bind to PVC pvc-dntld Jun 11 00:07:34.173: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dntld] to have phase Bound Jun 11 00:07:34.177: INFO: PersistentVolumeClaim pvc-dntld found but phase is Pending instead of Bound. Jun 11 00:07:36.180: INFO: PersistentVolumeClaim pvc-dntld found but phase is Pending instead of Bound. Jun 11 00:07:38.183: INFO: PersistentVolumeClaim pvc-dntld found but phase is Pending instead of Bound. Jun 11 00:07:40.190: INFO: PersistentVolumeClaim pvc-dntld found but phase is Pending instead of Bound. Jun 11 00:07:42.193: INFO: PersistentVolumeClaim pvc-dntld found and phase=Bound (8.019973096s) Jun 11 00:07:42.193: INFO: Waiting up to 3m0s for PersistentVolume local-pvjl6n7 to have phase Bound Jun 11 00:07:42.196: INFO: PersistentVolume local-pvjl6n7 found and phase=Bound (2.894754ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 11 00:07:42.200: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:07:42.202: INFO: Deleting PersistentVolumeClaim "pvc-dntld" Jun 11 00:07:42.205: INFO: Deleting PersistentVolume "local-pvjl6n7" STEP: Removing the test directory Jun 11 00:07:42.210: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3ca49b9b-c673-40de-b899-205c944853f3] Namespace:persistent-local-volumes-test-7695 PodName:hostexec-node1-9t4nb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:42.210: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:42.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7695" for this suite. 
S [SKIPPING] [12.404 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:07:42.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete default persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:692 Jun 11 00:07:42.357: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:42.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-3577" for this suite. 
S [SKIPPING] [0.032 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner Default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:691 should create and delete default persistent volumes [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:692 Only supported for providers [openstack gce aws gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:693 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:07:38.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 11 00:07:38.166: INFO: The status of Pod test-hostpath-type-hnrc5 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:07:40.171: INFO: The status of Pod test-hostpath-type-hnrc5 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:07:42.172: INFO: The status of Pod test-hostpath-type-hnrc5 is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:48.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-1869" for this suite. 
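The HostPathType case above first lets HostPathDirectoryOrCreate create the directory 'adir', then mounts the same path while declaring it a HostPathFile and waits for the resulting type-check error event. A hostPath volume of that mismatched shape can be declared roughly like this (volume name and path are illustrative):

package e2esketch

import v1 "k8s.io/api/core/v1"

// directoryMountedAsFile builds a hostPath volume whose declared type (File)
// will not match the directory that actually exists at the path, so the
// kubelet refuses the mount and an error event is recorded.
func directoryMountedAsFile(path string) v1.Volume {
	hostPathFile := v1.HostPathFile
	return v1.Volume{
		Name: "host-dir-as-file",
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{
				Path: path, // e.g. the 'adir' directory created beforehand
				Type: &hostPathFile,
			},
		},
	}
}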
• [SLOW TEST:10.109 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile","total":-1,"completed":2,"skipped":116,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:02:57.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460 STEP: Creating the pod [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:07:57.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-312" for this suite. • [SLOW TEST:300.057 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":1,"skipped":68,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:56.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-4884 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:06:56.295: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4884-4281/csi-attacher Jun 11 00:06:56.298: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4884 Jun 11 00:06:56.298: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4884 Jun 11 00:06:56.301: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4884 Jun 11 00:06:56.304: INFO: creating *v1.Role: csi-mock-volumes-4884-4281/external-attacher-cfg-csi-mock-volumes-4884 Jun 11 00:06:56.307: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-4884-4281/csi-attacher-role-cfg Jun 11 00:06:56.310: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4884-4281/csi-provisioner Jun 11 00:06:56.313: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4884 Jun 11 00:06:56.313: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4884 Jun 11 00:06:56.316: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4884 Jun 11 00:06:56.319: INFO: creating *v1.Role: csi-mock-volumes-4884-4281/external-provisioner-cfg-csi-mock-volumes-4884 Jun 11 00:06:56.322: INFO: creating *v1.RoleBinding: csi-mock-volumes-4884-4281/csi-provisioner-role-cfg Jun 11 00:06:56.324: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4884-4281/csi-resizer Jun 11 00:06:56.327: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4884 Jun 11 00:06:56.327: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4884 Jun 11 00:06:56.330: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4884 Jun 11 00:06:56.333: INFO: creating *v1.Role: csi-mock-volumes-4884-4281/external-resizer-cfg-csi-mock-volumes-4884 Jun 11 00:06:56.336: INFO: creating *v1.RoleBinding: csi-mock-volumes-4884-4281/csi-resizer-role-cfg Jun 11 00:06:56.339: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4884-4281/csi-snapshotter Jun 11 00:06:56.341: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4884 Jun 11 00:06:56.341: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4884 Jun 11 00:06:56.344: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4884 Jun 11 00:06:56.347: INFO: creating *v1.Role: csi-mock-volumes-4884-4281/external-snapshotter-leaderelection-csi-mock-volumes-4884 Jun 11 00:06:56.350: INFO: creating *v1.RoleBinding: csi-mock-volumes-4884-4281/external-snapshotter-leaderelection Jun 11 00:06:56.353: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4884-4281/csi-mock Jun 11 00:06:56.355: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4884 Jun 11 00:06:56.358: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4884 Jun 11 00:06:56.360: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4884 Jun 11 00:06:56.362: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4884 Jun 11 00:06:56.365: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4884 Jun 11 00:06:56.367: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4884 Jun 11 00:06:56.370: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4884 Jun 11 00:06:56.373: INFO: creating *v1.StatefulSet: csi-mock-volumes-4884-4281/csi-mockplugin Jun 11 00:06:56.377: INFO: creating *v1.StatefulSet: csi-mock-volumes-4884-4281/csi-mockplugin-attacher Jun 11 00:06:56.380: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4884 to register on node node2 STEP: Creating pod Jun 11 00:07:05.899: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:07:05.903: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-tzgpq] to have phase Bound Jun 11 00:07:05.905: INFO: PersistentVolumeClaim pvc-tzgpq found but phase is Pending instead of Bound. 
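As in the earlier CSI mock specs, the workload here is a single pod that mounts the freshly provisioned claim; after deleting it, the spec inspects the mock driver's log to confirm that pod information was not passed to the driver, which is what the spec name ("should not be passed when CSIDriver does not exist") asserts. A minimal pod-with-PVC sketch using the core/v1 types (claim and image names are placeholders):

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pvcVolumeTester returns a pod that mounts an existing claim, in the spirit
// of the pvc-volume-tester-* pods created by these specs.
func pvcVolumeTester(claimName string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-volume-tester-"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:         "volume-tester",
				Image:        "registry.k8s.io/pause:3.6", // placeholder image
				VolumeMounts: []v1.VolumeMount{{Name: "my-volume", MountPath: "/mnt/test"}},
			}},
			Volumes: []v1.Volume{{
				Name: "my-volume",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{
						ClaimName: claimName,
					},
				},
			}},
		},
	}
}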
Jun 11 00:07:07.908: INFO: PersistentVolumeClaim pvc-tzgpq found and phase=Bound (2.005485233s) STEP: Deleting the previously created pod Jun 11 00:07:19.935: INFO: Deleting pod "pvc-volume-tester-r62md" in namespace "csi-mock-volumes-4884" Jun 11 00:07:19.941: INFO: Wait up to 5m0s for pod "pvc-volume-tester-r62md" to be fully deleted STEP: Checking CSI driver logs Jun 11 00:07:27.970: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/72fd6a25-ba97-4235-9eca-70fd25ec3cca/volumes/kubernetes.io~csi/pvc-dc3a4b4d-c845-4643-a927-ca27c7758b30/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-r62md Jun 11 00:07:27.970: INFO: Deleting pod "pvc-volume-tester-r62md" in namespace "csi-mock-volumes-4884" STEP: Deleting claim pvc-tzgpq Jun 11 00:07:27.979: INFO: Waiting up to 2m0s for PersistentVolume pvc-dc3a4b4d-c845-4643-a927-ca27c7758b30 to get deleted Jun 11 00:07:27.981: INFO: PersistentVolume pvc-dc3a4b4d-c845-4643-a927-ca27c7758b30 found and phase=Bound (2.161878ms) Jun 11 00:07:29.988: INFO: PersistentVolume pvc-dc3a4b4d-c845-4643-a927-ca27c7758b30 was removed STEP: Deleting storageclass csi-mock-volumes-4884-sclgrj6 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4884 STEP: Waiting for namespaces [csi-mock-volumes-4884] to vanish STEP: uninstalling csi mock driver Jun 11 00:07:36.000: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4884-4281/csi-attacher Jun 11 00:07:36.005: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4884 Jun 11 00:07:36.009: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4884 Jun 11 00:07:36.012: INFO: deleting *v1.Role: csi-mock-volumes-4884-4281/external-attacher-cfg-csi-mock-volumes-4884 Jun 11 00:07:36.016: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4884-4281/csi-attacher-role-cfg Jun 11 00:07:36.020: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4884-4281/csi-provisioner Jun 11 00:07:36.024: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4884 Jun 11 00:07:36.028: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4884 Jun 11 00:07:36.031: INFO: deleting *v1.Role: csi-mock-volumes-4884-4281/external-provisioner-cfg-csi-mock-volumes-4884 Jun 11 00:07:36.035: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4884-4281/csi-provisioner-role-cfg Jun 11 00:07:36.039: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4884-4281/csi-resizer Jun 11 00:07:36.042: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4884 Jun 11 00:07:36.045: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4884 Jun 11 00:07:36.048: INFO: deleting *v1.Role: csi-mock-volumes-4884-4281/external-resizer-cfg-csi-mock-volumes-4884 Jun 11 00:07:36.052: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4884-4281/csi-resizer-role-cfg Jun 11 00:07:36.056: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4884-4281/csi-snapshotter Jun 11 00:07:36.060: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4884 Jun 11 00:07:36.063: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4884 Jun 11 00:07:36.067: INFO: deleting *v1.Role: csi-mock-volumes-4884-4281/external-snapshotter-leaderelection-csi-mock-volumes-4884 Jun 11 
00:07:36.070: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4884-4281/external-snapshotter-leaderelection Jun 11 00:07:36.075: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4884-4281/csi-mock Jun 11 00:07:36.078: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4884 Jun 11 00:07:36.082: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4884 Jun 11 00:07:36.085: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4884 Jun 11 00:07:36.088: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4884 Jun 11 00:07:36.091: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4884 Jun 11 00:07:36.096: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4884 Jun 11 00:07:36.099: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4884 Jun 11 00:07:36.103: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4884-4281/csi-mockplugin Jun 11 00:07:36.107: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4884-4281/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4884-4281 STEP: Waiting for namespaces [csi-mock-volumes-4884-4281] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:04.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:67.885 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":10,"skipped":318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:04.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 11 00:08:04.238: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:04.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-9883" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:07:42.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:07:44.498: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f23f2c15-90dd-4cd5-adf6-8d189aeaf218] Namespace:persistent-local-volumes-test-1105 PodName:hostexec-node2-zjgrn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:44.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:07:44.588: INFO: Creating a PV followed by a PVC Jun 11 00:07:44.596: INFO: Waiting for PV local-pvk8gz5 to bind to PVC pvc-jbdv6 Jun 11 00:07:44.596: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jbdv6] to have phase Bound Jun 11 00:07:44.598: INFO: PersistentVolumeClaim pvc-jbdv6 found but phase is Pending instead of Bound. Jun 11 00:07:46.601: INFO: PersistentVolumeClaim pvc-jbdv6 found but phase is Pending instead of Bound. Jun 11 00:07:48.605: INFO: PersistentVolumeClaim pvc-jbdv6 found but phase is Pending instead of Bound. Jun 11 00:07:50.610: INFO: PersistentVolumeClaim pvc-jbdv6 found but phase is Pending instead of Bound. Jun 11 00:07:52.615: INFO: PersistentVolumeClaim pvc-jbdv6 found but phase is Pending instead of Bound. Jun 11 00:07:54.620: INFO: PersistentVolumeClaim pvc-jbdv6 found but phase is Pending instead of Bound. Jun 11 00:07:56.624: INFO: PersistentVolumeClaim pvc-jbdv6 found but phase is Pending instead of Bound. 
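The "Initializing test volumes" step above creates the backing directory on the node's own filesystem by nsenter'ing from a privileged hostexec pod, and the suite then creates a local PersistentVolume pointing at that path before the PVC bind wait. A rough equivalent of both pieces is sketched below; the first command is the one logged, while the PV manifest is only illustrative, since the exact spec the suite builds (capacity, storageClassName, reclaim policy) is not shown in the log.

# Create the backing directory on node2 through the hostexec pod (command as logged).
kubectl -n persistent-local-volumes-test-1105 exec hostexec-node2-zjgrn -c agnhost-container -- \
  nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c 'mkdir -p /tmp/local-volume-test-f23f2c15-90dd-4cd5-adf6-8d189aeaf218'

# Illustrative local PV bound to that path; the fields below are assumptions, not the suite's exact object.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/local-volume-test-f23f2c15-90dd-4cd5-adf6-8d189aeaf218
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node2"]
EOF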
Jun 11 00:07:58.628: INFO: PersistentVolumeClaim pvc-jbdv6 found and phase=Bound (14.032120868s) Jun 11 00:07:58.628: INFO: Waiting up to 3m0s for PersistentVolume local-pvk8gz5 to have phase Bound Jun 11 00:07:58.630: INFO: PersistentVolume local-pvk8gz5 found and phase=Bound (2.157253ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 11 00:08:04.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1105 exec pod-48e556ef-2138-4da1-b606-db19d85fc65b --namespace=persistent-local-volumes-test-1105 -- stat -c %g /mnt/volume1' Jun 11 00:08:05.150: INFO: stderr: "" Jun 11 00:08:05.150: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-48e556ef-2138-4da1-b606-db19d85fc65b in namespace persistent-local-volumes-test-1105 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:08:05.157: INFO: Deleting PersistentVolumeClaim "pvc-jbdv6" Jun 11 00:08:05.161: INFO: Deleting PersistentVolume "local-pvk8gz5" STEP: Removing the test directory Jun 11 00:08:05.164: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f23f2c15-90dd-4cd5-adf6-8d189aeaf218] Namespace:persistent-local-volumes-test-1105 PodName:hostexec-node2-zjgrn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:08:05.164: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:05.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1105" for this suite. 
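The fsGroup assertion here is just a group-ownership check on the mount path inside the pod: the suite shells in with kubectl exec and compares the numeric GID printed by stat against the fsGroup it set (1234). The equivalent one-liner, with the pod and namespace names from the log:

# Verify the volume's group ownership matches the pod's fsGroup; expected output is "1234".
kubectl --kubeconfig=/root/.kube/config -n persistent-local-volumes-test-1105 \
  exec pod-48e556ef-2138-4da1-b606-db19d85fc65b -- stat -c %g /mnt/volume1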
• [SLOW TEST:22.871 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":8,"skipped":278,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:04.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 11 00:08:04.360: INFO: The status of Pod test-hostpath-type-qbsdt is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:08:06.363: INFO: The status of Pod test-hostpath-type-qbsdt is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:08:08.363: INFO: The status of Pod test-hostpath-type-qbsdt is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 11 00:08:08.366: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-4907 PodName:test-hostpath-type-qbsdt ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:08:08.366: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:10.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-4907" for this suite. 
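The HostPathType fixture above prepares a real character device inside the test pod with mknod, then the spec asserts that a hostPath volume declaring type CharDevice but pointing at a path that does not exist is rejected with a HostPathType error event. The device-creation half as run by the suite is below; the `test -c` check is an added illustration, not taken from the log.

# Create and verify a character device (major 89, minor 1) inside the test pod, as in the logged mknod step.
kubectl -n host-path-type-char-dev-4907 exec test-hostpath-type-qbsdt -c host-path-testing -- \
  sh -c 'mknod /mnt/test/achardev c 89 1 && test -c /mnt/test/achardev && echo char-device-ok'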
• [SLOW TEST:6.178 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev","total":-1,"completed":11,"skipped":401,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:07:48.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-0f2e0250-2eb0-4bc7-b82a-b2992917a8b0" Jun 11 00:07:52.303: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0f2e0250-2eb0-4bc7-b82a-b2992917a8b0 && dd if=/dev/zero of=/tmp/local-volume-test-0f2e0250-2eb0-4bc7-b82a-b2992917a8b0/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-0f2e0250-2eb0-4bc7-b82a-b2992917a8b0/file] Namespace:persistent-local-volumes-test-5258 PodName:hostexec-node2-kt25k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:52.304: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:07:52.424: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0f2e0250-2eb0-4bc7-b82a-b2992917a8b0/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5258 PodName:hostexec-node2-kt25k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:07:52.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:07:52.512: INFO: Creating a PV followed by a PVC Jun 11 00:07:52.518: INFO: Waiting for PV local-pvctxsj to bind to PVC pvc-97x8n Jun 11 00:07:52.518: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-97x8n] to have phase Bound Jun 11 00:07:52.520: INFO: PersistentVolumeClaim pvc-97x8n found but phase is Pending instead of Bound. Jun 11 00:07:54.525: INFO: PersistentVolumeClaim pvc-97x8n found but phase is Pending instead of Bound. Jun 11 00:07:56.529: INFO: PersistentVolumeClaim pvc-97x8n found but phase is Pending instead of Bound. 
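The "blockfswithoutformat" volume type backs the local volume with a loop device: a file is zero-filled with dd, attached with losetup -f, and the assigned /dev/loopN is looked up afterwards so it can be detached at teardown. Condensed from the ExecWithOptions entries above, runnable on the node itself (or through the hostexec pod, as the suite does):

# Create a ~20 MiB backing file and attach it to a free loop device (commands as logged).
DIR=/tmp/local-volume-test-0f2e0250-2eb0-4bc7-b82a-b2992917a8b0
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
losetup -f "$DIR/file"

# Recover the loop device that was assigned to the file, e.g. /dev/loop0 as seen in the teardown step later.
E2E_LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
echo "$E2E_LOOP_DEV"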
Jun 11 00:07:58.533: INFO: PersistentVolumeClaim pvc-97x8n found and phase=Bound (6.015396221s) Jun 11 00:07:58.533: INFO: Waiting up to 3m0s for PersistentVolume local-pvctxsj to have phase Bound Jun 11 00:07:58.536: INFO: PersistentVolume local-pvctxsj found and phase=Bound (2.184447ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:08:04.567: INFO: pod "pod-4ed17e1c-d553-48a1-b0be-8ab6322f21da" created on Node "node2" STEP: Writing in pod1 Jun 11 00:08:04.567: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5258 PodName:pod-4ed17e1c-d553-48a1-b0be-8ab6322f21da ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:08:04.567: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:08:04.671: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:08:04.671: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5258 PodName:pod-4ed17e1c-d553-48a1-b0be-8ab6322f21da ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:08:04.671: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:08:04.810: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-4ed17e1c-d553-48a1-b0be-8ab6322f21da in namespace persistent-local-volumes-test-5258 STEP: Creating pod2 STEP: Creating a pod Jun 11 00:08:12.839: INFO: pod "pod-28c6c2cf-946f-4002-b711-2d7b12ab4de3" created on Node "node2" STEP: Reading in pod2 Jun 11 00:08:12.839: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5258 PodName:pod-28c6c2cf-946f-4002-b711-2d7b12ab4de3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:08:12.839: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:08:12.927: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-28c6c2cf-946f-4002-b711-2d7b12ab4de3 in namespace persistent-local-volumes-test-5258 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:08:12.933: INFO: Deleting PersistentVolumeClaim "pvc-97x8n" Jun 11 00:08:12.936: INFO: Deleting PersistentVolume "local-pvctxsj" Jun 11 00:08:12.940: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0f2e0250-2eb0-4bc7-b82a-b2992917a8b0/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5258 PodName:hostexec-node2-kt25k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:08:12.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-0f2e0250-2eb0-4bc7-b82a-b2992917a8b0/file Jun 11 00:08:13.029: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5258 PodName:hostexec-node2-kt25k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:08:13.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-0f2e0250-2eb0-4bc7-b82a-b2992917a8b0 Jun 11 00:08:13.114: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0f2e0250-2eb0-4bc7-b82a-b2992917a8b0] Namespace:persistent-local-volumes-test-5258 PodName:hostexec-node2-kt25k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:08:13.114: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:13.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5258" for this suite. • [SLOW TEST:25.003 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:13.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 11 00:08:13.509: INFO: Waiting up to 5m0s for pod "pod-54c6c09b-faa6-47a9-8917-6b8e299c5196" in namespace "emptydir-1862" to be "Succeeded or Failed" Jun 11 00:08:13.511: INFO: Pod "pod-54c6c09b-faa6-47a9-8917-6b8e299c5196": Phase="Pending", Reason="", readiness=false. Elapsed: 2.577038ms Jun 11 00:08:15.515: INFO: Pod "pod-54c6c09b-faa6-47a9-8917-6b8e299c5196": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006847124s Jun 11 00:08:17.522: INFO: Pod "pod-54c6c09b-faa6-47a9-8917-6b8e299c5196": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013192546s Jun 11 00:08:19.529: INFO: Pod "pod-54c6c09b-faa6-47a9-8917-6b8e299c5196": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020405496s STEP: Saw pod success Jun 11 00:08:19.529: INFO: Pod "pod-54c6c09b-faa6-47a9-8917-6b8e299c5196" satisfied condition "Succeeded or Failed" Jun 11 00:08:19.531: INFO: Trying to get logs from node node2 pod pod-54c6c09b-faa6-47a9-8917-6b8e299c5196 container test-container: STEP: delete the pod Jun 11 00:08:19.551: INFO: Waiting for pod pod-54c6c09b-faa6-47a9-8917-6b8e299c5196 to disappear Jun 11 00:08:19.553: INFO: Pod pod-54c6c09b-faa6-47a9-8917-6b8e299c5196 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:19.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1862" for this suite. • [SLOW TEST:6.085 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":4,"skipped":232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:19.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Jun 11 00:08:21.743: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1bfccd3c-5cfb-441a-94dc-c44756308f85] Namespace:persistent-local-volumes-test-5182 PodName:hostexec-node1-dd6jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:08:21.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:08:21.835: INFO: Creating a PV followed by a PVC Jun 11 00:08:21.845: INFO: Waiting for PV local-pvjlvsx to bind to PVC pvc-t4rn8 Jun 11 00:08:21.845: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-t4rn8] to have phase Bound Jun 11 00:08:21.847: INFO: PersistentVolumeClaim pvc-t4rn8 found but phase is Pending instead 
of Bound. Jun 11 00:08:23.852: INFO: PersistentVolumeClaim pvc-t4rn8 found but phase is Pending instead of Bound. Jun 11 00:08:25.857: INFO: PersistentVolumeClaim pvc-t4rn8 found but phase is Pending instead of Bound. Jun 11 00:08:27.860: INFO: PersistentVolumeClaim pvc-t4rn8 found and phase=Bound (6.015124525s) Jun 11 00:08:27.860: INFO: Waiting up to 3m0s for PersistentVolume local-pvjlvsx to have phase Bound Jun 11 00:08:27.863: INFO: PersistentVolume local-pvjlvsx found and phase=Bound (2.338115ms) [It] should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 STEP: local-volume-type: dir Jun 11 00:08:27.880: INFO: Waiting up to 5m0s for pod "pod-033d78ab-774b-4023-bae9-b959088a482f" in namespace "persistent-local-volumes-test-5182" to be "Unschedulable" Jun 11 00:08:27.883: INFO: Pod "pod-033d78ab-774b-4023-bae9-b959088a482f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239775ms Jun 11 00:08:29.887: INFO: Pod "pod-033d78ab-774b-4023-bae9-b959088a482f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006401759s Jun 11 00:08:29.887: INFO: Pod "pod-033d78ab-774b-4023-bae9-b959088a482f" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Jun 11 00:08:29.887: INFO: Deleting PersistentVolumeClaim "pvc-t4rn8" Jun 11 00:08:29.891: INFO: Deleting PersistentVolume "local-pvjlvsx" STEP: Removing the test directory Jun 11 00:08:29.895: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1bfccd3c-5cfb-441a-94dc-c44756308f85] Namespace:persistent-local-volumes-test-5182 PodName:hostexec-node1-dd6jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:08:29.895: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:30.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5182" for this suite. 
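Because the local PV's node affinity pins it to one node while the test pod's node selector points at a different one, the scheduler can never place the pod, and the spec only waits for it to be reported Unschedulable. That state can be read straight off the pod status; a sketch (pod and namespace names are from the log, and the namespace is destroyed right after the test, so this is illustrative only):

# Inspect why the pod cannot be scheduled: PodScheduled condition plus the FailedScheduling event.
kubectl -n persistent-local-volumes-test-5182 get pod pod-033d78ab-774b-4023-bae9-b959088a482f \
  -o jsonpath='{.status.conditions[?(@.type=="PodScheduled")]}'
kubectl -n persistent-local-volumes-test-5182 describe pod pod-033d78ab-774b-4023-bae9-b959088a482f | grep -A3 Events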
• [SLOW TEST:10.334 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":5,"skipped":287,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:30.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Jun 11 00:08:30.097: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:30.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-954" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Jun 11 00:08:30.107: INFO: AfterEach: Cleaning up test resources Jun 11 00:08:30.107: INFO: pvc is nil Jun 11 00:08:30.107: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:30.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Jun 11 00:08:30.178: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:30.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7991" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Jun 11 00:08:30.188: INFO: AfterEach: Cleaning up test resources Jun 11 00:08:30.188: INFO: pvc is nil Jun 11 00:08:30.188: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:30.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e8e68bca-a62c-487f-95ad-9e8a8e2bb7b0" Jun 11 00:08:34.315: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e8e68bca-a62c-487f-95ad-9e8a8e2bb7b0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e8e68bca-a62c-487f-95ad-9e8a8e2bb7b0" "/tmp/local-volume-test-e8e68bca-a62c-487f-95ad-9e8a8e2bb7b0"] Namespace:persistent-local-volumes-test-2196 PodName:hostexec-node2-g7mbq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:08:34.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:08:34.414: INFO: Creating a PV followed by a PVC Jun 11 00:08:34.421: INFO: Waiting for PV local-pvl4kq7 to bind to PVC pvc-2gxb6 Jun 11 00:08:34.421: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2gxb6] to have phase Bound Jun 11 00:08:34.423: INFO: PersistentVolumeClaim pvc-2gxb6 found but phase is Pending instead of Bound. Jun 11 00:08:36.426: INFO: PersistentVolumeClaim pvc-2gxb6 found but phase is Pending instead of Bound. Jun 11 00:08:38.430: INFO: PersistentVolumeClaim pvc-2gxb6 found but phase is Pending instead of Bound. Jun 11 00:08:40.436: INFO: PersistentVolumeClaim pvc-2gxb6 found but phase is Pending instead of Bound. 
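The tmpfs volume type mounts a small in-memory filesystem on the node as the backing store and unmounts and removes it at teardown. The mount/umount pair from the ExecWithOptions entries, runnable directly on the node:

# Set up, and later tear down, the 10 MiB tmpfs used as the local volume (commands as logged).
MNT=/tmp/local-volume-test-e8e68bca-a62c-487f-95ad-9e8a8e2bb7b0
mkdir -p "$MNT"
mount -t tmpfs -o size=10m "tmpfs-$MNT" "$MNT"
# ... test runs against the mount ...
umount "$MNT"
rm -r "$MNT"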
Jun 11 00:08:42.441: INFO: PersistentVolumeClaim pvc-2gxb6 found and phase=Bound (8.020536259s) Jun 11 00:08:42.441: INFO: Waiting up to 3m0s for PersistentVolume local-pvl4kq7 to have phase Bound Jun 11 00:08:42.443: INFO: PersistentVolume local-pvl4kq7 found and phase=Bound (2.069672ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:08:46.471: INFO: pod "pod-2f3854fb-36d1-4fd3-a52a-0401b7bcfd84" created on Node "node2" STEP: Writing in pod1 Jun 11 00:08:46.471: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2196 PodName:pod-2f3854fb-36d1-4fd3-a52a-0401b7bcfd84 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:08:46.471: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:08:46.546: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 11 00:08:46.546: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2196 PodName:pod-2f3854fb-36d1-4fd3-a52a-0401b7bcfd84 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:08:46.546: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:08:46.626: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 11 00:08:46.626: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e8e68bca-a62c-487f-95ad-9e8a8e2bb7b0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2196 PodName:pod-2f3854fb-36d1-4fd3-a52a-0401b7bcfd84 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:08:46.626: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:08:46.704: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e8e68bca-a62c-487f-95ad-9e8a8e2bb7b0 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-2f3854fb-36d1-4fd3-a52a-0401b7bcfd84 in namespace persistent-local-volumes-test-2196 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:08:46.709: INFO: Deleting PersistentVolumeClaim "pvc-2gxb6" Jun 11 00:08:46.714: INFO: Deleting PersistentVolume "local-pvl4kq7" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e8e68bca-a62c-487f-95ad-9e8a8e2bb7b0" Jun 11 00:08:46.718: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e8e68bca-a62c-487f-95ad-9e8a8e2bb7b0"] Namespace:persistent-local-volumes-test-2196 PodName:hostexec-node2-g7mbq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 11 00:08:46.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:08:46.838: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e8e68bca-a62c-487f-95ad-9e8a8e2bb7b0] Namespace:persistent-local-volumes-test-2196 PodName:hostexec-node2-g7mbq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:08:46.838: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:46.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2196" for this suite. • [SLOW TEST:16.677 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":357,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:05.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should not be plumbed down when CSIDriver is not deployed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 STEP: Building a driver namespace object, basename csi-mock-volumes-6106 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:08:05.482: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6106-1761/csi-attacher Jun 11 00:08:05.485: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6106 Jun 11 00:08:05.485: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6106 Jun 11 00:08:05.487: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6106 Jun 11 00:08:05.490: INFO: creating *v1.Role: csi-mock-volumes-6106-1761/external-attacher-cfg-csi-mock-volumes-6106 Jun 11 00:08:05.492: INFO: creating *v1.RoleBinding: csi-mock-volumes-6106-1761/csi-attacher-role-cfg Jun 11 00:08:05.494: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6106-1761/csi-provisioner Jun 11 00:08:05.497: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6106 Jun 11 00:08:05.497: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6106 Jun 11 00:08:05.499: INFO: creating 
*v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6106 Jun 11 00:08:05.502: INFO: creating *v1.Role: csi-mock-volumes-6106-1761/external-provisioner-cfg-csi-mock-volumes-6106 Jun 11 00:08:05.505: INFO: creating *v1.RoleBinding: csi-mock-volumes-6106-1761/csi-provisioner-role-cfg Jun 11 00:08:05.508: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6106-1761/csi-resizer Jun 11 00:08:05.511: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6106 Jun 11 00:08:05.511: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6106 Jun 11 00:08:05.513: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6106 Jun 11 00:08:05.516: INFO: creating *v1.Role: csi-mock-volumes-6106-1761/external-resizer-cfg-csi-mock-volumes-6106 Jun 11 00:08:05.518: INFO: creating *v1.RoleBinding: csi-mock-volumes-6106-1761/csi-resizer-role-cfg Jun 11 00:08:05.521: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6106-1761/csi-snapshotter Jun 11 00:08:05.524: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6106 Jun 11 00:08:05.525: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6106 Jun 11 00:08:05.528: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6106 Jun 11 00:08:05.531: INFO: creating *v1.Role: csi-mock-volumes-6106-1761/external-snapshotter-leaderelection-csi-mock-volumes-6106 Jun 11 00:08:05.533: INFO: creating *v1.RoleBinding: csi-mock-volumes-6106-1761/external-snapshotter-leaderelection Jun 11 00:08:05.536: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6106-1761/csi-mock Jun 11 00:08:05.539: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6106 Jun 11 00:08:05.541: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6106 Jun 11 00:08:05.544: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6106 Jun 11 00:08:05.546: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6106 Jun 11 00:08:05.549: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6106 Jun 11 00:08:05.552: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6106 Jun 11 00:08:05.554: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6106 Jun 11 00:08:05.557: INFO: creating *v1.StatefulSet: csi-mock-volumes-6106-1761/csi-mockplugin Jun 11 00:08:05.561: INFO: creating *v1.StatefulSet: csi-mock-volumes-6106-1761/csi-mockplugin-attacher Jun 11 00:08:05.565: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6106 to register on node node1 STEP: Creating pod Jun 11 00:08:10.576: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:08:10.581: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-z44mm] to have phase Bound Jun 11 00:08:10.583: INFO: PersistentVolumeClaim pvc-z44mm found but phase is Pending instead of Bound. 
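The "waiting for CSIDriver csi-mock-csi-mock-volumes-6106 to register on node node1" step is satisfied once the mock plugin's node service has registered with kubelet. Registration can also be confirmed from the API while the driver is still deployed; a sketch using the standard csidriver and csinode resources, with names taken from the log:

# Check that the mock driver object exists and that node1 lists it among its registered drivers.
kubectl get csidriver csi-mock-csi-mock-volumes-6106
kubectl get csinode node1 -o jsonpath='{.spec.drivers[*].name}'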
Jun 11 00:08:12.586: INFO: PersistentVolumeClaim pvc-z44mm found and phase=Bound (2.005035855s) STEP: Deleting the previously created pod Jun 11 00:08:16.606: INFO: Deleting pod "pvc-volume-tester-csjqt" in namespace "csi-mock-volumes-6106" Jun 11 00:08:16.612: INFO: Wait up to 5m0s for pod "pvc-volume-tester-csjqt" to be fully deleted STEP: Checking CSI driver logs Jun 11 00:08:28.632: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/b8c3cb9c-ef4e-4a4f-975b-b3844d16b4be/volumes/kubernetes.io~csi/pvc-68ada56a-e476-4f67-814c-f5c3c6fc1c94/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-csjqt Jun 11 00:08:28.632: INFO: Deleting pod "pvc-volume-tester-csjqt" in namespace "csi-mock-volumes-6106" STEP: Deleting claim pvc-z44mm Jun 11 00:08:28.640: INFO: Waiting up to 2m0s for PersistentVolume pvc-68ada56a-e476-4f67-814c-f5c3c6fc1c94 to get deleted Jun 11 00:08:28.644: INFO: PersistentVolume pvc-68ada56a-e476-4f67-814c-f5c3c6fc1c94 found and phase=Bound (3.092885ms) Jun 11 00:08:30.648: INFO: PersistentVolume pvc-68ada56a-e476-4f67-814c-f5c3c6fc1c94 was removed STEP: Deleting storageclass csi-mock-volumes-6106-scf9tdf STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6106 STEP: Waiting for namespaces [csi-mock-volumes-6106] to vanish STEP: uninstalling csi mock driver Jun 11 00:08:36.660: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6106-1761/csi-attacher Jun 11 00:08:36.664: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6106 Jun 11 00:08:36.668: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6106 Jun 11 00:08:36.672: INFO: deleting *v1.Role: csi-mock-volumes-6106-1761/external-attacher-cfg-csi-mock-volumes-6106 Jun 11 00:08:36.676: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6106-1761/csi-attacher-role-cfg Jun 11 00:08:36.679: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6106-1761/csi-provisioner Jun 11 00:08:36.683: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6106 Jun 11 00:08:36.686: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6106 Jun 11 00:08:36.690: INFO: deleting *v1.Role: csi-mock-volumes-6106-1761/external-provisioner-cfg-csi-mock-volumes-6106 Jun 11 00:08:36.694: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6106-1761/csi-provisioner-role-cfg Jun 11 00:08:36.698: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6106-1761/csi-resizer Jun 11 00:08:36.701: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6106 Jun 11 00:08:36.705: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6106 Jun 11 00:08:36.708: INFO: deleting *v1.Role: csi-mock-volumes-6106-1761/external-resizer-cfg-csi-mock-volumes-6106 Jun 11 00:08:36.711: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6106-1761/csi-resizer-role-cfg Jun 11 00:08:36.714: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6106-1761/csi-snapshotter Jun 11 00:08:36.717: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6106 Jun 11 00:08:36.720: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6106 Jun 11 00:08:36.724: INFO: deleting *v1.Role: csi-mock-volumes-6106-1761/external-snapshotter-leaderelection-csi-mock-volumes-6106 Jun 11 
00:08:36.727: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6106-1761/external-snapshotter-leaderelection Jun 11 00:08:36.730: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6106-1761/csi-mock Jun 11 00:08:36.733: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6106 Jun 11 00:08:36.737: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6106 Jun 11 00:08:36.740: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6106 Jun 11 00:08:36.743: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6106 Jun 11 00:08:36.746: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6106 Jun 11 00:08:36.749: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6106 Jun 11 00:08:36.753: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6106 Jun 11 00:08:36.756: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6106-1761/csi-mockplugin Jun 11 00:08:36.759: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6106-1761/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-6106-1761 STEP: Waiting for namespaces [csi-mock-volumes-6106-1761] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:48.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:43.360 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1496 token should not be plumbed down when CSIDriver is not deployed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":9,"skipped":329,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:46.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 11 00:08:47.016: INFO: The status of Pod test-hostpath-type-wlcg8 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:08:49.019: INFO: The status of Pod test-hostpath-type-wlcg8 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:08:51.019: INFO: The status of Pod test-hostpath-type-wlcg8 is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 11 00:08:51.022: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-8927 PodName:test-hostpath-type-wlcg8 
ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:08:51.022: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:08:55.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-8927" for this suite. • [SLOW TEST:8.160 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset","total":-1,"completed":7,"skipped":378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:48.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:08:50.842: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80-backend && mount --bind /tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80-backend /tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80-backend && ln -s /tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80-backend /tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80] Namespace:persistent-local-volumes-test-4111 PodName:hostexec-node2-zq8xf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:08:50.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:08:50.963: INFO: Creating a PV followed by a PVC Jun 11 00:08:50.970: INFO: Waiting for PV local-pv244w2 to bind to PVC pvc-js9s6 Jun 11 00:08:50.970: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-js9s6] to have phase Bound Jun 11 00:08:50.973: INFO: PersistentVolumeClaim pvc-js9s6 found but phase is Pending instead of Bound. Jun 11 00:08:52.977: INFO: PersistentVolumeClaim pvc-js9s6 found but phase is Pending instead of Bound. Jun 11 00:08:54.981: INFO: PersistentVolumeClaim pvc-js9s6 found but phase is Pending instead of Bound. 
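The "dir-link-bindmounted" volume type layers a symlink over a bind-mounted directory on the node. The setup command above and the matching teardown later in the log reduce to the following node-side sequence:

# Set up a bind-mounted backing directory exposed through a symlink (as logged), then tear it down.
BASE=/tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80
mkdir "${BASE}-backend"
mount --bind "${BASE}-backend" "${BASE}-backend"
ln -s "${BASE}-backend" "$BASE"
# ... test runs ...
rm "$BASE" && umount "${BASE}-backend" && rm -r "${BASE}-backend"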
Jun 11 00:08:56.984: INFO: PersistentVolumeClaim pvc-js9s6 found and phase=Bound (6.013707898s) Jun 11 00:08:56.984: INFO: Waiting up to 3m0s for PersistentVolume local-pv244w2 to have phase Bound Jun 11 00:08:56.987: INFO: PersistentVolume local-pv244w2 found and phase=Bound (2.607675ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:09:03.014: INFO: pod "pod-593383e7-41d8-4512-b45d-6f8f172f331e" created on Node "node2" STEP: Writing in pod1 Jun 11 00:09:03.014: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4111 PodName:pod-593383e7-41d8-4512-b45d-6f8f172f331e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:09:03.014: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:09:03.097: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 11 00:09:03.097: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4111 PodName:pod-593383e7-41d8-4512-b45d-6f8f172f331e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:09:03.097: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:09:03.176: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 11 00:09:03.176: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4111 PodName:pod-593383e7-41d8-4512-b45d-6f8f172f331e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:09:03.176: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:09:03.251: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-593383e7-41d8-4512-b45d-6f8f172f331e in namespace persistent-local-volumes-test-4111 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:09:03.256: INFO: Deleting PersistentVolumeClaim "pvc-js9s6" Jun 11 00:09:03.260: INFO: Deleting PersistentVolume "local-pv244w2" STEP: Removing the test directory Jun 11 00:09:03.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80 && umount /tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80-backend && rm -r /tmp/local-volume-test-6c1911f1-3317-4bc0-b7fa-a55d0f299e80-backend] Namespace:persistent-local-volumes-test-4111 PodName:hostexec-node2-zq8xf 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:09:03.265: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:09:03.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4111" for this suite. • [SLOW TEST:14.597 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":10,"skipped":333,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:09:03.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Jun 11 00:09:03.465: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Jun 11 00:09:03.470: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:09:03.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-2613" for this suite. 
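The PVC Protection spec that begins above is skipped (the [SKIPPING] summary follows below) because it needs dynamic provisioning and the log reports "No default storage class found". Kubernetes resolves the default class from an annotation on a StorageClass object; a hedged sketch of such an object, where the class name and provisioner are placeholders for whatever provisioner the cluster actually runs:

```go
package main

import (
	"encoding/json"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := storagev1.StorageClass{
		TypeMeta: metav1.TypeMeta{APIVersion: "storage.k8s.io/v1", Kind: "StorageClass"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "standard", // placeholder name
			// The annotation Kubernetes uses to mark the cluster's default class;
			// without it, tests that rely on a default StorageClass skip as above.
			Annotations: map[string]string{
				"storageclass.kubernetes.io/is-default-class": "true",
			},
		},
		Provisioner: "example.csi.vendor.com", // placeholder provisioner
	}

	out, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(out))
}
```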
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:10.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-372 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:08:10.566: INFO: creating *v1.ServiceAccount: csi-mock-volumes-372-8735/csi-attacher Jun 11 00:08:10.569: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-372 Jun 11 00:08:10.569: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-372 Jun 11 00:08:10.572: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-372 Jun 11 00:08:10.574: INFO: creating *v1.Role: csi-mock-volumes-372-8735/external-attacher-cfg-csi-mock-volumes-372 Jun 11 00:08:10.577: INFO: creating *v1.RoleBinding: csi-mock-volumes-372-8735/csi-attacher-role-cfg Jun 11 00:08:10.580: INFO: creating *v1.ServiceAccount: csi-mock-volumes-372-8735/csi-provisioner Jun 11 00:08:10.582: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-372 Jun 11 00:08:10.582: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-372 Jun 11 00:08:10.585: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-372 Jun 11 00:08:10.597: INFO: creating *v1.Role: csi-mock-volumes-372-8735/external-provisioner-cfg-csi-mock-volumes-372 Jun 11 00:08:10.600: INFO: creating *v1.RoleBinding: csi-mock-volumes-372-8735/csi-provisioner-role-cfg Jun 11 00:08:10.602: INFO: creating *v1.ServiceAccount: csi-mock-volumes-372-8735/csi-resizer Jun 11 00:08:10.605: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-372 Jun 11 00:08:10.605: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-372 Jun 11 00:08:10.607: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-372 Jun 11 00:08:10.610: INFO: creating *v1.Role: csi-mock-volumes-372-8735/external-resizer-cfg-csi-mock-volumes-372 Jun 11 00:08:10.612: INFO: creating *v1.RoleBinding: csi-mock-volumes-372-8735/csi-resizer-role-cfg Jun 11 00:08:10.614: INFO: creating *v1.ServiceAccount: csi-mock-volumes-372-8735/csi-snapshotter Jun 11 00:08:10.617: 
INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-372 Jun 11 00:08:10.617: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-372 Jun 11 00:08:10.620: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-372 Jun 11 00:08:10.622: INFO: creating *v1.Role: csi-mock-volumes-372-8735/external-snapshotter-leaderelection-csi-mock-volumes-372 Jun 11 00:08:10.625: INFO: creating *v1.RoleBinding: csi-mock-volumes-372-8735/external-snapshotter-leaderelection Jun 11 00:08:10.628: INFO: creating *v1.ServiceAccount: csi-mock-volumes-372-8735/csi-mock Jun 11 00:08:10.630: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-372 Jun 11 00:08:10.633: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-372 Jun 11 00:08:10.636: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-372 Jun 11 00:08:10.638: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-372 Jun 11 00:08:10.641: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-372 Jun 11 00:08:10.644: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-372 Jun 11 00:08:10.647: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-372 Jun 11 00:08:10.650: INFO: creating *v1.StatefulSet: csi-mock-volumes-372-8735/csi-mockplugin Jun 11 00:08:10.653: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-372 Jun 11 00:08:10.657: INFO: creating *v1.StatefulSet: csi-mock-volumes-372-8735/csi-mockplugin-attacher Jun 11 00:08:10.660: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-372" Jun 11 00:08:10.662: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-372 to register on node node1 STEP: Creating pod Jun 11 00:08:15.675: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:08:15.679: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-wzpjf] to have phase Bound Jun 11 00:08:15.681: INFO: PersistentVolumeClaim pvc-wzpjf found but phase is Pending instead of Bound. 
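The repeated "Waiting up to timeout=5m0s for PersistentVolumeClaims [...] to have phase Bound" entries are a simple poll of the claim's status every couple of seconds. A rough client-go equivalent, assuming the v1.21-era libraries; the namespace and claim name are placeholders, while the kubeconfig path mirrors the one in the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns, claim = "default", "my-pvc" // placeholders

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s, give up after 5m, matching the cadence and timeout logged above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("PersistentVolumeClaim %s is Bound\n", claim)
}
```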
Jun 11 00:08:17.684: INFO: PersistentVolumeClaim pvc-wzpjf found and phase=Bound (2.005302736s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-8sh7r Jun 11 00:08:29.716: INFO: Deleting pod "pvc-volume-tester-8sh7r" in namespace "csi-mock-volumes-372" Jun 11 00:08:29.720: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8sh7r" to be fully deleted STEP: Deleting claim pvc-wzpjf Jun 11 00:08:33.732: INFO: Waiting up to 2m0s for PersistentVolume pvc-5f5e632e-6855-4178-b8cd-56f5df05cafa to get deleted Jun 11 00:08:33.734: INFO: PersistentVolume pvc-5f5e632e-6855-4178-b8cd-56f5df05cafa found and phase=Bound (2.021593ms) Jun 11 00:08:35.739: INFO: PersistentVolume pvc-5f5e632e-6855-4178-b8cd-56f5df05cafa was removed STEP: Deleting storageclass csi-mock-volumes-372-scxrhd8 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-372 STEP: Waiting for namespaces [csi-mock-volumes-372] to vanish STEP: uninstalling csi mock driver Jun 11 00:08:41.751: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-372-8735/csi-attacher Jun 11 00:08:41.755: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-372 Jun 11 00:08:41.758: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-372 Jun 11 00:08:41.762: INFO: deleting *v1.Role: csi-mock-volumes-372-8735/external-attacher-cfg-csi-mock-volumes-372 Jun 11 00:08:41.766: INFO: deleting *v1.RoleBinding: csi-mock-volumes-372-8735/csi-attacher-role-cfg Jun 11 00:08:41.770: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-372-8735/csi-provisioner Jun 11 00:08:41.773: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-372 Jun 11 00:08:41.777: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-372 Jun 11 00:08:41.780: INFO: deleting *v1.Role: csi-mock-volumes-372-8735/external-provisioner-cfg-csi-mock-volumes-372 Jun 11 00:08:41.783: INFO: deleting *v1.RoleBinding: csi-mock-volumes-372-8735/csi-provisioner-role-cfg Jun 11 00:08:41.788: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-372-8735/csi-resizer Jun 11 00:08:41.791: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-372 Jun 11 00:08:41.794: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-372 Jun 11 00:08:41.798: INFO: deleting *v1.Role: csi-mock-volumes-372-8735/external-resizer-cfg-csi-mock-volumes-372 Jun 11 00:08:41.801: INFO: deleting *v1.RoleBinding: csi-mock-volumes-372-8735/csi-resizer-role-cfg Jun 11 00:08:41.804: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-372-8735/csi-snapshotter Jun 11 00:08:41.807: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-372 Jun 11 00:08:41.810: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-372 Jun 11 00:08:41.814: INFO: deleting *v1.Role: csi-mock-volumes-372-8735/external-snapshotter-leaderelection-csi-mock-volumes-372 Jun 11 00:08:41.817: INFO: deleting *v1.RoleBinding: csi-mock-volumes-372-8735/external-snapshotter-leaderelection Jun 11 00:08:41.821: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-372-8735/csi-mock Jun 11 00:08:41.824: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-372 Jun 11 00:08:41.827: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-372 Jun 11 00:08:41.831: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-372 Jun 11 00:08:41.834: 
INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-372 Jun 11 00:08:41.837: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-372 Jun 11 00:08:41.841: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-372 Jun 11 00:08:41.844: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-372 Jun 11 00:08:41.847: INFO: deleting *v1.StatefulSet: csi-mock-volumes-372-8735/csi-mockplugin Jun 11 00:08:41.851: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-372 Jun 11 00:08:41.855: INFO: deleting *v1.StatefulSet: csi-mock-volumes-372-8735/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-372-8735 STEP: Waiting for namespaces [csi-mock-volumes-372-8735] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:09:09.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:59.365 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":12,"skipped":404,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:07:06.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-8853 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:07:06.741: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8853-254/csi-attacher Jun 11 00:07:06.744: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8853 Jun 11 00:07:06.744: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8853 Jun 11 00:07:06.747: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8853 Jun 11 00:07:06.750: INFO: creating *v1.Role: csi-mock-volumes-8853-254/external-attacher-cfg-csi-mock-volumes-8853 Jun 11 00:07:06.753: INFO: creating *v1.RoleBinding: csi-mock-volumes-8853-254/csi-attacher-role-cfg Jun 11 00:07:06.755: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8853-254/csi-provisioner Jun 11 00:07:06.758: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8853 Jun 11 00:07:06.758: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8853 Jun 11 00:07:06.761: INFO: creating *v1.ClusterRoleBinding: 
csi-provisioner-role-csi-mock-volumes-8853 Jun 11 00:07:06.763: INFO: creating *v1.Role: csi-mock-volumes-8853-254/external-provisioner-cfg-csi-mock-volumes-8853 Jun 11 00:07:06.766: INFO: creating *v1.RoleBinding: csi-mock-volumes-8853-254/csi-provisioner-role-cfg Jun 11 00:07:06.769: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8853-254/csi-resizer Jun 11 00:07:06.772: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8853 Jun 11 00:07:06.773: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8853 Jun 11 00:07:06.775: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8853 Jun 11 00:07:06.778: INFO: creating *v1.Role: csi-mock-volumes-8853-254/external-resizer-cfg-csi-mock-volumes-8853 Jun 11 00:07:06.781: INFO: creating *v1.RoleBinding: csi-mock-volumes-8853-254/csi-resizer-role-cfg Jun 11 00:07:06.784: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8853-254/csi-snapshotter Jun 11 00:07:06.787: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8853 Jun 11 00:07:06.787: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8853 Jun 11 00:07:06.790: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8853 Jun 11 00:07:06.793: INFO: creating *v1.Role: csi-mock-volumes-8853-254/external-snapshotter-leaderelection-csi-mock-volumes-8853 Jun 11 00:07:06.795: INFO: creating *v1.RoleBinding: csi-mock-volumes-8853-254/external-snapshotter-leaderelection Jun 11 00:07:06.799: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8853-254/csi-mock Jun 11 00:07:06.801: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8853 Jun 11 00:07:06.804: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8853 Jun 11 00:07:06.807: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8853 Jun 11 00:07:06.810: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8853 Jun 11 00:07:06.813: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8853 Jun 11 00:07:06.815: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8853 Jun 11 00:07:06.818: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8853 Jun 11 00:07:06.822: INFO: creating *v1.StatefulSet: csi-mock-volumes-8853-254/csi-mockplugin Jun 11 00:07:06.826: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8853 Jun 11 00:07:06.830: INFO: creating *v1.StatefulSet: csi-mock-volumes-8853-254/csi-mockplugin-resizer Jun 11 00:07:06.833: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8853" Jun 11 00:07:06.835: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8853 to register on node node1 STEP: Creating pod Jun 11 00:07:16.353: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:07:16.358: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-hnp8l] to have phase Bound Jun 11 00:07:16.360: INFO: PersistentVolumeClaim pvc-hnp8l found but phase is Pending instead of Bound. 
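The expansion spec continues below ("Expanding current pvc", then waiting for the PV and PVC resize to finish). Online expansion is requested by raising spec.resources.requests.storage on the bound claim; the external resizer and kubelet then grow the volume and filesystem while the pod keeps running. A hedged client-go sketch under the v1.21-era API, where namespace, claim name and new size are placeholders:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns, claim = "default", "my-pvc" // placeholders

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, claim, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Ask for a larger size; the StorageClass must set allowVolumeExpansion and the
	// CSI driver must support node expansion, as the mock driver in this spec does.
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse("2Gi")
	if _, err := cs.CoreV1().PersistentVolumeClaims(ns).Update(ctx, pvc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("expansion requested; watch pvc.status.capacity and conditions for completion")
}
```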
Jun 11 00:07:18.364: INFO: PersistentVolumeClaim pvc-hnp8l found and phase=Bound (2.005564067s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-hsgls Jun 11 00:08:48.437: INFO: Deleting pod "pvc-volume-tester-hsgls" in namespace "csi-mock-volumes-8853" Jun 11 00:08:48.444: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hsgls" to be fully deleted STEP: Deleting claim pvc-hnp8l Jun 11 00:08:58.460: INFO: Waiting up to 2m0s for PersistentVolume pvc-b9fa9115-be96-472d-b31d-25ab5f13ca7c to get deleted Jun 11 00:08:58.462: INFO: PersistentVolume pvc-b9fa9115-be96-472d-b31d-25ab5f13ca7c found and phase=Bound (2.018695ms) Jun 11 00:09:00.470: INFO: PersistentVolume pvc-b9fa9115-be96-472d-b31d-25ab5f13ca7c was removed STEP: Deleting storageclass csi-mock-volumes-8853-sc99sps STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8853 STEP: Waiting for namespaces [csi-mock-volumes-8853] to vanish STEP: uninstalling csi mock driver Jun 11 00:09:06.483: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8853-254/csi-attacher Jun 11 00:09:06.488: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8853 Jun 11 00:09:06.492: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8853 Jun 11 00:09:06.496: INFO: deleting *v1.Role: csi-mock-volumes-8853-254/external-attacher-cfg-csi-mock-volumes-8853 Jun 11 00:09:06.500: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8853-254/csi-attacher-role-cfg Jun 11 00:09:06.503: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8853-254/csi-provisioner Jun 11 00:09:06.506: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8853 Jun 11 00:09:06.512: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8853 Jun 11 00:09:06.521: INFO: deleting *v1.Role: csi-mock-volumes-8853-254/external-provisioner-cfg-csi-mock-volumes-8853 Jun 11 00:09:06.529: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8853-254/csi-provisioner-role-cfg Jun 11 00:09:06.536: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8853-254/csi-resizer Jun 11 00:09:06.542: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8853 Jun 11 00:09:06.546: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8853 Jun 11 00:09:06.550: INFO: deleting *v1.Role: csi-mock-volumes-8853-254/external-resizer-cfg-csi-mock-volumes-8853 Jun 11 00:09:06.553: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8853-254/csi-resizer-role-cfg Jun 11 00:09:06.557: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8853-254/csi-snapshotter Jun 11 00:09:06.560: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8853 Jun 11 00:09:06.564: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8853 Jun 11 00:09:06.568: INFO: deleting *v1.Role: csi-mock-volumes-8853-254/external-snapshotter-leaderelection-csi-mock-volumes-8853 Jun 11 00:09:06.571: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8853-254/external-snapshotter-leaderelection Jun 11 00:09:06.574: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8853-254/csi-mock Jun 11 00:09:06.577: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8853 Jun 11 00:09:06.580: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8853 Jun 11 00:09:06.583: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8853 Jun 11 00:09:06.587: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8853 Jun 11 00:09:06.590: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8853 Jun 11 00:09:06.593: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8853 Jun 11 00:09:06.598: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8853 Jun 11 00:09:06.601: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8853-254/csi-mockplugin Jun 11 00:09:06.604: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8853 Jun 11 00:09:06.608: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8853-254/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-8853-254 STEP: Waiting for namespaces [csi-mock-volumes-8853-254] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:09:12.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:125.959 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":10,"skipped":564,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:07:57.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-2863 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 11 00:07:58.007: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2863-4114/csi-attacher Jun 11 00:07:58.009: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2863 Jun 11 00:07:58.009: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2863 Jun 11 00:07:58.012: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2863 Jun 11 00:07:58.014: INFO: creating *v1.Role: csi-mock-volumes-2863-4114/external-attacher-cfg-csi-mock-volumes-2863 Jun 11 00:07:58.017: INFO: creating *v1.RoleBinding: csi-mock-volumes-2863-4114/csi-attacher-role-cfg Jun 11 00:07:58.019: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2863-4114/csi-provisioner Jun 11 00:07:58.022: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2863 Jun 11 00:07:58.022: INFO: Define cluster role 
external-provisioner-runner-csi-mock-volumes-2863 Jun 11 00:07:58.025: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2863 Jun 11 00:07:58.028: INFO: creating *v1.Role: csi-mock-volumes-2863-4114/external-provisioner-cfg-csi-mock-volumes-2863 Jun 11 00:07:58.031: INFO: creating *v1.RoleBinding: csi-mock-volumes-2863-4114/csi-provisioner-role-cfg Jun 11 00:07:58.033: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2863-4114/csi-resizer Jun 11 00:07:58.036: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2863 Jun 11 00:07:58.036: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2863 Jun 11 00:07:58.039: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2863 Jun 11 00:07:58.042: INFO: creating *v1.Role: csi-mock-volumes-2863-4114/external-resizer-cfg-csi-mock-volumes-2863 Jun 11 00:07:58.045: INFO: creating *v1.RoleBinding: csi-mock-volumes-2863-4114/csi-resizer-role-cfg Jun 11 00:07:58.047: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2863-4114/csi-snapshotter Jun 11 00:07:58.050: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2863 Jun 11 00:07:58.050: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2863 Jun 11 00:07:58.052: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2863 Jun 11 00:07:58.056: INFO: creating *v1.Role: csi-mock-volumes-2863-4114/external-snapshotter-leaderelection-csi-mock-volumes-2863 Jun 11 00:07:58.058: INFO: creating *v1.RoleBinding: csi-mock-volumes-2863-4114/external-snapshotter-leaderelection Jun 11 00:07:58.061: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2863-4114/csi-mock Jun 11 00:07:58.064: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2863 Jun 11 00:07:58.066: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2863 Jun 11 00:07:58.069: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2863 Jun 11 00:07:58.071: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2863 Jun 11 00:07:58.074: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2863 Jun 11 00:07:58.076: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2863 Jun 11 00:07:58.079: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2863 Jun 11 00:07:58.082: INFO: creating *v1.StatefulSet: csi-mock-volumes-2863-4114/csi-mockplugin Jun 11 00:07:58.087: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2863 Jun 11 00:07:58.090: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2863" Jun 11 00:07:58.092: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2863 to register on node node2 I0611 00:08:04.166001 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2863","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:08:04.247494 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0611 00:08:04.248941 37 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2863","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:08:04.250658 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0611 00:08:04.252578 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0611 00:08:04.988201 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2863"},"Error":"","FullError":null} STEP: Creating pod Jun 11 00:08:07.607: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:08:07.612: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mvfms] to have phase Bound Jun 11 00:08:07.614: INFO: PersistentVolumeClaim pvc-mvfms found but phase is Pending instead of Bound. I0611 00:08:07.623385 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc"}}},"Error":"","FullError":null} Jun 11 00:08:09.618: INFO: PersistentVolumeClaim pvc-mvfms found and phase=Bound (2.005562004s) Jun 11 00:08:09.634: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mvfms] to have phase Bound Jun 11 00:08:09.636: INFO: PersistentVolumeClaim pvc-mvfms found and phase=Bound (2.02575ms) STEP: Waiting for expected CSI calls I0611 00:08:10.671494 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:08:10.674987 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc","storage.kubernetes.io/csiProvisionerIdentity":"1654906084252-8081-csi-mock-csi-mock-volumes-2863"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I0611 00:08:11.275006 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:08:11.276874 37 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc","storage.kubernetes.io/csiProvisionerIdentity":"1654906084252-8081-csi-mock-csi-mock-volumes-2863"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I0611 00:08:12.282621 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:08:12.284546 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc","storage.kubernetes.io/csiProvisionerIdentity":"1654906084252-8081-csi-mock-csi-mock-volumes-2863"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I0611 00:08:14.299084 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:08:14.301: INFO: >>> kubeConfig: /root/.kube/config I0611 00:08:14.431350 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc","storage.kubernetes.io/csiProvisionerIdentity":"1654906084252-8081-csi-mock-csi-mock-volumes-2863"}},"Response":{},"Error":"","FullError":null} I0611 00:08:14.436577 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:08:14.439: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:08:14.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Waiting for pod to be running Jun 11 00:08:14.691: INFO: >>> kubeConfig: /root/.kube/config I0611 00:08:14.819949 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc/globalmount","target_path":"/var/lib/kubelet/pods/9f1a4868-303c-499a-8d23-7e49bec1bf66/volumes/kubernetes.io~csi/pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc","storage.kubernetes.io/csiProvisionerIdentity":"1654906084252-8081-csi-mock-csi-mock-volumes-2863"}},"Response":{},"Error":"","FullError":null} STEP: Deleting the previously created pod 
Jun 11 00:08:18.646: INFO: Deleting pod "pvc-volume-tester-ll757" in namespace "csi-mock-volumes-2863" Jun 11 00:08:18.650: INFO: Wait up to 5m0s for pod "pvc-volume-tester-ll757" to be fully deleted Jun 11 00:08:22.379: INFO: >>> kubeConfig: /root/.kube/config I0611 00:08:22.462334 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9f1a4868-303c-499a-8d23-7e49bec1bf66/volumes/kubernetes.io~csi/pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc/mount"},"Response":{},"Error":"","FullError":null} I0611 00:08:22.478411 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:08:22.480079 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-ll757 Jun 11 00:08:29.657: INFO: Deleting pod "pvc-volume-tester-ll757" in namespace "csi-mock-volumes-2863" STEP: Deleting claim pvc-mvfms Jun 11 00:08:29.668: INFO: Waiting up to 2m0s for PersistentVolume pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc to get deleted Jun 11 00:08:29.671: INFO: PersistentVolume pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc found and phase=Bound (3.031101ms) I0611 00:08:29.683588 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 11 00:08:31.675: INFO: PersistentVolume pvc-74072457-e245-4b5b-a9c2-6a5a56653bdc was removed STEP: Deleting storageclass csi-mock-volumes-2863-scwj4dp STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2863 STEP: Waiting for namespaces [csi-mock-volumes-2863] to vanish STEP: uninstalling csi mock driver Jun 11 00:08:37.703: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2863-4114/csi-attacher Jun 11 00:08:37.707: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2863 Jun 11 00:08:37.711: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2863 Jun 11 00:08:37.714: INFO: deleting *v1.Role: csi-mock-volumes-2863-4114/external-attacher-cfg-csi-mock-volumes-2863 Jun 11 00:08:37.718: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2863-4114/csi-attacher-role-cfg Jun 11 00:08:37.722: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2863-4114/csi-provisioner Jun 11 00:08:37.726: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2863 Jun 11 00:08:37.731: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2863 Jun 11 00:08:37.734: INFO: deleting *v1.Role: csi-mock-volumes-2863-4114/external-provisioner-cfg-csi-mock-volumes-2863 Jun 11 00:08:37.738: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2863-4114/csi-provisioner-role-cfg Jun 11 00:08:37.741: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2863-4114/csi-resizer Jun 11 00:08:37.744: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2863 Jun 11 00:08:37.747: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2863 Jun 11 00:08:37.751: INFO: deleting *v1.Role: csi-mock-volumes-2863-4114/external-resizer-cfg-csi-mock-volumes-2863 Jun 
11 00:08:37.756: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2863-4114/csi-resizer-role-cfg Jun 11 00:08:37.765: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2863-4114/csi-snapshotter Jun 11 00:08:37.771: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2863 Jun 11 00:08:37.775: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2863 Jun 11 00:08:37.778: INFO: deleting *v1.Role: csi-mock-volumes-2863-4114/external-snapshotter-leaderelection-csi-mock-volumes-2863 Jun 11 00:08:37.782: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2863-4114/external-snapshotter-leaderelection Jun 11 00:08:37.785: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2863-4114/csi-mock Jun 11 00:08:37.789: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2863 Jun 11 00:08:37.792: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2863 Jun 11 00:08:37.797: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2863 Jun 11 00:08:37.801: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2863 Jun 11 00:08:37.804: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2863 Jun 11 00:08:37.808: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2863 Jun 11 00:08:37.812: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2863 Jun 11 00:08:37.816: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2863-4114/csi-mockplugin Jun 11 00:08:37.820: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2863 STEP: deleting the driver namespace: csi-mock-volumes-2863-4114 STEP: Waiting for namespaces [csi-mock-volumes-2863-4114] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:09:21.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:83.909 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error","total":-1,"completed":2,"skipped":96,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:09:21.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 STEP: Creating a pod to test hostPath mode Jun 11 00:09:21.991: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1897" to be "Succeeded or Failed" Jun 11 00:09:21.993: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1.968604ms Jun 11 00:09:23.998: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006833928s Jun 11 00:09:26.001: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009482615s STEP: Saw pod success Jun 11 00:09:26.001: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Jun 11 00:09:26.003: INFO: Trying to get logs from node node2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 11 00:09:26.022: INFO: Waiting for pod pod-host-path-test to disappear Jun 11 00:09:26.023: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:09:26.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1897" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:09:03.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-276 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:09:03.607: INFO: creating *v1.ServiceAccount: csi-mock-volumes-276-260/csi-attacher Jun 11 00:09:03.609: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-276 Jun 11 00:09:03.609: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-276 Jun 11 00:09:03.613: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-276 Jun 11 00:09:03.616: INFO: creating *v1.Role: csi-mock-volumes-276-260/external-attacher-cfg-csi-mock-volumes-276 Jun 11 00:09:03.618: INFO: creating *v1.RoleBinding: csi-mock-volumes-276-260/csi-attacher-role-cfg Jun 11 00:09:03.621: INFO: creating *v1.ServiceAccount: csi-mock-volumes-276-260/csi-provisioner Jun 11 00:09:03.624: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-276 Jun 11 00:09:03.624: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-276 Jun 11 00:09:03.627: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-276 Jun 11 00:09:03.629: INFO: creating *v1.Role: csi-mock-volumes-276-260/external-provisioner-cfg-csi-mock-volumes-276 Jun 11 00:09:03.633: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-276-260/csi-provisioner-role-cfg Jun 11 00:09:03.635: INFO: creating *v1.ServiceAccount: csi-mock-volumes-276-260/csi-resizer Jun 11 00:09:03.639: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-276 Jun 11 00:09:03.639: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-276 Jun 11 00:09:03.642: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-276 Jun 11 00:09:03.645: INFO: creating *v1.Role: csi-mock-volumes-276-260/external-resizer-cfg-csi-mock-volumes-276 Jun 11 00:09:03.648: INFO: creating *v1.RoleBinding: csi-mock-volumes-276-260/csi-resizer-role-cfg Jun 11 00:09:03.651: INFO: creating *v1.ServiceAccount: csi-mock-volumes-276-260/csi-snapshotter Jun 11 00:09:03.654: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-276 Jun 11 00:09:03.654: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-276 Jun 11 00:09:03.656: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-276 Jun 11 00:09:03.659: INFO: creating *v1.Role: csi-mock-volumes-276-260/external-snapshotter-leaderelection-csi-mock-volumes-276 Jun 11 00:09:03.661: INFO: creating *v1.RoleBinding: csi-mock-volumes-276-260/external-snapshotter-leaderelection Jun 11 00:09:03.664: INFO: creating *v1.ServiceAccount: csi-mock-volumes-276-260/csi-mock Jun 11 00:09:03.667: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-276 Jun 11 00:09:03.669: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-276 Jun 11 00:09:03.673: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-276 Jun 11 00:09:03.676: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-276 Jun 11 00:09:03.678: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-276 Jun 11 00:09:03.681: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-276 Jun 11 00:09:03.684: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-276 Jun 11 00:09:03.687: INFO: creating *v1.StatefulSet: csi-mock-volumes-276-260/csi-mockplugin Jun 11 00:09:03.691: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-276 Jun 11 00:09:03.694: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-276" Jun 11 00:09:03.696: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-276 to register on node node2 STEP: Creating pod Jun 11 00:09:13.211: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:09:13.216: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-l287b] to have phase Bound Jun 11 00:09:13.218: INFO: PersistentVolumeClaim pvc-l287b found but phase is Pending instead of Bound. 
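Both attach specs in this log hinge on the "Checking if VolumeAttachment was created for the pod" step that follows below: with attachment required, a VolumeAttachment for the pod's PV must exist; with attachRequired=false on the CSIDriver, none should appear. A small client-go sketch of that check, where the PV name is a placeholder and the kubeconfig path mirrors the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	pvName := "pvc-example-uid" // placeholder: the PV bound to the pod's claim

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// VolumeAttachment objects are cluster-scoped, one per attached PV/node pair.
	vas, err := cs.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	found := false
	for _, va := range vas.Items {
		if va.Spec.Source.PersistentVolumeName != nil && *va.Spec.Source.PersistentVolumeName == pvName {
			found = true
			fmt.Printf("attached by %s on node %s\n", va.Spec.Attacher, va.Spec.NodeName)
		}
	}
	if !found {
		fmt.Println("no VolumeAttachment for", pvName, "- expected when the CSIDriver sets attachRequired: false")
	}
}
```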
Jun 11 00:09:15.226: INFO: PersistentVolumeClaim pvc-l287b found and phase=Bound (2.010479202s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-hmw58 Jun 11 00:09:19.260: INFO: Deleting pod "pvc-volume-tester-hmw58" in namespace "csi-mock-volumes-276" Jun 11 00:09:19.265: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hmw58" to be fully deleted STEP: Deleting claim pvc-l287b Jun 11 00:09:27.277: INFO: Waiting up to 2m0s for PersistentVolume pvc-84df0a89-238f-44ea-950b-6e325a670ab8 to get deleted Jun 11 00:09:27.279: INFO: PersistentVolume pvc-84df0a89-238f-44ea-950b-6e325a670ab8 found and phase=Bound (2.003546ms) Jun 11 00:09:29.287: INFO: PersistentVolume pvc-84df0a89-238f-44ea-950b-6e325a670ab8 was removed STEP: Deleting storageclass csi-mock-volumes-276-sckzksh STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-276 STEP: Waiting for namespaces [csi-mock-volumes-276] to vanish STEP: uninstalling csi mock driver Jun 11 00:09:35.301: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-276-260/csi-attacher Jun 11 00:09:35.306: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-276 Jun 11 00:09:35.309: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-276 Jun 11 00:09:35.313: INFO: deleting *v1.Role: csi-mock-volumes-276-260/external-attacher-cfg-csi-mock-volumes-276 Jun 11 00:09:35.316: INFO: deleting *v1.RoleBinding: csi-mock-volumes-276-260/csi-attacher-role-cfg Jun 11 00:09:35.320: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-276-260/csi-provisioner Jun 11 00:09:35.323: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-276 Jun 11 00:09:35.327: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-276 Jun 11 00:09:35.334: INFO: deleting *v1.Role: csi-mock-volumes-276-260/external-provisioner-cfg-csi-mock-volumes-276 Jun 11 00:09:35.342: INFO: deleting *v1.RoleBinding: csi-mock-volumes-276-260/csi-provisioner-role-cfg Jun 11 00:09:35.350: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-276-260/csi-resizer Jun 11 00:09:35.356: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-276 Jun 11 00:09:35.360: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-276 Jun 11 00:09:35.363: INFO: deleting *v1.Role: csi-mock-volumes-276-260/external-resizer-cfg-csi-mock-volumes-276 Jun 11 00:09:35.366: INFO: deleting *v1.RoleBinding: csi-mock-volumes-276-260/csi-resizer-role-cfg Jun 11 00:09:35.369: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-276-260/csi-snapshotter Jun 11 00:09:35.373: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-276 Jun 11 00:09:35.376: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-276 Jun 11 00:09:35.379: INFO: deleting *v1.Role: csi-mock-volumes-276-260/external-snapshotter-leaderelection-csi-mock-volumes-276 Jun 11 00:09:35.382: INFO: deleting *v1.RoleBinding: csi-mock-volumes-276-260/external-snapshotter-leaderelection Jun 11 00:09:35.386: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-276-260/csi-mock Jun 11 00:09:35.389: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-276 Jun 11 00:09:35.392: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-276 Jun 11 00:09:35.396: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-276 Jun 11 00:09:35.399: INFO: deleting 
*v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-276 Jun 11 00:09:35.402: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-276 Jun 11 00:09:35.406: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-276 Jun 11 00:09:35.410: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-276 Jun 11 00:09:35.413: INFO: deleting *v1.StatefulSet: csi-mock-volumes-276-260/csi-mockplugin Jun 11 00:09:35.417: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-276 STEP: deleting the driver namespace: csi-mock-volumes-276-260 STEP: Waiting for namespaces [csi-mock-volumes-276-260] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:09:41.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:37.888 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":11,"skipped":385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:09:41.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 11 00:09:43.631: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8800 PodName:hostexec-node2-lmbn4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:09:43.631: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:09:43.734: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 11 00:09:43.734: INFO: exec node2: stdout: "0\n" Jun 11 00:09:43.734: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 11 00:09:43.734: INFO: exec node2: exit code: 0 Jun 11 00:09:43.734: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 
STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:09:43.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8800" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.162 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:09:43.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 11 00:09:43.840: INFO: Waiting up to 5m0s for pod "pod-5c2955bf-e379-4d53-9772-7137514fdd8f" in namespace "emptydir-3193" to be "Succeeded or Failed" Jun 11 00:09:43.843: INFO: Pod "pod-5c2955bf-e379-4d53-9772-7137514fdd8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.408536ms Jun 11 00:09:45.847: INFO: Pod "pod-5c2955bf-e379-4d53-9772-7137514fdd8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006725936s Jun 11 00:09:47.852: INFO: Pod "pod-5c2955bf-e379-4d53-9772-7137514fdd8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01117966s STEP: Saw pod success Jun 11 00:09:47.852: INFO: Pod "pod-5c2955bf-e379-4d53-9772-7137514fdd8f" satisfied condition "Succeeded or Failed" Jun 11 00:09:47.854: INFO: Trying to get logs from node node2 pod pod-5c2955bf-e379-4d53-9772-7137514fdd8f container test-container: STEP: delete the pod Jun 11 00:09:47.870: INFO: Waiting for pod pod-5c2955bf-e379-4d53-9772-7137514fdd8f to disappear Jun 11 00:09:47.872: INFO: Pod pod-5c2955bf-e379-4d53-9772-7137514fdd8f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:09:47.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3193" for this suite. 
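The "(root,0644,tmpfs)" EmptyDir case above boils down to a pod with a memory-backed emptyDir, an fsGroup in its security context, and a container that writes a 0644 file and reports its ownership. A hedged sketch of an equivalent pod object; the image, group id and names are placeholders rather than what the suite actually uses:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsGroup := int64(123) // placeholder group id applied to the tmpfs mount

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // placeholder; the suite uses its own test image
				Command: []string{"sh", "-c",
					"umask 0022 && echo content > /test-volume/f && ls -ln /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```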
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":12,"skipped":477,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:04:35.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 STEP: Building a driver namespace object, basename csi-mock-volumes-6474 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:04:35.746: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6474-5154/csi-attacher Jun 11 00:04:35.750: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6474 Jun 11 00:04:35.750: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6474 Jun 11 00:04:35.752: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6474 Jun 11 00:04:35.755: INFO: creating *v1.Role: csi-mock-volumes-6474-5154/external-attacher-cfg-csi-mock-volumes-6474 Jun 11 00:04:35.758: INFO: creating *v1.RoleBinding: csi-mock-volumes-6474-5154/csi-attacher-role-cfg Jun 11 00:04:35.761: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6474-5154/csi-provisioner Jun 11 00:04:35.764: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6474 Jun 11 00:04:35.764: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6474 Jun 11 00:04:35.767: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6474 Jun 11 00:04:35.770: INFO: creating *v1.Role: csi-mock-volumes-6474-5154/external-provisioner-cfg-csi-mock-volumes-6474 Jun 11 00:04:35.773: INFO: creating *v1.RoleBinding: csi-mock-volumes-6474-5154/csi-provisioner-role-cfg Jun 11 00:04:35.775: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6474-5154/csi-resizer Jun 11 00:04:35.778: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6474 Jun 11 00:04:35.778: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6474 Jun 11 00:04:35.781: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6474 Jun 11 00:04:35.783: INFO: creating *v1.Role: csi-mock-volumes-6474-5154/external-resizer-cfg-csi-mock-volumes-6474 Jun 11 00:04:35.785: INFO: creating *v1.RoleBinding: csi-mock-volumes-6474-5154/csi-resizer-role-cfg Jun 11 00:04:35.788: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6474-5154/csi-snapshotter Jun 11 00:04:35.790: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6474 Jun 11 00:04:35.790: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6474 Jun 11 00:04:35.792: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6474 Jun 11 00:04:35.795: INFO: creating *v1.Role: csi-mock-volumes-6474-5154/external-snapshotter-leaderelection-csi-mock-volumes-6474 Jun 11 00:04:35.798: INFO: creating *v1.RoleBinding: csi-mock-volumes-6474-5154/external-snapshotter-leaderelection Jun 11 
00:04:35.801: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6474-5154/csi-mock Jun 11 00:04:35.804: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6474 Jun 11 00:04:35.806: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6474 Jun 11 00:04:35.809: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6474 Jun 11 00:04:35.811: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6474 Jun 11 00:04:35.814: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6474 Jun 11 00:04:35.818: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6474 Jun 11 00:04:35.820: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6474 Jun 11 00:04:35.824: INFO: creating *v1.StatefulSet: csi-mock-volumes-6474-5154/csi-mockplugin Jun 11 00:04:35.828: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6474 to register on node node1 STEP: Creating pod Jun 11 00:04:52.100: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:04:52.104: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-69cxr] to have phase Bound Jun 11 00:04:52.106: INFO: PersistentVolumeClaim pvc-69cxr found but phase is Pending instead of Bound. Jun 11 00:04:54.110: INFO: PersistentVolumeClaim pvc-69cxr found and phase=Bound (2.005817773s) STEP: Checking if attaching failed and pod cannot start STEP: Checking if VolumeAttachment was created for the pod STEP: Deploy CSIDriver object with attachRequired=false Jun 11 00:06:56.139: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6474 STEP: Wait for the pod in running status STEP: Wait for the volumeattachment to be deleted up to 7m0s STEP: Deleting pod pvc-volume-tester-krh6z Jun 11 00:09:02.156: INFO: Deleting pod "pvc-volume-tester-krh6z" in namespace "csi-mock-volumes-6474" Jun 11 00:09:02.163: INFO: Wait up to 5m0s for pod "pvc-volume-tester-krh6z" to be fully deleted STEP: Deleting claim pvc-69cxr Jun 11 00:09:08.177: INFO: Waiting up to 2m0s for PersistentVolume pvc-2bb325cd-4ff6-4fc2-bfd1-9c6b4bd96913 to get deleted Jun 11 00:09:08.179: INFO: PersistentVolume pvc-2bb325cd-4ff6-4fc2-bfd1-9c6b4bd96913 found and phase=Bound (1.806437ms) Jun 11 00:09:10.185: INFO: PersistentVolume pvc-2bb325cd-4ff6-4fc2-bfd1-9c6b4bd96913 was removed STEP: Deleting storageclass csi-mock-volumes-6474-scf874k STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6474 STEP: Waiting for namespaces [csi-mock-volumes-6474] to vanish STEP: uninstalling csi mock driver Jun 11 00:09:16.196: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6474-5154/csi-attacher Jun 11 00:09:16.201: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6474 Jun 11 00:09:16.205: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6474 Jun 11 00:09:16.208: INFO: deleting *v1.Role: csi-mock-volumes-6474-5154/external-attacher-cfg-csi-mock-volumes-6474 Jun 11 00:09:16.212: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6474-5154/csi-attacher-role-cfg Jun 11 00:09:16.215: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6474-5154/csi-provisioner Jun 11 00:09:16.218: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6474 Jun 11 00:09:16.222: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6474 Jun 11 
00:09:16.226: INFO: deleting *v1.Role: csi-mock-volumes-6474-5154/external-provisioner-cfg-csi-mock-volumes-6474 Jun 11 00:09:16.232: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6474-5154/csi-provisioner-role-cfg Jun 11 00:09:16.239: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6474-5154/csi-resizer Jun 11 00:09:16.245: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6474 Jun 11 00:09:16.252: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6474 Jun 11 00:09:16.255: INFO: deleting *v1.Role: csi-mock-volumes-6474-5154/external-resizer-cfg-csi-mock-volumes-6474 Jun 11 00:09:16.259: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6474-5154/csi-resizer-role-cfg Jun 11 00:09:16.263: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6474-5154/csi-snapshotter Jun 11 00:09:16.266: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6474 Jun 11 00:09:16.270: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6474 Jun 11 00:09:16.273: INFO: deleting *v1.Role: csi-mock-volumes-6474-5154/external-snapshotter-leaderelection-csi-mock-volumes-6474 Jun 11 00:09:16.277: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6474-5154/external-snapshotter-leaderelection Jun 11 00:09:16.280: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6474-5154/csi-mock Jun 11 00:09:16.283: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6474 Jun 11 00:09:16.286: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6474 Jun 11 00:09:16.289: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6474 Jun 11 00:09:16.294: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6474 Jun 11 00:09:16.296: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6474 Jun 11 00:09:16.299: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6474 Jun 11 00:09:16.302: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6474 Jun 11 00:09:16.305: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6474-5154/csi-mockplugin STEP: deleting the driver namespace: csi-mock-volumes-6474-5154 STEP: Waiting for namespaces [csi-mock-volumes-6474-5154] to vanish Jun 11 00:10:22.316: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6474 [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:10:22.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:346.645 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI CSIDriver deployment after pod creation using non-attachable mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:372 should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]","total":-1,"completed":3,"skipped":41,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS 
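The spec that just finished leaves its pod stuck ("Checking if attaching failed and pod cannot start") until a CSIDriver object with attachRequired=false is created, after which the pod starts and the VolumeAttachment is cleaned up. A minimal client-go sketch of both sides of that check (assuming an existing clientset; function names are mine, not the suite's helpers):

package e2esketch

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerNonAttachableDriver creates a CSIDriver whose attachRequired field is false,
// which is what allows the pending pod above to start without attachment.
func registerNonAttachableDriver(cs kubernetes.Interface, driverName string) error {
	attach := false
	driver := &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: driverName},
		Spec:       storagev1.CSIDriverSpec{AttachRequired: &attach},
	}
	_, err := cs.StorageV1().CSIDrivers().Create(context.TODO(), driver, metav1.CreateOptions{})
	return err
}

// volumeAttachmentsForDriver counts VolumeAttachments owned by the driver; the spec above
// waits for this count to drop to zero once the driver is non-attachable.
func volumeAttachmentsForDriver(cs kubernetes.Interface, driverName string) (int, error) {
	vas, err := cs.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	n := 0
	for _, va := range vas.Items {
		if va.Spec.Attacher == driverName {
			n++
		}
	}
	return n, nil
}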
------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:05:36.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557 STEP: Creating configMap with name cm-test-opt-create-7e0e9bbf-7ef7-4693-85d2-14a105acc23b STEP: Creating the pod [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:10:36.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2662" for this suite. • [SLOW TEST:300.075 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":6,"skipped":87,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:09:09.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:634 [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 STEP: Create a PVC STEP: Create 50 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:648 STEP: Clean PV local-pvtbtp7 [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:10:53.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7778" for this suite. 
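The "Pods sharing a single local PV" spec above boils down to one PVC mounted read-write by many pods that land on the same node. A sketch of that fan-out (hypothetical pod names, placeholder busybox image and sleep command; the real test uses the suite's own test image):

package e2esketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPodsSharingPVC creates count pods that all mount the same, already bound PVC,
// the shape of the "Create 50 pods to use this PVC" step above.
func createPodsSharingPVC(cs kubernetes.Interface, ns, pvcName string, count int) error {
	for i := 0; i < count; i++ {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("shared-pvc-pod-%d", i)},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:         "busybox",
					Image:        "busybox", // placeholder image
					Command:      []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared", MountPath: "/mnt/shared"}},
				}},
				Volumes: []corev1.Volume{{
					Name: "shared",
					VolumeSource: corev1.VolumeSource{
						PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: pvcName},
					},
				}},
			},
		}
		if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}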
• [SLOW TEST:103.555 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629 all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":-1,"completed":13,"skipped":405,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:02.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421 STEP: Creating secret with name s-test-opt-create-2737f0f2-405d-4eb3-af82-bb2bb6c287db STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:02.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6720" for this suite. • [SLOW TEST:300.062 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":6,"skipped":132,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:06:05.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548 STEP: Creating the pod [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:05.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1671" for this suite. 
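The 300-second non-optional specs above all rely on the same mechanism: a configMap (or projected secret) volume marked non-optional, so the kubelet refuses to start the pod while the referenced object or key is missing, and the test passes once the timeout confirms the pod never ran. A sketch of the non-optional volume source (illustrative names and image, not the test's generated ones):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithRequiredConfigMap builds a pod whose volume references a ConfigMap that may not
// exist; with Optional=false the kubelet keeps the pod in ContainerCreating, which is why
// the specs above wait out their full timeout and then pass on the "never ran" outcome.
func podWithRequiredConfigMap(name, configMapName string) *corev1.Pod {
	optional := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "consumer",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "ls /etc/cm"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
						Optional:             &optional,
					},
				},
			}},
		},
	}
}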
• [SLOW TEST:300.056 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":7,"skipped":152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:05.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:11:05.375: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:05.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8834" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:02.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Jun 11 00:11:03.015: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Jun 11 00:11:03.021: INFO: Waiting up to 30s for PersistentVolume hostpath-8tmfj to have phase Available Jun 11 00:11:03.023: INFO: PersistentVolume hostpath-8tmfj found but phase is Pending instead of Available. 
Jun 11 00:11:04.027: INFO: PersistentVolume hostpath-8tmfj found and phase=Available (1.005735899s) STEP: Checking that PV Protection finalizer is set [It] Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 STEP: Creating a PVC STEP: Waiting for PVC to become Bound Jun 11 00:11:04.035: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hfgj9] to have phase Bound Jun 11 00:11:04.037: INFO: PersistentVolumeClaim pvc-hfgj9 found but phase is Pending instead of Bound. Jun 11 00:11:06.040: INFO: PersistentVolumeClaim pvc-hfgj9 found and phase=Bound (2.005610897s) STEP: Deleting the PV, however, the PV must not be removed from the system as it's bound to a PVC STEP: Checking that the PV status is Terminating STEP: Deleting the PVC that is bound to the PV STEP: Checking that the PV is automatically removed from the system because it's no longer bound to a PVC Jun 11 00:11:06.051: INFO: Waiting up to 3m0s for PersistentVolume hostpath-8tmfj to get deleted Jun 11 00:11:06.053: INFO: PersistentVolume hostpath-8tmfj found and phase=Bound (2.405408ms) Jun 11 00:11:08.057: INFO: PersistentVolume hostpath-8tmfj was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:08.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-1586" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Jun 11 00:11:08.077: INFO: AfterEach: Cleaning up test resources. 
Jun 11 00:11:08.077: INFO: Deleting PersistentVolumeClaim "pvc-hfgj9" Jun 11 00:11:08.082: INFO: Deleting PersistentVolume "hostpath-8tmfj" • [SLOW TEST:5.100 seconds] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:09:47.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 STEP: Building a driver namespace object, basename csi-mock-volumes-4225 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:09:47.952: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4225-8267/csi-attacher Jun 11 00:09:47.957: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4225 Jun 11 00:09:47.957: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4225 Jun 11 00:09:47.961: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4225 Jun 11 00:09:47.963: INFO: creating *v1.Role: csi-mock-volumes-4225-8267/external-attacher-cfg-csi-mock-volumes-4225 Jun 11 00:09:47.966: INFO: creating *v1.RoleBinding: csi-mock-volumes-4225-8267/csi-attacher-role-cfg Jun 11 00:09:47.969: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4225-8267/csi-provisioner Jun 11 00:09:47.972: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4225 Jun 11 00:09:47.972: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4225 Jun 11 00:09:47.974: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4225 Jun 11 00:09:47.977: INFO: creating *v1.Role: csi-mock-volumes-4225-8267/external-provisioner-cfg-csi-mock-volumes-4225 Jun 11 00:09:47.980: INFO: creating *v1.RoleBinding: csi-mock-volumes-4225-8267/csi-provisioner-role-cfg Jun 11 00:09:47.982: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4225-8267/csi-resizer Jun 11 00:09:47.985: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4225 Jun 11 00:09:47.985: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4225 Jun 11 00:09:47.987: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4225 Jun 11 00:09:47.991: INFO: creating *v1.Role: csi-mock-volumes-4225-8267/external-resizer-cfg-csi-mock-volumes-4225 Jun 11 00:09:47.994: INFO: creating *v1.RoleBinding: csi-mock-volumes-4225-8267/csi-resizer-role-cfg Jun 11 00:09:47.996: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4225-8267/csi-snapshotter Jun 11 00:09:47.999: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4225 Jun 11 00:09:47.999: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4225 Jun 11 00:09:48.002: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4225 Jun 11 00:09:48.004: INFO: creating *v1.Role: 
csi-mock-volumes-4225-8267/external-snapshotter-leaderelection-csi-mock-volumes-4225 Jun 11 00:09:48.007: INFO: creating *v1.RoleBinding: csi-mock-volumes-4225-8267/external-snapshotter-leaderelection Jun 11 00:09:48.010: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4225-8267/csi-mock Jun 11 00:09:48.013: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4225 Jun 11 00:09:48.015: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4225 Jun 11 00:09:48.018: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4225 Jun 11 00:09:48.020: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4225 Jun 11 00:09:48.023: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4225 Jun 11 00:09:48.026: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4225 Jun 11 00:09:48.028: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4225 Jun 11 00:09:48.032: INFO: creating *v1.StatefulSet: csi-mock-volumes-4225-8267/csi-mockplugin Jun 11 00:09:48.036: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4225 Jun 11 00:09:48.039: INFO: creating *v1.StatefulSet: csi-mock-volumes-4225-8267/csi-mockplugin-attacher Jun 11 00:09:48.043: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4225" Jun 11 00:09:48.045: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4225 to register on node node1 STEP: Creating pod Jun 11 00:10:34.650: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Jun 11 00:10:34.670: INFO: Deleting pod "pvc-volume-tester-rcgjg" in namespace "csi-mock-volumes-4225" Jun 11 00:10:34.675: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rcgjg" to be fully deleted STEP: Deleting pod pvc-volume-tester-rcgjg Jun 11 00:10:34.677: INFO: Deleting pod "pvc-volume-tester-rcgjg" in namespace "csi-mock-volumes-4225" STEP: Deleting claim pvc-kd5np STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-4225 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4225 STEP: Waiting for namespaces [csi-mock-volumes-4225] to vanish STEP: uninstalling csi mock driver Jun 11 00:10:40.698: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4225-8267/csi-attacher Jun 11 00:10:40.702: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4225 Jun 11 00:10:40.706: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4225 Jun 11 00:10:40.709: INFO: deleting *v1.Role: csi-mock-volumes-4225-8267/external-attacher-cfg-csi-mock-volumes-4225 Jun 11 00:10:40.712: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4225-8267/csi-attacher-role-cfg Jun 11 00:10:40.716: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4225-8267/csi-provisioner Jun 11 00:10:40.720: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4225 Jun 11 00:10:40.723: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4225 Jun 11 00:10:40.727: INFO: deleting *v1.Role: csi-mock-volumes-4225-8267/external-provisioner-cfg-csi-mock-volumes-4225 Jun 11 00:10:40.730: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4225-8267/csi-provisioner-role-cfg Jun 11 00:10:40.733: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4225-8267/csi-resizer Jun 11 00:10:40.736: INFO: deleting 
*v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4225 Jun 11 00:10:40.739: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4225 Jun 11 00:10:40.743: INFO: deleting *v1.Role: csi-mock-volumes-4225-8267/external-resizer-cfg-csi-mock-volumes-4225 Jun 11 00:10:40.746: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4225-8267/csi-resizer-role-cfg Jun 11 00:10:40.750: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4225-8267/csi-snapshotter Jun 11 00:10:40.754: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4225 Jun 11 00:10:40.757: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4225 Jun 11 00:10:40.761: INFO: deleting *v1.Role: csi-mock-volumes-4225-8267/external-snapshotter-leaderelection-csi-mock-volumes-4225 Jun 11 00:10:40.764: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4225-8267/external-snapshotter-leaderelection Jun 11 00:10:40.767: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4225-8267/csi-mock Jun 11 00:10:40.770: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4225 Jun 11 00:10:40.774: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4225 Jun 11 00:10:40.777: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4225 Jun 11 00:10:40.780: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4225 Jun 11 00:10:40.783: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4225 Jun 11 00:10:40.786: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4225 Jun 11 00:10:40.789: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4225 Jun 11 00:10:40.793: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4225-8267/csi-mockplugin Jun 11 00:10:40.798: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4225 Jun 11 00:10:40.801: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4225-8267/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4225-8267 STEP: Waiting for namespaces [csi-mock-volumes-4225-8267] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:08.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:80.933 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256 CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":13,"skipped":480,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":7,"skipped":133,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:08.087: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 STEP: Creating configMap with name configmap-test-volume-a8e58c22-8a2e-4472-abca-53afa15c1845 STEP: Creating a pod to test consume configMaps Jun 11 00:11:08.125: INFO: Waiting up to 5m0s for pod "pod-configmaps-8289efa3-ae47-4afb-b67c-bad70d22723d" in namespace "configmap-6465" to be "Succeeded or Failed" Jun 11 00:11:08.127: INFO: Pod "pod-configmaps-8289efa3-ae47-4afb-b67c-bad70d22723d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.864406ms Jun 11 00:11:10.134: INFO: Pod "pod-configmaps-8289efa3-ae47-4afb-b67c-bad70d22723d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009291885s Jun 11 00:11:12.141: INFO: Pod "pod-configmaps-8289efa3-ae47-4afb-b67c-bad70d22723d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016054779s Jun 11 00:11:14.147: INFO: Pod "pod-configmaps-8289efa3-ae47-4afb-b67c-bad70d22723d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022016231s STEP: Saw pod success Jun 11 00:11:14.147: INFO: Pod "pod-configmaps-8289efa3-ae47-4afb-b67c-bad70d22723d" satisfied condition "Succeeded or Failed" Jun 11 00:11:14.150: INFO: Trying to get logs from node node2 pod pod-configmaps-8289efa3-ae47-4afb-b67c-bad70d22723d container agnhost-container: STEP: delete the pod Jun 11 00:11:14.196: INFO: Waiting for pod pod-configmaps-8289efa3-ae47-4afb-b67c-bad70d22723d to disappear Jun 11 00:11:14.199: INFO: Pod pod-configmaps-8289efa3-ae47-4afb-b67c-bad70d22723d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:14.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6465" for this suite. 
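The configMap-as-non-root spec above combines a configMap volume with a pod-level securityContext so the projected files are readable by the non-root uid through the fsGroup gid. A rough sketch of that pod shape (uid, gid, image and command are placeholders, not the values baked into the e2e test):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootFSGroupPod consumes a ConfigMap volume as a non-root user, with fsGroup
// controlling the group ownership of the projected files, as in the spec above.
func nonRootFSGroupPod(name, configMapName string) *corev1.Pod {
	uid, gid := int64(1000), int64(2000) // illustrative ids
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &gid,
			},
			Containers: []corev1.Container{{
				Name:         "consumer",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "ls -ln /etc/configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				},
			}},
		},
	}
}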
• [SLOW TEST:6.122 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":133,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:10:36.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-bfb0dca3-87a6-481f-aca0-c65bb55dd4dd" Jun 11 00:11:08.398: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-bfb0dca3-87a6-481f-aca0-c65bb55dd4dd" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-bfb0dca3-87a6-481f-aca0-c65bb55dd4dd" "/tmp/local-volume-test-bfb0dca3-87a6-481f-aca0-c65bb55dd4dd"] Namespace:persistent-local-volumes-test-206 PodName:hostexec-node1-f7r49 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:11:08.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:11:13.764: INFO: Creating a PV followed by a PVC Jun 11 00:11:13.774: INFO: Waiting for PV local-pv7sp6b to bind to PVC pvc-l49sc Jun 11 00:11:13.774: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-l49sc] to have phase Bound Jun 11 00:11:13.776: INFO: PersistentVolumeClaim pvc-l49sc found but phase is Pending instead of Bound. 
Jun 11 00:11:15.780: INFO: PersistentVolumeClaim pvc-l49sc found and phase=Bound (2.005675692s) Jun 11 00:11:15.780: INFO: Waiting up to 3m0s for PersistentVolume local-pv7sp6b to have phase Bound Jun 11 00:11:15.783: INFO: PersistentVolume local-pv7sp6b found and phase=Bound (3.456538ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 11 00:11:15.789: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:11:15.790: INFO: Deleting PersistentVolumeClaim "pvc-l49sc" Jun 11 00:11:15.795: INFO: Deleting PersistentVolume "local-pv7sp6b" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-bfb0dca3-87a6-481f-aca0-c65bb55dd4dd" Jun 11 00:11:15.799: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-bfb0dca3-87a6-481f-aca0-c65bb55dd4dd"] Namespace:persistent-local-volumes-test-206 PodName:hostexec-node1-f7r49 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:11:15.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:11:17.741: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bfb0dca3-87a6-481f-aca0-c65bb55dd4dd] Namespace:persistent-local-volumes-test-206 PodName:hostexec-node1-f7r49 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:11:17.741: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:18.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-206" for this suite. 
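The tmpfs local-volume spec above mounts a size-limited tmpfs on the node, then wraps it in a Local PersistentVolume pinned to that node with required node affinity before binding a PVC to it. A sketch of that PV object (capacity, reclaim policy and the hostname label key are illustrative defaults; the suite also sets a storage class, omitted here):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV builds a Local PersistentVolume over a node-local path, pinned to one node
// via required node affinity, the shape used by the local-volume spec above.
func localPV(name, nodeName, path string) *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Mi"), // matches the 10m tmpfs, illustrative
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: path},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
}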
S [SKIPPING] [41.727 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:05.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 11 00:11:05.604: INFO: The status of Pod test-hostpath-type-znv5x is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:07.609: INFO: The status of Pod test-hostpath-type-znv5x is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:09.609: INFO: The status of Pod test-hostpath-type-znv5x is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:19.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-6380" for this suite. 
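The HostPathType directory spec above hinges on the hostPath volume's Type field: DirectoryOrCreate auto-creates 'adir' on the node, while leaving the type unset mounts whatever is at the path without a type check; the stricter types exercised further down are expected to fail. A small sketch of how that field is set (helper name is mine):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds a hostPath volume with an explicit HostPathType, the knob the
// HostPathType specs in this log vary between runs.
func hostPathVolume(name, path string, t corev1.HostPathType) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: path,
				Type: &t,
			},
		},
	}
}

// Types used by the specs in this log:
//   corev1.HostPathDirectoryOrCreate - auto-create the directory if missing
//   corev1.HostPathUnset             - no type check at mount time
//   corev1.HostPathFile, corev1.HostPathCharDev, corev1.HostPathSocket - strict checks
//   that fail when the path is missing or of the wrong kind, as in the later specs.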
• [SLOW TEST:14.108 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset","total":-1,"completed":8,"skipped":281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:08.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 11 00:11:08.870: INFO: The status of Pod test-hostpath-type-vwlzr is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:10.876: INFO: The status of Pod test-hostpath-type-vwlzr is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:12.874: INFO: The status of Pod test-hostpath-type-vwlzr is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:14.878: INFO: The status of Pod test-hostpath-type-vwlzr is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:20.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-3470" for this suite. 
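The failure case above is detected from the pod's events rather than its phase: the kubelet records the hostPath type-check failure and the test waits for that event ("Checking for HostPathType error event"). A sketch of the event lookup (standard event field selectors; the exact message substring the suite matches on is not reproduced here):

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printPodEvents lists the events recorded against a pod, the rough shape of the
// error-event check used by the HostPathType specs above.
func printPodEvents(cs kubernetes.Interface, ns, podName string) error {
	selector := fmt.Sprintf("involvedObject.kind=Pod,involvedObject.name=%s", podName)
	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{FieldSelector: selector})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
	}
	return nil
}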
• [SLOW TEST:12.108 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile","total":-1,"completed":14,"skipped":483,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:18.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 11 00:11:18.161: INFO: The status of Pod test-hostpath-type-tvnbz is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:20.164: INFO: The status of Pod test-hostpath-type-tvnbz is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:22.165: INFO: The status of Pod test-hostpath-type-tvnbz is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:24.165: INFO: The status of Pod test-hostpath-type-tvnbz is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:32.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-8074" for this suite. 
• [SLOW TEST:14.111 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev","total":-1,"completed":7,"skipped":127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:08:55.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-3297 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:08:55.268: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3297-7026/csi-attacher Jun 11 00:08:55.271: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3297 Jun 11 00:08:55.271: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3297 Jun 11 00:08:55.273: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3297 Jun 11 00:08:55.276: INFO: creating *v1.Role: csi-mock-volumes-3297-7026/external-attacher-cfg-csi-mock-volumes-3297 Jun 11 00:08:55.278: INFO: creating *v1.RoleBinding: csi-mock-volumes-3297-7026/csi-attacher-role-cfg Jun 11 00:08:55.282: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3297-7026/csi-provisioner Jun 11 00:08:55.284: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3297 Jun 11 00:08:55.284: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3297 Jun 11 00:08:55.287: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3297 Jun 11 00:08:55.290: INFO: creating *v1.Role: csi-mock-volumes-3297-7026/external-provisioner-cfg-csi-mock-volumes-3297 Jun 11 00:08:55.293: INFO: creating *v1.RoleBinding: csi-mock-volumes-3297-7026/csi-provisioner-role-cfg Jun 11 00:08:55.295: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3297-7026/csi-resizer Jun 11 00:08:55.298: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3297 Jun 11 00:08:55.298: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3297 Jun 11 00:08:55.300: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3297 Jun 11 00:08:55.302: INFO: creating *v1.Role: csi-mock-volumes-3297-7026/external-resizer-cfg-csi-mock-volumes-3297 Jun 11 00:08:55.306: INFO: creating *v1.RoleBinding: csi-mock-volumes-3297-7026/csi-resizer-role-cfg Jun 11 00:08:55.310: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3297-7026/csi-snapshotter Jun 11 00:08:55.313: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3297 Jun 11 00:08:55.313: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-3297 Jun 11 00:08:55.316: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3297 Jun 11 00:08:55.318: INFO: creating *v1.Role: csi-mock-volumes-3297-7026/external-snapshotter-leaderelection-csi-mock-volumes-3297 Jun 11 00:08:55.321: INFO: creating *v1.RoleBinding: csi-mock-volumes-3297-7026/external-snapshotter-leaderelection Jun 11 00:08:55.324: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3297-7026/csi-mock Jun 11 00:08:55.327: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3297 Jun 11 00:08:55.330: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3297 Jun 11 00:08:55.333: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3297 Jun 11 00:08:55.335: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3297 Jun 11 00:08:55.338: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3297 Jun 11 00:08:55.341: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3297 Jun 11 00:08:55.344: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3297 Jun 11 00:08:55.346: INFO: creating *v1.StatefulSet: csi-mock-volumes-3297-7026/csi-mockplugin Jun 11 00:08:55.350: INFO: creating *v1.StatefulSet: csi-mock-volumes-3297-7026/csi-mockplugin-attacher Jun 11 00:08:55.354: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3297 to register on node node2 STEP: Creating pod Jun 11 00:09:04.868: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:09:04.872: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-hxrgk] to have phase Bound Jun 11 00:09:04.874: INFO: PersistentVolumeClaim pvc-hxrgk found but phase is Pending instead of Bound. 
Jun 11 00:09:06.877: INFO: PersistentVolumeClaim pvc-hxrgk found and phase=Bound (2.004498609s) STEP: Expanding current pvc STEP: Deleting pod pvc-volume-tester-hztnp Jun 11 00:11:18.920: INFO: Deleting pod "pvc-volume-tester-hztnp" in namespace "csi-mock-volumes-3297" Jun 11 00:11:18.924: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hztnp" to be fully deleted STEP: Deleting claim pvc-hxrgk Jun 11 00:11:22.938: INFO: Waiting up to 2m0s for PersistentVolume pvc-638200f2-d70b-4ea1-b90a-70d29c85f5b6 to get deleted Jun 11 00:11:22.940: INFO: PersistentVolume pvc-638200f2-d70b-4ea1-b90a-70d29c85f5b6 found and phase=Bound (2.105793ms) Jun 11 00:11:24.944: INFO: PersistentVolume pvc-638200f2-d70b-4ea1-b90a-70d29c85f5b6 was removed STEP: Deleting storageclass csi-mock-volumes-3297-scnbxfv STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3297 STEP: Waiting for namespaces [csi-mock-volumes-3297] to vanish STEP: uninstalling csi mock driver Jun 11 00:11:30.957: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3297-7026/csi-attacher Jun 11 00:11:30.962: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3297 Jun 11 00:11:30.966: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3297 Jun 11 00:11:30.969: INFO: deleting *v1.Role: csi-mock-volumes-3297-7026/external-attacher-cfg-csi-mock-volumes-3297 Jun 11 00:11:30.973: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3297-7026/csi-attacher-role-cfg Jun 11 00:11:30.976: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3297-7026/csi-provisioner Jun 11 00:11:30.979: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3297 Jun 11 00:11:30.985: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3297 Jun 11 00:11:30.989: INFO: deleting *v1.Role: csi-mock-volumes-3297-7026/external-provisioner-cfg-csi-mock-volumes-3297 Jun 11 00:11:30.995: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3297-7026/csi-provisioner-role-cfg Jun 11 00:11:31.003: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3297-7026/csi-resizer Jun 11 00:11:31.009: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3297 Jun 11 00:11:31.014: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3297 Jun 11 00:11:31.017: INFO: deleting *v1.Role: csi-mock-volumes-3297-7026/external-resizer-cfg-csi-mock-volumes-3297 Jun 11 00:11:31.021: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3297-7026/csi-resizer-role-cfg Jun 11 00:11:31.024: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3297-7026/csi-snapshotter Jun 11 00:11:31.028: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3297 Jun 11 00:11:31.030: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3297 Jun 11 00:11:31.034: INFO: deleting *v1.Role: csi-mock-volumes-3297-7026/external-snapshotter-leaderelection-csi-mock-volumes-3297 Jun 11 00:11:31.037: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3297-7026/external-snapshotter-leaderelection Jun 11 00:11:31.041: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3297-7026/csi-mock Jun 11 00:11:31.044: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3297 Jun 11 00:11:31.047: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3297 Jun 11 00:11:31.050: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3297 Jun 11 00:11:31.054: 
INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3297 Jun 11 00:11:31.057: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3297 Jun 11 00:11:31.060: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3297 Jun 11 00:11:31.063: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3297 Jun 11 00:11:31.067: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3297-7026/csi-mockplugin Jun 11 00:11:31.071: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3297-7026/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-3297-7026 STEP: Waiting for namespaces [csi-mock-volumes-3297-7026] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:43.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:167.894 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":8,"skipped":401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:14.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Jun 11 00:11:44.255: INFO: Deleting pod "pv-7084"/"pod-ephm-test-projected-qfc7" Jun 11 00:11:44.255: INFO: Deleting pod "pod-ephm-test-projected-qfc7" in namespace "pv-7084" Jun 11 00:11:44.259: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-qfc7" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:48.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7084" for this suite. 
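The ephemeral-storage spec above is really a deletion test: even though the pod's secret volume never existed and the pod never started, deleting the pod and waiting for it to vanish must still succeed within the timeout. A sketch of that delete-and-wait pattern (poll interval and helper name are mine, not the framework's):

package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deletePodAndWait issues the delete and then polls until the pod is gone, the same
// "Wait up to 5m0s for pod ... to be fully deleted" pattern the spec above relies on.
func deletePodAndWait(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // fully deleted
		}
		return false, err // err == nil means the pod still exists, keep polling
	})
}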
• [SLOW TEST:34.059 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":9,"skipped":135,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:43.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 11 00:11:43.290: INFO: The status of Pod test-hostpath-type-r7z7f is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:45.294: INFO: The status of Pod test-hostpath-type-r7z7f is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:47.295: INFO: The status of Pod test-hostpath-type-r7z7f is Running (Ready = true) STEP: running on node node2 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:49.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-4507" for this suite. 
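The HostPathType check above works the other way around: the path exists on the node but is a socket, while the volume declares type File, so kubelet refuses the mount and the spec only waits for the resulting error event. A minimal sketch, assuming the socket path already exists on the target node (names and paths illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-demo
spec:
  nodeName: node2                      # pin to the node that owns the socket
  containers:
  - name: test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "3600"]
    volumeMounts:
    - name: sock
      mountPath: /mnt/sock
  volumes:
  - name: sock
    hostPath:
      path: /var/run/asocket.sock      # actually a socket on the node
      type: File                       # mismatch: kubelet rejects the mount
EOF

# The pod never becomes Ready; the interesting part is the mount error event:
kubectl describe pod hostpath-type-demo | grep -i hostpath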
• [SLOW TEST:6.116 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile","total":-1,"completed":9,"skipped":465,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:49.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Jun 11 00:11:49.377: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:49.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-1682" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:81 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:49.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:11:49.536: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:49.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9225" for this suite. 
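Both of the provider skips above come from the gate in each spec's BeforeEach, not from anything wrong in the cluster: this run uses the local provider ("not local" in the skip message), so GCE/GKE/AWS-only specs bail out. Assuming the usual e2e.test flags (treat the exact flag names as an assumption for this build), such specs can also be excluded up front instead of showing up as skips:

# Illustrative invocation: run sig-storage specs against a local cluster and
# skip the cloud-provider-only suites explicitly.
./e2e.test \
  --provider=local \
  --kubeconfig=/root/.kube/config \
  -ginkgo.focus='\[sig-storage\]' \
  -ginkgo.skip='Regional PD|Volume metrics'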
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning errors [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:147 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:09:26.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:10:56.207: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b920e76a-67f4-40b3-9a19-81eec44bcf50 && mount --bind /tmp/local-volume-test-b920e76a-67f4-40b3-9a19-81eec44bcf50 /tmp/local-volume-test-b920e76a-67f4-40b3-9a19-81eec44bcf50] Namespace:persistent-local-volumes-test-8437 PodName:hostexec-node1-r9fbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:10:56.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:10:56.309: INFO: Creating a PV followed by a PVC Jun 11 00:10:56.321: INFO: Waiting for PV local-pv2tn8h to bind to PVC pvc-sd9jc Jun 11 00:10:56.321: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-sd9jc] to have phase Bound Jun 11 00:10:56.323: INFO: PersistentVolumeClaim pvc-sd9jc found but phase is Pending instead of Bound. 
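The "dir-bindmounted" volume type being prepared above is nothing more than a directory on node1 bind-mounted onto itself (the framework runs the command through a hostexec pod with nsenter). Done directly on the node, with a matching local PV pinned to that node, it would look roughly like this sketch (paths and names illustrative):

# On node1:
mkdir /mnt/disks/demo-vol
mount --bind /mnt/disks/demo-vol /mnt/disks/demo-vol

# From a machine with kubectl access: a local PV that points at that directory.
# A PVC with the same storageClassName then binds to it, as in the log above.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/demo-vol
  nodeAffinity:                        # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]
EOF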
Jun 11 00:10:58.328: INFO: PersistentVolumeClaim pvc-sd9jc found and phase=Bound (2.006525985s) Jun 11 00:10:58.328: INFO: Waiting up to 3m0s for PersistentVolume local-pv2tn8h to have phase Bound Jun 11 00:10:58.330: INFO: PersistentVolume local-pv2tn8h found and phase=Bound (2.912551ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:11:36.358: INFO: pod "pod-28061047-ae86-49dd-990b-9242c59de383" created on Node "node1" STEP: Writing in pod1 Jun 11 00:11:36.358: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8437 PodName:pod-28061047-ae86-49dd-990b-9242c59de383 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:11:36.358: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:11:36.677: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:11:36.677: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8437 PodName:pod-28061047-ae86-49dd-990b-9242c59de383 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:11:36.677: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:11:36.776: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-28061047-ae86-49dd-990b-9242c59de383 in namespace persistent-local-volumes-test-8437 STEP: Creating pod2 STEP: Creating a pod Jun 11 00:11:52.804: INFO: pod "pod-eeb79ff8-ee8f-468c-8772-0e31cd59f24c" created on Node "node1" STEP: Reading in pod2 Jun 11 00:11:52.804: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8437 PodName:pod-eeb79ff8-ee8f-468c-8772-0e31cd59f24c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:11:52.804: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:11:52.896: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-eeb79ff8-ee8f-468c-8772-0e31cd59f24c in namespace persistent-local-volumes-test-8437 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:11:52.901: INFO: Deleting PersistentVolumeClaim "pvc-sd9jc" Jun 11 00:11:52.905: INFO: Deleting PersistentVolume "local-pv2tn8h" STEP: Removing the test directory Jun 11 00:11:52.909: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-b920e76a-67f4-40b3-9a19-81eec44bcf50 && rm -r /tmp/local-volume-test-b920e76a-67f4-40b3-9a19-81eec44bcf50] Namespace:persistent-local-volumes-test-8437 PodName:hostexec-node1-r9fbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:11:52.909: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:53.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8437" for this suite. • [SLOW TEST:146.900 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":204,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:10:22.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 STEP: Building a driver namespace object, basename csi-mock-volumes-3000 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:10:22.461: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3000-4052/csi-attacher Jun 11 00:10:22.464: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3000 Jun 11 00:10:22.464: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3000 Jun 11 00:10:22.466: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3000 Jun 11 00:10:22.469: INFO: creating *v1.Role: csi-mock-volumes-3000-4052/external-attacher-cfg-csi-mock-volumes-3000 Jun 11 00:10:22.472: INFO: creating *v1.RoleBinding: csi-mock-volumes-3000-4052/csi-attacher-role-cfg Jun 11 00:10:22.475: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3000-4052/csi-provisioner Jun 11 00:10:22.478: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3000 Jun 11 00:10:22.478: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3000 Jun 11 00:10:22.481: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3000 Jun 11 00:10:22.484: INFO: creating *v1.Role: csi-mock-volumes-3000-4052/external-provisioner-cfg-csi-mock-volumes-3000 Jun 11 00:10:22.487: INFO: creating *v1.RoleBinding: csi-mock-volumes-3000-4052/csi-provisioner-role-cfg Jun 11 00:10:22.489: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3000-4052/csi-resizer Jun 11 00:10:22.491: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3000 Jun 11 00:10:22.491: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3000 Jun 11 
00:10:22.493: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3000 Jun 11 00:10:22.495: INFO: creating *v1.Role: csi-mock-volumes-3000-4052/external-resizer-cfg-csi-mock-volumes-3000 Jun 11 00:10:22.498: INFO: creating *v1.RoleBinding: csi-mock-volumes-3000-4052/csi-resizer-role-cfg Jun 11 00:10:22.501: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3000-4052/csi-snapshotter Jun 11 00:10:22.503: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3000 Jun 11 00:10:22.503: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3000 Jun 11 00:10:22.505: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3000 Jun 11 00:10:22.508: INFO: creating *v1.Role: csi-mock-volumes-3000-4052/external-snapshotter-leaderelection-csi-mock-volumes-3000 Jun 11 00:10:22.510: INFO: creating *v1.RoleBinding: csi-mock-volumes-3000-4052/external-snapshotter-leaderelection Jun 11 00:10:22.513: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3000-4052/csi-mock Jun 11 00:10:22.515: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3000 Jun 11 00:10:22.518: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3000 Jun 11 00:10:22.520: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3000 Jun 11 00:10:22.523: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3000 Jun 11 00:10:22.525: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3000 Jun 11 00:10:22.527: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3000 Jun 11 00:10:22.530: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3000 Jun 11 00:10:22.533: INFO: creating *v1.StatefulSet: csi-mock-volumes-3000-4052/csi-mockplugin Jun 11 00:10:22.537: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3000 Jun 11 00:10:22.539: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3000" Jun 11 00:10:22.542: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3000 to register on node node2 STEP: Creating pod with fsGroup Jun 11 00:10:32.558: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:10:32.563: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-z228g] to have phase Bound Jun 11 00:10:32.565: INFO: PersistentVolumeClaim pvc-z228g found but phase is Pending instead of Bound. 
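The claim here is still Pending because the mock driver provisions the volume asynchronously; once it is Bound, the spec starts a pod with an fsGroup and checks the group ownership of the mount. A stripped-down analogue of that pod, with illustrative names and an fsGroup value picked arbitrarily; the ownership check is the same stat -c %g used elsewhere in this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 1234            # kubelet applies this GID to the volume when the driver allows it
  containers:
  - name: volume-tester
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /mnt/test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: fsgroup-demo-pvc   # assumed to exist and be Bound
EOF

kubectl exec fsgroup-demo -- stat -c %g /mnt/test   # expect 1234 when fsGroup was applied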
Jun 11 00:10:34.572: INFO: PersistentVolumeClaim pvc-z228g found and phase=Bound (2.008997735s) Jun 11 00:10:38.596: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-3000] Namespace:csi-mock-volumes-3000 PodName:pvc-volume-tester-gfpwj ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:10:38.596: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:10:38.677: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-3000/csi-mock-volumes-3000'; sync] Namespace:csi-mock-volumes-3000 PodName:pvc-volume-tester-gfpwj ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:10:38.677: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:10:40.284: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-3000/csi-mock-volumes-3000] Namespace:csi-mock-volumes-3000 PodName:pvc-volume-tester-gfpwj ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:10:40.284: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:10:40.365: INFO: pod csi-mock-volumes-3000/pvc-volume-tester-gfpwj exec for cmd ls -l /mnt/test/csi-mock-volumes-3000/csi-mock-volumes-3000, stdout: -rw-r--r-- 1 root 3006 13 Jun 11 00:10 /mnt/test/csi-mock-volumes-3000/csi-mock-volumes-3000, stderr: Jun 11 00:10:40.365: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-3000] Namespace:csi-mock-volumes-3000 PodName:pvc-volume-tester-gfpwj ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:10:40.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-gfpwj Jun 11 00:10:40.440: INFO: Deleting pod "pvc-volume-tester-gfpwj" in namespace "csi-mock-volumes-3000" Jun 11 00:10:40.445: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gfpwj" to be fully deleted STEP: Deleting claim pvc-z228g Jun 11 00:11:18.457: INFO: Waiting up to 2m0s for PersistentVolume pvc-8b242378-ed22-4cd5-8bf0-5d02862b275b to get deleted Jun 11 00:11:18.459: INFO: PersistentVolume pvc-8b242378-ed22-4cd5-8bf0-5d02862b275b found and phase=Bound (2.097535ms) Jun 11 00:11:20.469: INFO: PersistentVolume pvc-8b242378-ed22-4cd5-8bf0-5d02862b275b was removed STEP: Deleting storageclass csi-mock-volumes-3000-sc5t9f5 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3000 STEP: Waiting for namespaces [csi-mock-volumes-3000] to vanish STEP: uninstalling csi mock driver Jun 11 00:11:26.482: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3000-4052/csi-attacher Jun 11 00:11:26.485: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3000 Jun 11 00:11:26.489: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3000 Jun 11 00:11:26.493: INFO: deleting *v1.Role: csi-mock-volumes-3000-4052/external-attacher-cfg-csi-mock-volumes-3000 Jun 11 00:11:26.496: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3000-4052/csi-attacher-role-cfg Jun 11 00:11:26.499: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3000-4052/csi-provisioner Jun 11 00:11:26.502: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3000 Jun 11 00:11:26.505: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3000 Jun 11 00:11:26.510: INFO: deleting *v1.Role: 
csi-mock-volumes-3000-4052/external-provisioner-cfg-csi-mock-volumes-3000 Jun 11 00:11:26.515: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3000-4052/csi-provisioner-role-cfg Jun 11 00:11:26.519: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3000-4052/csi-resizer Jun 11 00:11:26.529: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3000 Jun 11 00:11:26.536: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3000 Jun 11 00:11:26.539: INFO: deleting *v1.Role: csi-mock-volumes-3000-4052/external-resizer-cfg-csi-mock-volumes-3000 Jun 11 00:11:26.543: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3000-4052/csi-resizer-role-cfg Jun 11 00:11:26.547: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3000-4052/csi-snapshotter Jun 11 00:11:26.550: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3000 Jun 11 00:11:26.554: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3000 Jun 11 00:11:26.558: INFO: deleting *v1.Role: csi-mock-volumes-3000-4052/external-snapshotter-leaderelection-csi-mock-volumes-3000 Jun 11 00:11:26.561: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3000-4052/external-snapshotter-leaderelection Jun 11 00:11:26.565: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3000-4052/csi-mock Jun 11 00:11:26.571: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3000 Jun 11 00:11:26.574: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3000 Jun 11 00:11:26.577: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3000 Jun 11 00:11:26.580: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3000 Jun 11 00:11:26.584: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3000 Jun 11 00:11:26.587: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3000 Jun 11 00:11:26.591: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3000 Jun 11 00:11:26.595: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3000-4052/csi-mockplugin Jun 11 00:11:26.599: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3000 STEP: deleting the driver namespace: csi-mock-volumes-3000-4052 STEP: Waiting for namespaces [csi-mock-volumes-3000-4052] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:54.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:92.231 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1558 should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":4,"skipped":69,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:53.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:11:55.104: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-6471b3f5-bd49-4abc-ac47-a0eaa9fd2484-backend && ln -s /tmp/local-volume-test-6471b3f5-bd49-4abc-ac47-a0eaa9fd2484-backend /tmp/local-volume-test-6471b3f5-bd49-4abc-ac47-a0eaa9fd2484] Namespace:persistent-local-volumes-test-4989 PodName:hostexec-node2-sjkkv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:11:55.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:11:55.200: INFO: Creating a PV followed by a PVC Jun 11 00:11:55.207: INFO: Waiting for PV local-pvfr8jm to bind to PVC pvc-gjcgt Jun 11 00:11:55.207: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-gjcgt] to have phase Bound Jun 11 00:11:55.210: INFO: PersistentVolumeClaim pvc-gjcgt found but phase is Pending instead of Bound. Jun 11 00:11:57.216: INFO: PersistentVolumeClaim pvc-gjcgt found and phase=Bound (2.00936152s) Jun 11 00:11:57.216: INFO: Waiting up to 3m0s for PersistentVolume local-pvfr8jm to have phase Bound Jun 11 00:11:57.218: INFO: PersistentVolume local-pvfr8jm found and phase=Bound (2.181331ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 11 00:11:57.223: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:11:57.224: INFO: Deleting PersistentVolumeClaim "pvc-gjcgt" Jun 11 00:11:57.228: INFO: Deleting PersistentVolume "local-pvfr8jm" STEP: Removing the test directory Jun 11 00:11:57.233: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6471b3f5-bd49-4abc-ac47-a0eaa9fd2484 && rm -r /tmp/local-volume-test-6471b3f5-bd49-4abc-ac47-a0eaa9fd2484-backend] Namespace:persistent-local-volumes-test-4989 PodName:hostexec-node2-sjkkv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:11:57.233: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:57.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"persistent-local-volumes-test-4989" for this suite. S [SKIPPING] [4.335 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:19.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Jun 11 00:11:49.877: INFO: Deleting pod "pv-3247"/"pod-ephm-test-projected-gl6m" Jun 11 00:11:49.877: INFO: Deleting pod "pod-ephm-test-projected-gl6m" in namespace "pv-3247" Jun 11 00:11:49.882: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-gl6m" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:57.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3247" for this suite. 
• [SLOW TEST:38.057 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":9,"skipped":367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:09:12.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 STEP: Building a driver namespace object, basename csi-mock-volumes-1218 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:09:12.743: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1218-7554/csi-attacher Jun 11 00:09:12.770: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1218 Jun 11 00:09:12.770: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1218 Jun 11 00:09:12.773: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1218 Jun 11 00:09:12.776: INFO: creating *v1.Role: csi-mock-volumes-1218-7554/external-attacher-cfg-csi-mock-volumes-1218 Jun 11 00:09:12.779: INFO: creating *v1.RoleBinding: csi-mock-volumes-1218-7554/csi-attacher-role-cfg Jun 11 00:09:12.782: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1218-7554/csi-provisioner Jun 11 00:09:12.785: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1218 Jun 11 00:09:12.785: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1218 Jun 11 00:09:12.787: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1218 Jun 11 00:09:12.790: INFO: creating *v1.Role: csi-mock-volumes-1218-7554/external-provisioner-cfg-csi-mock-volumes-1218 Jun 11 00:09:12.793: INFO: creating *v1.RoleBinding: csi-mock-volumes-1218-7554/csi-provisioner-role-cfg Jun 11 00:09:12.796: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1218-7554/csi-resizer Jun 11 00:09:12.799: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1218 Jun 11 00:09:12.799: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1218 Jun 11 00:09:12.802: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1218 Jun 11 00:09:12.805: INFO: creating *v1.Role: csi-mock-volumes-1218-7554/external-resizer-cfg-csi-mock-volumes-1218 Jun 11 00:09:12.808: INFO: creating *v1.RoleBinding: csi-mock-volumes-1218-7554/csi-resizer-role-cfg Jun 11 00:09:12.810: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1218-7554/csi-snapshotter Jun 11 00:09:12.814: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-1218 Jun 11 00:09:12.814: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1218 Jun 11 00:09:12.816: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1218 Jun 11 00:09:12.819: INFO: creating *v1.Role: csi-mock-volumes-1218-7554/external-snapshotter-leaderelection-csi-mock-volumes-1218 Jun 11 00:09:12.821: INFO: creating *v1.RoleBinding: csi-mock-volumes-1218-7554/external-snapshotter-leaderelection Jun 11 00:09:12.824: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1218-7554/csi-mock Jun 11 00:09:12.827: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1218 Jun 11 00:09:12.829: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1218 Jun 11 00:09:12.832: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1218 Jun 11 00:09:12.834: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1218 Jun 11 00:09:12.837: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1218 Jun 11 00:09:12.840: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1218 Jun 11 00:09:12.842: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1218 Jun 11 00:09:12.845: INFO: creating *v1.StatefulSet: csi-mock-volumes-1218-7554/csi-mockplugin Jun 11 00:09:12.849: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1218 Jun 11 00:09:12.852: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1218" Jun 11 00:09:12.854: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1218 to register on node node1 STEP: Creating pod Jun 11 00:09:54.452: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:09:54.460: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-n29rw] to have phase Bound Jun 11 00:09:54.463: INFO: PersistentVolumeClaim pvc-n29rw found but phase is Pending instead of Bound. 
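As above, "Pending instead of Bound" is just one iteration of the framework's poll loop. Outside the framework, an equivalent wait is a few lines of shell (claim name, namespace, and timeout are illustrative; uses bash's SECONDS):

# Poll a PVC until it reports phase Bound, with a 3-minute cap.
deadline=$((SECONDS + 180))
until [ "$(kubectl -n demo-ns get pvc demo-pvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
  [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for Bound"; exit 1; }
  sleep 2
done
echo "demo-pvc is Bound"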
Jun 11 00:09:56.466: INFO: PersistentVolumeClaim pvc-n29rw found and phase=Bound (2.006673079s) Jun 11 00:11:00.491: INFO: Deleting pod "pvc-volume-tester-xrpfr" in namespace "csi-mock-volumes-1218" Jun 11 00:11:00.497: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xrpfr" to be fully deleted STEP: Checking PVC events Jun 11 00:11:41.531: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-n29rw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1218", SelfLink:"", UID:"c98d5b2a-387d-4278-aa5d-3ced7b264552", ResourceVersion:"98999", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502994, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003911728), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003911740)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00383b250), VolumeMode:(*v1.PersistentVolumeMode)(0xc00383b260), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:11:41.531: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-n29rw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1218", SelfLink:"", UID:"c98d5b2a-387d-4278-aa5d-3ced7b264552", ResourceVersion:"99000", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502994, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1218"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0027888b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0027888d0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0027888e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002788900)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0047a43e0), 
VolumeMode:(*v1.PersistentVolumeMode)(0xc0047a43f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:11:41.532: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-n29rw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1218", SelfLink:"", UID:"c98d5b2a-387d-4278-aa5d-3ced7b264552", ResourceVersion:"99006", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502994, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1218"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428300), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428318)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428330), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428348)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c98d5b2a-387d-4278-aa5d-3ced7b264552", StorageClassName:(*string)(0xc00383b3d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00383b3e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:11:41.532: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-n29rw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1218", SelfLink:"", UID:"c98d5b2a-387d-4278-aa5d-3ced7b264552", ResourceVersion:"99007", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502994, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1218"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428378), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428390)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034283a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034283c0)}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c98d5b2a-387d-4278-aa5d-3ced7b264552", StorageClassName:(*string)(0xc00383b410), VolumeMode:(*v1.PersistentVolumeMode)(0xc00383b420), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:11:41.532: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-n29rw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1218", SelfLink:"", UID:"c98d5b2a-387d-4278-aa5d-3ced7b264552", ResourceVersion:"100699", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502994, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc0034283f0), DeletionGracePeriodSeconds:(*int64)(0xc002934f58), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1218"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428408), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428420)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428438), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003428450)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c98d5b2a-387d-4278-aa5d-3ced7b264552", StorageClassName:(*string)(0xc00383b460), VolumeMode:(*v1.PersistentVolumeMode)(0xc00383b470), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:11:41.532: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-n29rw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1218", SelfLink:"", UID:"c98d5b2a-387d-4278-aa5d-3ced7b264552", ResourceVersion:"100700", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790502994, loc:(*time.Location)(0x9e2e180)}}, 
DeletionTimestamp:(*v1.Time)(0xc003428480), DeletionGracePeriodSeconds:(*int64)(0xc002935008), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1218"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003428498), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034284b0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034284c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034284e0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c98d5b2a-387d-4278-aa5d-3ced7b264552", StorageClassName:(*string)(0xc00383b4b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00383b4c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-xrpfr Jun 11 00:11:41.532: INFO: Deleting pod "pvc-volume-tester-xrpfr" in namespace "csi-mock-volumes-1218" STEP: Deleting claim pvc-n29rw STEP: Deleting storageclass csi-mock-volumes-1218-sct6lpv STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1218 STEP: Waiting for namespaces [csi-mock-volumes-1218] to vanish STEP: uninstalling csi mock driver Jun 11 00:11:47.554: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1218-7554/csi-attacher Jun 11 00:11:47.559: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1218 Jun 11 00:11:47.566: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1218 Jun 11 00:11:47.571: INFO: deleting *v1.Role: csi-mock-volumes-1218-7554/external-attacher-cfg-csi-mock-volumes-1218 Jun 11 00:11:47.577: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1218-7554/csi-attacher-role-cfg Jun 11 00:11:47.582: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1218-7554/csi-provisioner Jun 11 00:11:47.585: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1218 Jun 11 00:11:47.589: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1218 Jun 11 00:11:47.592: INFO: deleting *v1.Role: csi-mock-volumes-1218-7554/external-provisioner-cfg-csi-mock-volumes-1218 Jun 11 00:11:47.596: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1218-7554/csi-provisioner-role-cfg Jun 11 00:11:47.600: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1218-7554/csi-resizer Jun 11 00:11:47.604: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1218 Jun 11 00:11:47.608: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1218 Jun 11 00:11:47.611: INFO: deleting 
*v1.Role: csi-mock-volumes-1218-7554/external-resizer-cfg-csi-mock-volumes-1218 Jun 11 00:11:47.614: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1218-7554/csi-resizer-role-cfg Jun 11 00:11:47.618: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1218-7554/csi-snapshotter Jun 11 00:11:47.621: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1218 Jun 11 00:11:47.624: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1218 Jun 11 00:11:47.627: INFO: deleting *v1.Role: csi-mock-volumes-1218-7554/external-snapshotter-leaderelection-csi-mock-volumes-1218 Jun 11 00:11:47.631: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1218-7554/external-snapshotter-leaderelection Jun 11 00:11:47.635: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1218-7554/csi-mock Jun 11 00:11:47.638: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1218 Jun 11 00:11:47.643: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1218 Jun 11 00:11:47.646: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1218 Jun 11 00:11:47.650: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1218 Jun 11 00:11:47.654: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1218 Jun 11 00:11:47.658: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1218 Jun 11 00:11:47.661: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1218 Jun 11 00:11:47.665: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1218-7554/csi-mockplugin Jun 11 00:11:47.669: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1218 STEP: deleting the driver namespace: csi-mock-volumes-1218-7554 STEP: Waiting for namespaces [csi-mock-volumes-1218-7554] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:11:59.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:167.004 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1022 unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":11,"skipped":583,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:49.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 11 00:11:49.588: INFO: The status of Pod test-hostpath-type-6slmj is Pending, waiting for it to be Running (with Ready = true) Jun 11 
00:11:51.593: INFO: The status of Pod test-hostpath-type-6slmj is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:11:53.592: INFO: The status of Pod test-hostpath-type-6slmj is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147 [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:01.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-9606" for this suite. • [SLOW TEST:12.177 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset","total":-1,"completed":10,"skipped":538,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:57.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 STEP: Creating a pod to test hostPath r/w Jun 11 00:11:58.006: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7457" to be "Succeeded or Failed" Jun 11 00:11:58.009: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.936654ms Jun 11 00:12:00.013: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007082923s Jun 11 00:12:02.018: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011346989s Jun 11 00:12:04.023: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016903311s STEP: Saw pod success Jun 11 00:12:04.023: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Jun 11 00:12:04.026: INFO: Trying to get logs from node node2 pod pod-host-path-test container test-container-2: STEP: delete the pod Jun 11 00:12:04.067: INFO: Waiting for pod pod-host-path-test to disappear Jun 11 00:12:04.069: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:04.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7457" for this suite. • [SLOW TEST:6.104 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":10,"skipped":399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:10:53.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:11:45.510: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-35c2875c-a66f-4aab-80ea-d3aa9dc02756-backend && mount --bind /tmp/local-volume-test-35c2875c-a66f-4aab-80ea-d3aa9dc02756-backend /tmp/local-volume-test-35c2875c-a66f-4aab-80ea-d3aa9dc02756-backend && ln -s /tmp/local-volume-test-35c2875c-a66f-4aab-80ea-d3aa9dc02756-backend /tmp/local-volume-test-35c2875c-a66f-4aab-80ea-d3aa9dc02756] Namespace:persistent-local-volumes-test-9727 PodName:hostexec-node1-c9c9z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:11:45.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:11:45.609: INFO: Creating a PV followed by a PVC Jun 11 00:11:45.621: INFO: Waiting for PV local-pvp5r8d to bind to PVC pvc-zl5nh Jun 11 00:11:45.621: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zl5nh] to have phase Bound Jun 11 00:11:45.623: INFO: PersistentVolumeClaim pvc-zl5nh found but phase is Pending instead of Bound. 
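The hostPath r/w conformance case a little further up ("pod-host-path-test") boils down to two containers in one pod sharing the same hostPath mount, one writing and one reading. A stripped-down analogue (names and paths illustrative; the sleep is only a crude ordering between the two containers, not what the suite does):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-rw-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "echo hello > /mnt/shared/out.txt"]
    volumeMounts:
    - name: shared
      mountPath: /mnt/shared
  - name: reader
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "sleep 5; cat /mnt/shared/out.txt"]
    volumeMounts:
    - name: shared
      mountPath: /mnt/shared
  volumes:
  - name: shared
    hostPath:
      path: /tmp/hostpath-rw-demo
      type: DirectoryOrCreate
EOF

kubectl logs hostpath-rw-demo -c reader   # expect: hello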
Jun 11 00:11:47.630: INFO: PersistentVolumeClaim pvc-zl5nh found and phase=Bound (2.009028113s)
Jun 11 00:11:47.630: INFO: Waiting up to 3m0s for PersistentVolume local-pvp5r8d to have phase Bound
Jun 11 00:11:47.632: INFO: PersistentVolume local-pvp5r8d found and phase=Bound (2.212485ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set same fsGroup for two pods simultaneously [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
STEP: Create first pod and check fsGroup is set
STEP: Creating a pod
Jun 11 00:11:59.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9727 exec pod-933e3a58-4742-460b-947f-77e86c37c62f --namespace=persistent-local-volumes-test-9727 -- stat -c %g /mnt/volume1'
Jun 11 00:12:00.170: INFO: stderr: ""
Jun 11 00:12:00.170: INFO: stdout: "1000\n"
Jun 11 00:12:02.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9727 exec pod-933e3a58-4742-460b-947f-77e86c37c62f --namespace=persistent-local-volumes-test-9727 -- stat -c %g /mnt/volume1'
Jun 11 00:12:02.434: INFO: stderr: ""
Jun 11 00:12:02.434: INFO: stdout: "1000\n"
Jun 11 00:12:04.436: FAIL: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-933e3a58-4742-460b-947f-77e86c37c62f
Unexpected error:
    <*errors.errorString | 0xc0013a54a0>: {
        s: "Failed to find \"1234\", last result: \"1000\n\"",
    }
    Failed to find "1234", last result: "1000
    "
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.createPodWithFsGroupTest(0xc00444ae10, 0xc004c98030, 0x4d2, 0x4d2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:808 +0x317
k8s.io/kubernetes/test/e2e/storage.glob..func21.2.6.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:277 +0x8d
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001784d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001784d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001784d80, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [Volume type: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jun 11 00:12:04.438: INFO: Deleting PersistentVolumeClaim "pvc-zl5nh"
Jun 11 00:12:04.442: INFO: Deleting PersistentVolume "local-pvp5r8d"
STEP: Removing the test directory
Jun 11 00:12:04.447: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-35c2875c-a66f-4aab-80ea-d3aa9dc02756 && umount /tmp/local-volume-test-35c2875c-a66f-4aab-80ea-d3aa9dc02756-backend && rm -r /tmp/local-volume-test-35c2875c-a66f-4aab-80ea-d3aa9dc02756-backend] Namespace:persistent-local-volumes-test-9727 PodName:hostexec-node1-c9c9z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 11 00:12:04.447: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage]
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-9727". STEP: Found 11 events. Jun 11 00:12:04.573: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-node1-c9c9z: { } Scheduled: Successfully assigned persistent-local-volumes-test-9727/hostexec-node1-c9c9z to node1 Jun 11 00:12:04.573: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-933e3a58-4742-460b-947f-77e86c37c62f: { } Scheduled: Successfully assigned persistent-local-volumes-test-9727/pod-933e3a58-4742-460b-947f-77e86c37c62f to node1 Jun 11 00:12:04.573: INFO: At 2022-06-11 00:11:14 +0000 UTC - event for hostexec-node1-c9c9z: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 11 00:12:04.573: INFO: At 2022-06-11 00:11:14 +0000 UTC - event for hostexec-node1-c9c9z: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 316.391516ms Jun 11 00:12:04.573: INFO: At 2022-06-11 00:11:17 +0000 UTC - event for hostexec-node1-c9c9z: {kubelet node1} Created: Created container agnhost-container Jun 11 00:12:04.573: INFO: At 2022-06-11 00:11:21 +0000 UTC - event for hostexec-node1-c9c9z: {kubelet node1} Started: Started container agnhost-container Jun 11 00:12:04.573: INFO: At 2022-06-11 00:11:48 +0000 UTC - event for pod-933e3a58-4742-460b-947f-77e86c37c62f: {kubelet node1} AlreadyMountedVolume: The requested fsGroup is 1234, but the volume local-pvp5r8d has GID 1000. The volume may not be shareable. Jun 11 00:12:04.573: INFO: At 2022-06-11 00:11:50 +0000 UTC - event for pod-933e3a58-4742-460b-947f-77e86c37c62f: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" Jun 11 00:12:04.573: INFO: At 2022-06-11 00:11:51 +0000 UTC - event for pod-933e3a58-4742-460b-947f-77e86c37c62f: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 323.301237ms Jun 11 00:12:04.573: INFO: At 2022-06-11 00:11:51 +0000 UTC - event for pod-933e3a58-4742-460b-947f-77e86c37c62f: {kubelet node1} Created: Created container write-pod Jun 11 00:12:04.573: INFO: At 2022-06-11 00:11:52 +0000 UTC - event for pod-933e3a58-4742-460b-947f-77e86c37c62f: {kubelet node1} Started: Started container write-pod Jun 11 00:12:04.576: INFO: POD NODE PHASE GRACE CONDITIONS Jun 11 00:12:04.576: INFO: hostexec-node1-c9c9z node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-11 00:10:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-11 00:11:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-11 00:11:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-11 00:10:53 +0000 UTC }] Jun 11 00:12:04.576: INFO: pod-933e3a58-4742-460b-947f-77e86c37c62f node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-11 00:11:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-11 00:11:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-11 00:11:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-11 00:11:47 +0000 UTC }] Jun 11 00:12:04.576: INFO: Jun 11 00:12:04.581: INFO: Logging node info for node master1 Jun 11 00:12:04.584: INFO: Node Info: &Node{ObjectMeta:{master1 e472448e-87fd-4e8d-bbb7-98d43d3d8a87 101319 0 2022-06-10 19:57:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 
kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:05:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-06-10 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-11 00:12:01 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-11 00:12:01 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-11 00:12:01 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-11 00:12:01 +0000 UTC,LastTransitionTime:2022-06-10 20:00:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3faca96dd267476388422e9ecfe8ffa5,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a8563bde-8faa-4424-940f-741c59dd35bf,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 11 00:12:04.584: INFO: Logging kubelet events for node master1 Jun 11 00:12:04.587: INFO: Logging pods the kubelet thinks is on node master1 Jun 11 00:12:04.614: INFO: kube-flannel-xx9h7 started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 11 00:12:04.614: INFO: Init container install-cni ready: true, restart count 0 Jun 11 00:12:04.614: INFO: Container kube-flannel ready: true, restart count 1 Jun 11 00:12:04.614: INFO: kube-multus-ds-amd64-t5pr7 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.614: INFO: Container kube-multus ready: true, restart count 1 Jun 11 00:12:04.614: INFO: dns-autoscaler-7df78bfcfb-kz7px started at 2022-06-10 20:00:58 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.614: INFO: Container autoscaler ready: true, restart count 1 Jun 11 00:12:04.614: INFO: kube-apiserver-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.614: INFO: Container kube-apiserver ready: true, restart count 0 Jun 11 00:12:04.614: INFO: 
kube-controller-manager-master1 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.614: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 11 00:12:04.614: INFO: kube-scheduler-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.614: INFO: Container kube-scheduler ready: true, restart count 0 Jun 11 00:12:04.614: INFO: kube-proxy-rd4j7 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.614: INFO: Container kube-proxy ready: true, restart count 3 Jun 11 00:12:04.614: INFO: container-registry-65d7c44b96-rsh2n started at 2022-06-10 20:04:56 +0000 UTC (0+2 container statuses recorded) Jun 11 00:12:04.614: INFO: Container docker-registry ready: true, restart count 0 Jun 11 00:12:04.614: INFO: Container nginx ready: true, restart count 0 Jun 11 00:12:04.614: INFO: node-feature-discovery-controller-cff799f9f-74qhv started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.614: INFO: Container nfd-controller ready: true, restart count 0 Jun 11 00:12:04.614: INFO: prometheus-operator-585ccfb458-kkb8f started at 2022-06-10 20:13:26 +0000 UTC (0+2 container statuses recorded) Jun 11 00:12:04.614: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 11 00:12:04.614: INFO: Container prometheus-operator ready: true, restart count 0 Jun 11 00:12:04.614: INFO: node-exporter-vc67r started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 11 00:12:04.614: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 11 00:12:04.614: INFO: Container node-exporter ready: true, restart count 0 Jun 11 00:12:04.720: INFO: Latency metrics for node master1 Jun 11 00:12:04.720: INFO: Logging node info for node master2 Jun 11 00:12:04.723: INFO: Node Info: &Node{ObjectMeta:{master2 66c7af40-c8de-462b-933d-792f10a44a43 101271 0 2022-06-10 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-11 00:12:00 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-11 00:12:00 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-11 00:12:00 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-11 00:12:00 +0000 UTC,LastTransitionTime:2022-06-10 20:00:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:31687d4b1abb46329a442e068ee56c42,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:e234d452-a6d8-4bf0-b98d-a080613c39e9,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 11 00:12:04.723: INFO: Logging kubelet events for node master2 Jun 11 00:12:04.726: 
INFO: Logging pods the kubelet thinks is on node master2 Jun 11 00:12:04.742: INFO: kube-controller-manager-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.743: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 11 00:12:04.743: INFO: kube-scheduler-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.743: INFO: Container kube-scheduler ready: true, restart count 3 Jun 11 00:12:04.743: INFO: kube-proxy-2kbvc started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.743: INFO: Container kube-proxy ready: true, restart count 2 Jun 11 00:12:04.743: INFO: kube-flannel-ftn9l started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 11 00:12:04.743: INFO: Init container install-cni ready: true, restart count 2 Jun 11 00:12:04.743: INFO: Container kube-flannel ready: true, restart count 1 Jun 11 00:12:04.743: INFO: kube-multus-ds-amd64-nrmqq started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.743: INFO: Container kube-multus ready: true, restart count 1 Jun 11 00:12:04.743: INFO: coredns-8474476ff8-hlspd started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.743: INFO: Container coredns ready: true, restart count 1 Jun 11 00:12:04.743: INFO: kube-apiserver-master2 started at 2022-06-10 19:58:44 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.743: INFO: Container kube-apiserver ready: true, restart count 0 Jun 11 00:12:04.743: INFO: node-exporter-6fbrb started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 11 00:12:04.743: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 11 00:12:04.743: INFO: Container node-exporter ready: true, restart count 0 Jun 11 00:12:04.824: INFO: Latency metrics for node master2 Jun 11 00:12:04.824: INFO: Logging node info for node master3 Jun 11 00:12:04.828: INFO: Node Info: &Node{ObjectMeta:{master3 e51505ec-e791-4bbe-aeb1-bd0671fd4464 101083 0 2022-06-10 19:58:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 
2022-06-10 20:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:14 +0000 UTC,LastTransitionTime:2022-06-10 20:03:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-11 00:11:55 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-11 00:11:55 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-11 00:11:55 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-11 00:11:55 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1f373495c4c54f68a37fa0d50cd1da58,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a719d949-f9d1-4ee4-a79b-ab3a929b7d00,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 11 00:12:04.828: INFO: Logging kubelet events for node master3 Jun 11 00:12:04.831: INFO: Logging pods the kubelet thinks is on node master3 Jun 11 00:12:04.846: INFO: kube-proxy-rm9n6 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.846: INFO: Container kube-proxy ready: true, restart count 1 Jun 11 00:12:04.846: INFO: kube-flannel-jpd2j started at 2022-06-10 
20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 11 00:12:04.846: INFO: Init container install-cni ready: true, restart count 2 Jun 11 00:12:04.846: INFO: Container kube-flannel ready: true, restart count 2 Jun 11 00:12:04.846: INFO: kube-scheduler-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.846: INFO: Container kube-scheduler ready: true, restart count 1 Jun 11 00:12:04.846: INFO: kube-multus-ds-amd64-8b4tg started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.846: INFO: Container kube-multus ready: true, restart count 1 Jun 11 00:12:04.846: INFO: coredns-8474476ff8-s8q89 started at 2022-06-10 20:00:56 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.846: INFO: Container coredns ready: true, restart count 1 Jun 11 00:12:04.846: INFO: node-exporter-q4rw6 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 11 00:12:04.846: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 11 00:12:04.846: INFO: Container node-exporter ready: true, restart count 0 Jun 11 00:12:04.846: INFO: kube-apiserver-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.846: INFO: Container kube-apiserver ready: true, restart count 0 Jun 11 00:12:04.846: INFO: kube-controller-manager-master3 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.846: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 11 00:12:04.924: INFO: Latency metrics for node master3 Jun 11 00:12:04.924: INFO: Logging node info for node node1 Jun 11 00:12:04.927: INFO: Node Info: &Node{ObjectMeta:{node1 fa951133-0317-499e-8a0a-fc7a0636a371 101408 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 
feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1973":"csi-mock-csi-mock-volumes-1973","csi-mock-csi-mock-volumes-5230":"csi-mock-csi-mock-volumes-5230"} flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-06-11 00:01:02 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kubelet Update v1 2022-06-11 00:11:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {kube-controller-manager Update v1 2022-06-11 00:12:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}},"f:status":{"f:volumesAttached":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:13 +0000 UTC,LastTransitionTime:2022-06-10 20:03:13 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-11 00:11:56 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-11 00:11:56 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-11 00:11:56 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-11 00:11:56 +0000 UTC,LastTransitionTime:2022-06-10 20:00:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aabc551d0ffe4cb3b41c0db91649a9a2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fea48af7-d08f-4093-b808-340d06faf38b,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b 
k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5230^4,DevicePath:,},},Config:nil,},} Jun 11 00:12:04.928: INFO: Logging kubelet events for node node1 Jun 11 00:12:04.930: INFO: Logging pods the kubelet thinks is on node node1 Jun 11 00:12:04.968: INFO: pod-9dd38be0-2193-43c3-8bb1-251e78713d3e started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-3ca7b04b-fe98-4f68-9b15-cedc1ef5a821 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: csi-mockplugin-0 started at 2022-06-11 00:11:57 +0000 UTC (0+3 container statuses recorded) Jun 11 00:12:04.968: INFO: Container csi-provisioner ready: false, restart count 0 Jun 11 00:12:04.968: INFO: Container driver-registrar ready: false, restart count 0 Jun 11 00:12:04.968: INFO: Container mock ready: false, restart count 0 Jun 11 00:12:04.968: INFO: cmk-qjrhs started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 11 00:12:04.968: INFO: Container nodereport ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container reconcile ready: true, restart count 0 Jun 11 00:12:04.968: INFO: pod-66d83f0a-cdbd-4eb8-878d-e39eb15054ae started at 2022-06-11 00:09:09 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 
00:12:04.968: INFO: pod-f42a6c91-abec-4668-9896-70dfb9592a53 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-c39e8a53-4609-45f9-9bae-285cfab3cac3 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-af92aef4-0889-499f-bd58-105f111937d4 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: hostexec-node1-pmv6l started at 2022-06-11 00:11:21 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container agnhost-container ready: true, restart count 0 Jun 11 00:12:04.968: INFO: pod-3ace84f7-1771-4241-bd39-8ed9745c16f7 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-b377af00-e975-443d-ac86-2129b643c675 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-fa3bd46b-8308-4bfd-b3c6-bebc3d5c0089 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pvc-volume-tester-kfhn4 started at (0+0 container statuses recorded) Jun 11 00:12:04.968: INFO: kube-flannel-x926c started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Init container install-cni ready: true, restart count 2 Jun 11 00:12:04.968: INFO: Container kube-flannel ready: true, restart count 2 Jun 11 00:12:04.968: INFO: pod-97ac6f36-2e43-43a8-925a-46511157c6ec started at 2022-06-11 00:11:42 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: true, restart count 0 Jun 11 00:12:04.968: INFO: pod-7721daa6-0d38-4f02-908b-8b93bd5eacb5 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-52fba314-85a5-4587-8196-adbf3b13284b started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-aca68b0d-9d06-4c9f-b690-2993d81635aa started at 2022-06-11 00:11:54 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-90d5fd89-8e96-4613-8c78-a87b7a0d89b8 started at 2022-06-11 00:09:09 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-d989f824-8d1f-4db5-8f61-e60acd1b7c2f started at 2022-06-11 00:09:09 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-eba8b7a4-1410-4d0a-bf44-77d6ac50fb02 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-4bca585a-e363-46e2-bc52-c45a8621c47b started at 2022-06-11 00:09:10 +0000 UTC (0+1 
container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-56642072-6e31-4881-aadd-a579e6fbed3f started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-a41a2903-65a6-4601-959c-3bd1e7dc78e0 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-ba5f1a41-e91d-4883-9164-aa689bac8e1b started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: csi-mockplugin-attacher-0 started at 2022-06-11 00:11:48 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container csi-attacher ready: true, restart count 0 Jun 11 00:12:04.968: INFO: node-feature-discovery-worker-9xsdt started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container nfd-worker ready: true, restart count 0 Jun 11 00:12:04.968: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 11 00:12:04.968: INFO: node-exporter-tk8f9 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 11 00:12:04.968: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container node-exporter ready: true, restart count 0 Jun 11 00:12:04.968: INFO: pod-a799c81d-17a1-4518-8fcd-04a5dec084ed started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-e1c0f57e-1836-4eb6-931f-7525fb8eb279 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-b2fbaf1b-489e-4857-b224-ba8b6588f421 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-709e92db-fffd-4beb-a8b8-4f9fdfbcceda started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-59048f64-6d25-467f-88ec-13d9ad815482 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-02e149d0-d42f-4ef5-b390-2563f868ac60 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-149b5462-0764-457d-af65-dfeb5003a86a started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: csi-mockplugin-0 started at 2022-06-11 00:11:48 +0000 UTC (0+3 container statuses recorded) Jun 11 00:12:04.968: INFO: Container csi-provisioner ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container driver-registrar ready: true, restart count 0 Jun 11 00:12:04.968: INFO: 
Container mock ready: true, restart count 0 Jun 11 00:12:04.968: INFO: pvc-volume-tester-hr856 started at (0+0 container statuses recorded) Jun 11 00:12:04.968: INFO: nginx-proxy-node1 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container nginx-proxy ready: true, restart count 2 Jun 11 00:12:04.968: INFO: pod-ea629a06-0571-428f-9daa-436f8e1fe2c9 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-4bc2a45c-dfcf-4a12-b7f7-be1e104ef43d started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: csi-mockplugin-attacher-0 started at 2022-06-11 00:11:57 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container csi-attacher ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-35d34441-fe96-4f77-96f4-c2c126bec9af started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: kube-proxy-5bkrr started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container kube-proxy ready: true, restart count 1 Jun 11 00:12:04.968: INFO: kube-multus-ds-amd64-4gckf started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container kube-multus ready: true, restart count 1 Jun 11 00:12:04.968: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn started at 2022-06-10 20:16:40 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container tas-extender ready: true, restart count 0 Jun 11 00:12:04.968: INFO: pod-0880b6ad-246e-4ca3-9a8c-5075dc6ac403 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-d3413e48-7931-4a06-bfb9-a00eb4df4d2c started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-53aa00dd-403e-4112-81f2-f9b767480884 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-9d3d2276-60c2-4621-81d7-e4c276797c3f started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: hostexec-node1-c9c9z started at 2022-06-11 00:10:53 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container agnhost-container ready: true, restart count 0 Jun 11 00:12:04.968: INFO: pod-1cb8a5ae-d011-430d-9f34-ed4eec4dae22 started at 2022-06-11 00:09:09 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-72c1ee9f-9b43-4121-bc5e-6d9e22c7e054 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-58bc9bb3-42d2-4f62-89ba-8af9235f26e9 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod 
ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-5ec9fb7b-f244-4c87-89c6-3d8acbca8129 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-2f037a4e-f7e2-495c-9e7d-c61352c65d2c started at 2022-06-11 00:09:09 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-061184ac-f459-4a8f-a91a-a769d34e5685 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-a82a1bab-8038-47ec-b6d6-f3826f1099b9 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-9c432720-6a24-4565-8305-be601127d5b7 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: cmk-init-discover-node1-hlbt6 started at 2022-06-10 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 11 00:12:04.968: INFO: Container discover ready: false, restart count 0 Jun 11 00:12:04.968: INFO: Container init ready: false, restart count 0 Jun 11 00:12:04.968: INFO: Container install ready: false, restart count 0 Jun 11 00:12:04.968: INFO: prometheus-k8s-0 started at 2022-06-10 20:13:45 +0000 UTC (0+4 container statuses recorded) Jun 11 00:12:04.968: INFO: Container config-reloader ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container grafana ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container prometheus ready: true, restart count 1 Jun 11 00:12:04.968: INFO: pod-77a3a5b5-e180-4781-b8ca-fed995052088 started at 2022-06-11 00:09:09 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-d76e67ba-cc8d-462c-a30d-7217f4adccba started at 2022-06-11 00:09:09 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-04398ff5-1225-4e4d-9745-bd0756bbbb3f started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: csi-mockplugin-0 started at 2022-06-11 00:11:32 +0000 UTC (0+4 container statuses recorded) Jun 11 00:12:04.968: INFO: Container busybox ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container csi-provisioner ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container driver-registrar ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container mock ready: true, restart count 0 Jun 11 00:12:04.968: INFO: pod-573faebe-ca06-4bf2-9808-5860e3596738 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-2c9adac7-fd6c-4869-a723-d8d4c2e9d720 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-46dd89e5-0f15-434b-b9b5-5a234800dc29 started at 2022-06-11 
00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-933e3a58-4742-460b-947f-77e86c37c62f started at 2022-06-11 00:11:47 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: true, restart count 0 Jun 11 00:12:04.968: INFO: pod-a35067f0-288c-4fa0-a09c-bafad5ae5dc1 started at 2022-06-11 00:07:33 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-e01e550e-81de-4eb4-9576-85ef6b9aa9b9 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-0136b947-9ee8-4091-8c0b-6775ec98c8cb started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: pod-2ee7b6c5-dcf2-430b-bca0-ab26f8936b65 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:04.968: INFO: hostexec-node1-8g5ht started at 2022-06-11 00:11:54 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container agnhost-container ready: true, restart count 0 Jun 11 00:12:04.968: INFO: hostexec-node1-bptvs started at 2022-06-11 00:11:59 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container agnhost-container ready: false, restart count 0 Jun 11 00:12:04.968: INFO: cmk-webhook-6c9d5f8578-n9w8j started at 2022-06-10 20:12:30 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container cmk-webhook ready: true, restart count 0 Jun 11 00:12:04.968: INFO: collectd-kpj5z started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 11 00:12:04.968: INFO: Container collectd ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container collectd-exporter ready: true, restart count 0 Jun 11 00:12:04.968: INFO: Container rbac-proxy ready: true, restart count 0 Jun 11 00:12:04.968: INFO: pod-5f9ff9c6-ed7d-469e-a7df-6efa0d3461c5 started at 2022-06-11 00:09:10 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:04.968: INFO: Container write-pod ready: false, restart count 0 Jun 11 00:12:06.591: INFO: Latency metrics for node node1 Jun 11 00:12:06.591: INFO: Logging node info for node node2 Jun 11 00:12:06.594: INFO: Node Info: &Node{ObjectMeta:{node2 e3ba5b73-7a35-4d3f-9138-31db06c90dc3 101391 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true 
feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-06-11 00:01:02 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2022-06-11 00:09:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2022-06-11 00:11:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:16 +0000 UTC,LastTransitionTime:2022-06-10 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-11 00:12:04 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-11 00:12:04 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-11 00:12:04 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-11 00:12:04 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb5fb4a83f9949939cd41b7583e9b343,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:bd9c2046-c9ae-4b83-a147-c07e3487254e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 
k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 11 00:12:06.595: INFO: Logging kubelet events for node node2 Jun 11 00:12:06.597: INFO: Logging pods the kubelet thinks is on node node2 Jun 11 00:12:06.609: INFO: nginx-proxy-node2 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container nginx-proxy ready: true, restart count 2 Jun 11 00:12:06.609: INFO: kube-multus-ds-amd64-nj866 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container kube-multus ready: true, restart count 1 Jun 11 00:12:06.609: INFO: kubernetes-dashboard-785dcbb76d-7pmgn started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 11 00:12:06.609: INFO: cmk-zpstc started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 11 00:12:06.609: INFO: Container nodereport ready: true, restart count 0 Jun 11 00:12:06.609: INFO: Container reconcile ready: true, restart count 0 Jun 11 00:12:06.609: INFO: node-feature-discovery-worker-s9mwk started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container nfd-worker ready: true, restart count 0 Jun 11 00:12:06.609: INFO: test-hostpath-type-5zpqn started at 2022-06-11 00:12:01 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container host-path-testing ready: true, restart count 0 Jun 11 00:12:06.609: INFO: hostexec-node2-cnwt4 started at 2022-06-11 00:12:04 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container agnhost-container ready: true, restart count 0 Jun 11 00:12:06.609: INFO: kube-proxy-4clxz started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container kube-proxy ready: true, restart count 2 Jun 11 00:12:06.609: INFO: kube-flannel-8jl6m started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Init container install-cni ready: true, restart count 2 Jun 11 00:12:06.609: INFO: Container kube-flannel ready: true, restart count 2 Jun 11 00:12:06.609: INFO: cmk-init-discover-node2-jxvbr started at 2022-06-10 20:12:04 +0000 UTC (0+3 container statuses recorded) Jun 11 00:12:06.609: INFO: Container discover ready: false, restart count 0 Jun 11 00:12:06.609: INFO: Container init ready: false, restart count 0 Jun 11 00:12:06.609: INFO: Container install ready: false, restart count 0 Jun 11 00:12:06.609: INFO: test-hostpath-type-6slmj started at 2022-06-11 00:11:49 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container host-path-testing ready: true, restart count 0 Jun 11 00:12:06.609: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 started at 
2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 11 00:12:06.609: INFO: collectd-srmjh started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 11 00:12:06.609: INFO: Container collectd ready: true, restart count 0 Jun 11 00:12:06.609: INFO: Container collectd-exporter ready: true, restart count 0 Jun 11 00:12:06.609: INFO: Container rbac-proxy ready: true, restart count 0 Jun 11 00:12:06.609: INFO: test-hostpath-type-bbddq started at 2022-06-11 00:12:06 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container host-path-testing ready: false, restart count 0 Jun 11 00:12:06.609: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn started at 2022-06-10 20:01:01 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.609: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 11 00:12:06.609: INFO: node-exporter-trpg7 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 11 00:12:06.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 11 00:12:06.609: INFO: Container node-exporter ready: true, restart count 0 Jun 11 00:12:06.609: INFO: test-hostpath-type-r7z7f started at 2022-06-11 00:11:43 +0000 UTC (0+1 container statuses recorded) Jun 11 00:12:06.610: INFO: Container host-path-sh-testing ready: true, restart count 0 Jun 11 00:12:08.104: INFO: Latency metrics for node node2 Jun 11 00:12:08.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9727" for this suite. • Failure [74.651 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Jun 11 00:12:04.436: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-933e3a58-4742-460b-947f-77e86c37c62f Unexpected error: <*errors.errorString | 0xc0013a54a0>: { s: "Failed to find \"1234\", last result: \"1000\n\"", } Failed to find "1234", last result: "1000 " occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:808 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":13,"skipped":415,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:20.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a" Jun 11 00:11:39.019: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a && dd if=/dev/zero of=/tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a/file] Namespace:persistent-local-volumes-test-9936 PodName:hostexec-node1-pmv6l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:11:39.019: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:11:39.242: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9936 PodName:hostexec-node1-pmv6l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:11:39.242: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:11:39.769: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a && chmod o+rwx /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a] Namespace:persistent-local-volumes-test-9936 PodName:hostexec-node1-pmv6l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:11:39.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:11:39.979: INFO: Creating a PV followed by a PVC Jun 11 00:11:39.986: INFO: Waiting for PV local-pvq7t5q to bind to PVC pvc-7cdw2 Jun 11 00:11:39.986: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7cdw2] to have phase Bound Jun 11 00:11:39.988: INFO: PersistentVolumeClaim pvc-7cdw2 found but phase is Pending instead of Bound. 
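The [Volume type: blockfswithformat] fixture above is assembled entirely through the hostexec pod on node1: a file-backed loop device is created, formatted as ext4, and mounted back over the scratch directory before the PV and PVC are created. A minimal shell sketch of that sequence, run directly on the node and using a stand-in directory name (the test generates a random path under /tmp), would be:

    DIR=/tmp/local-volume-test-example                # stand-in; the test uses a random path
    mkdir -p "$DIR"
    dd if=/dev/zero of="$DIR/file" bs=4096 count=5120 # ~20 MiB backing file
    losetup -f "$DIR/file"                            # attach it to the first free loop device
    LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')   # e.g. /dev/loop0
    mkfs -t ext4 "$LOOP_DEV"                          # format the loop device
    mount -t ext4 "$LOOP_DEV" "$DIR"                  # mount it over the scratch directory
    chmod o+rwx "$DIR"                                # let the unprivileged write-pod create files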
Jun 11 00:11:41.993: INFO: PersistentVolumeClaim pvc-7cdw2 found and phase=Bound (2.007068778s) Jun 11 00:11:41.993: INFO: Waiting up to 3m0s for PersistentVolume local-pvq7t5q to have phase Bound Jun 11 00:11:41.995: INFO: PersistentVolume local-pvq7t5q found and phase=Bound (2.455838ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 11 00:11:54.021: INFO: pod "pod-97ac6f36-2e43-43a8-925a-46511157c6ec" created on Node "node1" STEP: Writing in pod1 Jun 11 00:11:54.021: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9936 PodName:pod-97ac6f36-2e43-43a8-925a-46511157c6ec ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:11:54.021: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:11:54.178: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:11:54.178: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9936 PodName:pod-97ac6f36-2e43-43a8-925a-46511157c6ec ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:11:54.178: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:11:54.278: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 11 00:12:08.303: INFO: pod "pod-aca68b0d-9d06-4c9f-b690-2993d81635aa" created on Node "node1" Jun 11 00:12:08.303: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9936 PodName:pod-aca68b0d-9d06-4c9f-b690-2993d81635aa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:08.303: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:08.395: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 11 00:12:08.395: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9936 PodName:pod-aca68b0d-9d06-4c9f-b690-2993d81635aa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:08.395: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:08.481: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 11 00:12:08.481: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9936 PodName:pod-97ac6f36-2e43-43a8-925a-46511157c6ec ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:08.481: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:08.561: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-97ac6f36-2e43-43a8-925a-46511157c6ec in namespace persistent-local-volumes-test-9936 STEP: Deleting pod2 STEP: Deleting pod pod-aca68b0d-9d06-4c9f-b690-2993d81635aa in namespace persistent-local-volumes-test-9936 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:12:08.570: INFO: Deleting PersistentVolumeClaim "pvc-7cdw2" Jun 11 00:12:08.574: INFO: Deleting PersistentVolume "local-pvq7t5q" Jun 11 00:12:08.578: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a] Namespace:persistent-local-volumes-test-9936 PodName:hostexec-node1-pmv6l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:08.578: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:08.677: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9936 PodName:hostexec-node1-pmv6l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:08.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a/file Jun 11 00:12:08.768: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9936 PodName:hostexec-node1-pmv6l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:08.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a Jun 11 00:12:08.867: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-93cc5b12-c0d9-44fb-891f-e18beabe970a] Namespace:persistent-local-volumes-test-9936 PodName:hostexec-node1-pmv6l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:08.867: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:08.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9936" for this suite. 
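The [AfterEach] teardown above reverses that setup: unmount the ext4 filesystem, detach the loop device, and remove the scratch directory with its backing file. Condensed into the same kind of sketch, with the same stand-in path as before:

    DIR=/tmp/local-volume-test-example                # same stand-in path as the setup sketch
    umount "$DIR"
    LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')   # find the loop device again
    losetup -d "$LOOP_DEV"                            # detach it
    rm -r "$DIR"                                      # drop the backing file and scratch directory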
• [SLOW TEST:48.016 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":15,"skipped":495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:59.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 11 00:12:09.766: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1927 PodName:hostexec-node1-bptvs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:09.766: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:09.994: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 11 00:12:09.994: INFO: exec node1: stdout: "0\n" Jun 11 00:12:09.994: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 11 00:12:09.994: INFO: exec node1: exit code: 0 Jun 11 00:12:09.994: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:09.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1927" for this suite. 
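The gce-localssd-scsi-fs spec above is skipped on the strength of a single probe run in its BeforeEach: the suite counts entries under the GCE local-SSD mount path and requires at least one. A sketch of that probe, and of why it reports zero here:

    # The directory follows the GCE naming convention for SCSI local SSDs. On this
    # bare-metal cluster it does not exist, so ls fails on stderr while wc still
    # prints 0 on stdout and the pipeline exits 0 -- exactly what the log records,
    # which the test interprets as "no local SSDs available" and skips.
    ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l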
S [SKIPPING] in Spec Setup (BeforeEach) [10.285 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:01.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 11 00:12:01.906: INFO: The status of Pod test-hostpath-type-5zpqn is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:03.911: INFO: The status of Pod test-hostpath-type-5zpqn is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:05.911: INFO: The status of Pod test-hostpath-type-5zpqn is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 11 00:12:05.913: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-3468 PodName:test-hostpath-type-5zpqn ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:05.913: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277 [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:10.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-3468" for this suite. 
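Note on the character-device case above: the setup pod creates a character special file inside its hostPath mount with mknod, and the spec then mounts the same host path into another pod with HostPathType set to HostPathCharDev, which only succeeds if the path really is a character device. A sketch of the device creation and a quick type check (the major/minor numbers are simply the arbitrary ones the test uses):

    mknod /mnt/test/achardev c 89 1    # create a character special file (major 89, minor 1)
    stat -c '%F' /mnt/test/achardev    # prints "character special file" if the type check would pass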
• [SLOW TEST:8.159 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev","total":-1,"completed":11,"skipped":607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:10.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63 STEP: Creating a pod to test emptydir subpath on tmpfs Jun 11 00:12:10.043: INFO: Waiting up to 5m0s for pod "pod-a7a2ddc1-4b71-411d-835f-d332fff00431" in namespace "emptydir-1848" to be "Succeeded or Failed" Jun 11 00:12:10.045: INFO: Pod "pod-a7a2ddc1-4b71-411d-835f-d332fff00431": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199268ms Jun 11 00:12:12.049: INFO: Pod "pod-a7a2ddc1-4b71-411d-835f-d332fff00431": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005530243s Jun 11 00:12:14.053: INFO: Pod "pod-a7a2ddc1-4b71-411d-835f-d332fff00431": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010417312s Jun 11 00:12:16.057: INFO: Pod "pod-a7a2ddc1-4b71-411d-835f-d332fff00431": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013841503s STEP: Saw pod success Jun 11 00:12:16.057: INFO: Pod "pod-a7a2ddc1-4b71-411d-835f-d332fff00431" satisfied condition "Succeeded or Failed" Jun 11 00:12:16.059: INFO: Trying to get logs from node node2 pod pod-a7a2ddc1-4b71-411d-835f-d332fff00431 container test-container: STEP: delete the pod Jun 11 00:12:16.071: INFO: Waiting for pod pod-a7a2ddc1-4b71-411d-835f-d332fff00431 to disappear Jun 11 00:12:16.073: INFO: Pod pod-a7a2ddc1-4b71-411d-835f-d332fff00431 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:16.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1848" for this suite. 
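Note on the emptyDir case above: the pod mounts a tmpfs-backed emptyDir at a subPath that does not yet exist, with an fsGroup set in the pod's securityContext, and the test container then verifies the mode and group ownership of what the kubelet created. A rough manual equivalent from inside such a container (the mount path and fsGroup value are illustrative, not taken from this run):

    # /test-volume is the hypothetical emptyDir mount, 123 the pod's fsGroup
    stat -c 'mode=%a gid=%g' /test-volume
    # the reported gid should equal the fsGroup; the exact mode is what the test asserts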
• [SLOW TEST:6.073 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":12,"skipped":597,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:08.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 11 00:12:08.159: INFO: The status of Pod test-hostpath-type-x9tq9 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:10.162: INFO: The status of Pod test-hostpath-type-x9tq9 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:12.163: INFO: The status of Pod test-hostpath-type-x9tq9 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:14.163: INFO: The status of Pod test-hostpath-type-x9tq9 is Running (Ready = true) STEP: running on node node2 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:16.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-252" for this suite. 
• [SLOW TEST:8.077 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory","total":-1,"completed":14,"skipped":416,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:09.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 11 00:12:09.151: INFO: The status of Pod test-hostpath-type-vp6q8 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:11.155: INFO: The status of Pod test-hostpath-type-vp6q8 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:13.155: INFO: The status of Pod test-hostpath-type-vp6q8 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:15.156: INFO: The status of Pod test-hostpath-type-vp6q8 is Running (Ready = true) STEP: running on node node2 [It] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:17.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-7331" for this suite. 
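Note on the two socket cases above: both follow the same negative pattern — create a pod whose hostPath volume points at a UNIX socket (or at a path that does not exist) while declaring a conflicting HostPathType, then wait for the kubelet's mount failure to surface as an event rather than the pod starting. A hedged sketch of inspecting that event by hand (the pod name is a placeholder; the event text typically mentions the hostPath type check):

    # look for the hostPath type-check failure reported against the test pod
    kubectl -n host-path-type-socket-252 get events \
      --field-selector involvedObject.name=<pod-name> -o wide | grep -i hostpath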
• [SLOW TEST:8.083 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket","total":-1,"completed":16,"skipped":548,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:04.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-45378538-3ea5-4f9b-b0f8-e4ae91ad1529" Jun 11 00:12:06.232: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-45378538-3ea5-4f9b-b0f8-e4ae91ad1529 && dd if=/dev/zero of=/tmp/local-volume-test-45378538-3ea5-4f9b-b0f8-e4ae91ad1529/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-45378538-3ea5-4f9b-b0f8-e4ae91ad1529/file] Namespace:persistent-local-volumes-test-57 PodName:hostexec-node2-cnwt4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:06.232: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:06.363: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-45378538-3ea5-4f9b-b0f8-e4ae91ad1529/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-57 PodName:hostexec-node2-cnwt4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:06.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:12:06.538: INFO: Creating a PV followed by a PVC Jun 11 00:12:06.545: INFO: Waiting for PV local-pvd4mfj to bind to PVC pvc-ddwhb Jun 11 00:12:06.545: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ddwhb] to have phase Bound Jun 11 00:12:06.547: INFO: PersistentVolumeClaim pvc-ddwhb found but phase is Pending instead of Bound. Jun 11 00:12:08.553: INFO: PersistentVolumeClaim pvc-ddwhb found but phase is Pending instead of Bound. Jun 11 00:12:10.559: INFO: PersistentVolumeClaim pvc-ddwhb found but phase is Pending instead of Bound. 
Jun 11 00:12:12.564: INFO: PersistentVolumeClaim pvc-ddwhb found and phase=Bound (6.018778417s) Jun 11 00:12:12.564: INFO: Waiting up to 3m0s for PersistentVolume local-pvd4mfj to have phase Bound Jun 11 00:12:12.566: INFO: PersistentVolume local-pvd4mfj found and phase=Bound (1.9501ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 11 00:12:16.592: INFO: pod "pod-c2ded367-efd5-444d-90c9-a876bb625bc2" created on Node "node2" STEP: Writing in pod1 Jun 11 00:12:16.592: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-57 PodName:pod-c2ded367-efd5-444d-90c9-a876bb625bc2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:16.592: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:16.675: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000157 seconds, 112.0KB/s", err: Jun 11 00:12:16.675: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-57 PodName:pod-c2ded367-efd5-444d-90c9-a876bb625bc2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:16.675: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:16.756: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 11 00:12:24.780: INFO: pod "pod-aa9a5d5d-2d96-4741-b4c9-637a78c2ad83" created on Node "node2" Jun 11 00:12:24.780: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-57 PodName:pod-aa9a5d5d-2d96-4741-b4c9-637a78c2ad83 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:24.780: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:24.944: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod2 Jun 11 00:12:24.944: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-57 PodName:pod-aa9a5d5d-2d96-4741-b4c9-637a78c2ad83 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:24.944: INFO: >>> kubeConfig: 
/root/.kube/config Jun 11 00:12:25.133: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000038 seconds, 282.7KB/s", err: STEP: Reading in pod1 Jun 11 00:12:25.133: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-57 PodName:pod-c2ded367-efd5-444d-90c9-a876bb625bc2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:25.133: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:25.263: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "/dev/loop0.ontent...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-c2ded367-efd5-444d-90c9-a876bb625bc2 in namespace persistent-local-volumes-test-57 STEP: Deleting pod2 STEP: Deleting pod pod-aa9a5d5d-2d96-4741-b4c9-637a78c2ad83 in namespace persistent-local-volumes-test-57 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:12:25.273: INFO: Deleting PersistentVolumeClaim "pvc-ddwhb" Jun 11 00:12:25.277: INFO: Deleting PersistentVolume "local-pvd4mfj" Jun 11 00:12:25.283: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-45378538-3ea5-4f9b-b0f8-e4ae91ad1529/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-57 PodName:hostexec-node2-cnwt4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:25.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-45378538-3ea5-4f9b-b0f8-e4ae91ad1529/file Jun 11 00:12:25.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-57 PodName:hostexec-node2-cnwt4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:25.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-45378538-3ea5-4f9b-b0f8-e4ae91ad1529 Jun 11 00:12:25.462: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-45378538-3ea5-4f9b-b0f8-e4ae91ad1529] Namespace:persistent-local-volumes-test-57 PodName:hostexec-node2-cnwt4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:25.462: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:25.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-57" for this suite. 
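Note on the [Volume type: block] case above: the local PV is backed by a loop device (a file filled with dd, attached via losetup -f) and is exercised as a raw block device — pod1 dd's a small marker onto the device and both pods read the first bytes back with hexdump, which is why pod1 later sees pod2's "/dev/loop0" string overwriting the start of its own "test-file-content". A minimal reproduction of the raw write/read outside the framework (paths illustrative, run as root):

    # create a 20 MiB backing file and attach it to a free loop device
    dd if=/dev/zero of=/tmp/blockfile bs=4096 count=5120
    DEV=$(losetup -f --show /tmp/blockfile)
    # write a marker through the raw device, then print the first 100 bytes as printable characters
    echo test-file-content | dd of="$DEV" bs=512 count=100
    hexdump -n 100 -e '100 "%_p"' "$DEV" | head -1
    losetup -d "$DEV"    # detach when done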
• [SLOW TEST:21.386 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:25.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mounted-volume-expand STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:61 Jun 11 00:12:25.747: INFO: Only supported for providers [aws gce] (not local) [AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:25.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mounted-volume-expand-1064" for this suite. 
[AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:108 Jun 11 00:12:25.757: INFO: AfterEach: Cleaning up resources for mounted volume resize S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should verify mounted devices can be resized [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122 Only supported for providers [aws gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:62 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:16.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 11 00:12:16.155: INFO: The status of Pod test-hostpath-type-4jwhh is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:18.159: INFO: The status of Pod test-hostpath-type-4jwhh is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:20.164: INFO: The status of Pod test-hostpath-type-4jwhh is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:22.162: INFO: The status of Pod test-hostpath-type-4jwhh is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:24.163: INFO: The status of Pod test-hostpath-type-4jwhh is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 11 00:12:24.165: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-3754 PodName:test-hostpath-type-4jwhh ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:24.165: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:26.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-3754" for this suite. 
• [SLOW TEST:10.164 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket","total":-1,"completed":13,"skipped":610,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:26.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Jun 11 00:12:26.327: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:26.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-8688" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv4 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:78 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:54.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-cae0609b-881b-4fa6-bb60-7619c6c42648" Jun 11 00:12:02.765: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-cae0609b-881b-4fa6-bb60-7619c6c42648" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-cae0609b-881b-4fa6-bb60-7619c6c42648" "/tmp/local-volume-test-cae0609b-881b-4fa6-bb60-7619c6c42648"] 
Namespace:persistent-local-volumes-test-5946 PodName:hostexec-node1-8g5ht ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:02.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:12:02.887: INFO: Creating a PV followed by a PVC Jun 11 00:12:02.894: INFO: Waiting for PV local-pvfns7x to bind to PVC pvc-6zqpm Jun 11 00:12:02.894: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6zqpm] to have phase Bound Jun 11 00:12:02.897: INFO: PersistentVolumeClaim pvc-6zqpm found but phase is Pending instead of Bound. Jun 11 00:12:04.902: INFO: PersistentVolumeClaim pvc-6zqpm found but phase is Pending instead of Bound. Jun 11 00:12:06.906: INFO: PersistentVolumeClaim pvc-6zqpm found but phase is Pending instead of Bound. Jun 11 00:12:08.912: INFO: PersistentVolumeClaim pvc-6zqpm found but phase is Pending instead of Bound. Jun 11 00:12:10.916: INFO: PersistentVolumeClaim pvc-6zqpm found but phase is Pending instead of Bound. Jun 11 00:12:12.919: INFO: PersistentVolumeClaim pvc-6zqpm found and phase=Bound (10.025096358s) Jun 11 00:12:12.919: INFO: Waiting up to 3m0s for PersistentVolume local-pvfns7x to have phase Bound Jun 11 00:12:12.921: INFO: PersistentVolume local-pvfns7x found and phase=Bound (2.038645ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 11 00:12:20.948: INFO: pod "pod-6a61399b-9f2c-477a-ae88-f61f7e712dec" created on Node "node1" STEP: Writing in pod1 Jun 11 00:12:20.948: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5946 PodName:pod-6a61399b-9f2c-477a-ae88-f61f7e712dec ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:20.948: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:21.039: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:12:21.039: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5946 PodName:pod-6a61399b-9f2c-477a-ae88-f61f7e712dec ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:21.039: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:21.124: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 11 00:12:27.149: INFO: pod "pod-044ecae2-4264-493a-973d-47e916ba8e5b" created on Node "node1" Jun 11 00:12:27.149: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5946 PodName:pod-044ecae2-4264-493a-973d-47e916ba8e5b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:27.149: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:27.349: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 11 00:12:27.349: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-cae0609b-881b-4fa6-bb60-7619c6c42648 > 
/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5946 PodName:pod-044ecae2-4264-493a-973d-47e916ba8e5b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:27.349: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:27.427: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-cae0609b-881b-4fa6-bb60-7619c6c42648 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 11 00:12:27.427: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5946 PodName:pod-6a61399b-9f2c-477a-ae88-f61f7e712dec ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:27.427: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:27.521: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-cae0609b-881b-4fa6-bb60-7619c6c42648", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-6a61399b-9f2c-477a-ae88-f61f7e712dec in namespace persistent-local-volumes-test-5946 STEP: Deleting pod2 STEP: Deleting pod pod-044ecae2-4264-493a-973d-47e916ba8e5b in namespace persistent-local-volumes-test-5946 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:12:27.530: INFO: Deleting PersistentVolumeClaim "pvc-6zqpm" Jun 11 00:12:27.534: INFO: Deleting PersistentVolume "local-pvfns7x" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-cae0609b-881b-4fa6-bb60-7619c6c42648" Jun 11 00:12:27.538: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-cae0609b-881b-4fa6-bb60-7619c6c42648"] Namespace:persistent-local-volumes-test-5946 PodName:hostexec-node1-8g5ht ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:27.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:12:27.659: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cae0609b-881b-4fa6-bb60-7619c6c42648] Namespace:persistent-local-volumes-test-5946 PodName:hostexec-node1-8g5ht ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:27.659: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:27.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5946" for this suite. 
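Note on the [Volume type: tmpfs] case above: the local PV is provisioned by mounting a small tmpfs at a temporary directory on the node, and teardown simply unmounts it and removes the directory, discarding the data. Condensed from the commands in the log (directory name illustrative):

    DIR=/tmp/local-volume-test-example        # illustrative mount point
    mkdir -p "$DIR"
    mount -t tmpfs -o size=10m tmpfs "$DIR"   # 10 MiB RAM-backed volume
    # ... PV/PVC created over $DIR, pods write and read /mnt/volume1 ...
    umount "$DIR" && rm -r "$DIR"             # cleanup: contents are lost with the mount

In the log the tmpfs "device" argument is the string tmpfs-"<dir>"; any name works there, since tmpfs ignores its source field.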
• [SLOW TEST:33.081 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":112,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:27.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Jun 11 00:12:27.831: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:27.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3738" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Jun 11 00:12:27.841: INFO: AfterEach: Cleaning up test resources Jun 11 00:12:27.841: INFO: pvc is nil Jun 11 00:12:27.841: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:156 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:17.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 11 00:12:17.271: INFO: The status of Pod test-hostpath-type-xw5c4 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:19.274: INFO: The status of Pod test-hostpath-type-xw5c4 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:21.274: INFO: The status of Pod test-hostpath-type-xw5c4 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:23.275: INFO: The status of Pod test-hostpath-type-xw5c4 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:25.274: INFO: The status of Pod test-hostpath-type-xw5c4 is Running (Ready = true) STEP: running on node node2 [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:35.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-1992" for this suite. 
• [SLOW TEST:18.078 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket","total":-1,"completed":17,"skipped":561,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:35.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should not provision a volume in an unmanaged GCE zone. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:451 Jun 11 00:12:35.340: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:35.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-4503" for this suite. S [SKIPPING] [0.031 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should not provision a volume in an unmanaged GCE zone. 
[It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:451 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:452 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:10.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:12:18.144: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b59ff962-40fc-48dd-a512-b269cd40d9b2-backend && ln -s /tmp/local-volume-test-b59ff962-40fc-48dd-a512-b269cd40d9b2-backend /tmp/local-volume-test-b59ff962-40fc-48dd-a512-b269cd40d9b2] Namespace:persistent-local-volumes-test-4978 PodName:hostexec-node1-s46xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:18.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:12:18.235: INFO: Creating a PV followed by a PVC Jun 11 00:12:18.241: INFO: Waiting for PV local-pvppmlw to bind to PVC pvc-x75nj Jun 11 00:12:18.241: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-x75nj] to have phase Bound Jun 11 00:12:18.243: INFO: PersistentVolumeClaim pvc-x75nj found but phase is Pending instead of Bound. Jun 11 00:12:20.247: INFO: PersistentVolumeClaim pvc-x75nj found but phase is Pending instead of Bound. Jun 11 00:12:22.251: INFO: PersistentVolumeClaim pvc-x75nj found but phase is Pending instead of Bound. Jun 11 00:12:24.255: INFO: PersistentVolumeClaim pvc-x75nj found but phase is Pending instead of Bound. Jun 11 00:12:26.258: INFO: PersistentVolumeClaim pvc-x75nj found but phase is Pending instead of Bound. 
Jun 11 00:12:28.262: INFO: PersistentVolumeClaim pvc-x75nj found and phase=Bound (10.020540381s) Jun 11 00:12:28.262: INFO: Waiting up to 3m0s for PersistentVolume local-pvppmlw to have phase Bound Jun 11 00:12:28.264: INFO: PersistentVolume local-pvppmlw found and phase=Bound (2.076888ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:12:36.289: INFO: pod "pod-3869425b-f722-4398-930f-9b24dc66251f" created on Node "node1" STEP: Writing in pod1 Jun 11 00:12:36.289: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4978 PodName:pod-3869425b-f722-4398-930f-9b24dc66251f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:36.289: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:36.377: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 11 00:12:36.378: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4978 PodName:pod-3869425b-f722-4398-930f-9b24dc66251f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:36.378: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:36.458: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-3869425b-f722-4398-930f-9b24dc66251f in namespace persistent-local-volumes-test-4978 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:12:36.463: INFO: Deleting PersistentVolumeClaim "pvc-x75nj" Jun 11 00:12:36.466: INFO: Deleting PersistentVolume "local-pvppmlw" STEP: Removing the test directory Jun 11 00:12:36.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b59ff962-40fc-48dd-a512-b269cd40d9b2 && rm -r /tmp/local-volume-test-b59ff962-40fc-48dd-a512-b269cd40d9b2-backend] Namespace:persistent-local-volumes-test-4978 PodName:hostexec-node1-s46xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:36.470: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:36.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4978" for this suite. 
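Note on the [Volume type: dir-link] case above: the local PV's path is a symlink — a backend directory is created on the node and a symlink to it is what the PV's local.path points at; cleanup removes the link and then the backend. Condensed sketch (names illustrative):

    mkdir /tmp/local-vol-backend                     # real directory holding the data
    ln -s /tmp/local-vol-backend /tmp/local-vol      # the PV's path points at this symlink
    # ... pods mount and read/write through /tmp/local-vol ...
    rm -r /tmp/local-vol /tmp/local-vol-backend      # remove the link, then the backend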
• [SLOW TEST:26.491 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":638,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:27.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 STEP: Creating a pod to test downward API volume plugin Jun 11 00:12:27.902: INFO: Waiting up to 5m0s for pod "metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a" in namespace "projected-1570" to be "Succeeded or Failed" Jun 11 00:12:27.905: INFO: Pod "metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258654ms Jun 11 00:12:29.909: INFO: Pod "metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006603839s Jun 11 00:12:31.912: INFO: Pod "metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009300984s Jun 11 00:12:33.916: INFO: Pod "metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013195014s Jun 11 00:12:35.921: INFO: Pod "metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018970965s Jun 11 00:12:37.924: INFO: Pod "metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.021940721s STEP: Saw pod success Jun 11 00:12:37.924: INFO: Pod "metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a" satisfied condition "Succeeded or Failed" Jun 11 00:12:37.927: INFO: Trying to get logs from node node2 pod metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a container client-container: STEP: delete the pod Jun 11 00:12:38.128: INFO: Waiting for pod metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a to disappear Jun 11 00:12:38.130: INFO: Pod metadata-volume-cb911268-063a-4ac3-ba6a-9b2d2b64647a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:38.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1570" for this suite. • [SLOW TEST:10.268 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":128,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:16.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:12:22.314: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-c19c181f-8c75-4609-a348-a54b2f23c72f && mount --bind /tmp/local-volume-test-c19c181f-8c75-4609-a348-a54b2f23c72f /tmp/local-volume-test-c19c181f-8c75-4609-a348-a54b2f23c72f] Namespace:persistent-local-volumes-test-5025 PodName:hostexec-node2-fgz88 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:22.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:12:22.411: INFO: Creating a PV followed by a PVC Jun 11 00:12:22.418: INFO: Waiting for PV local-pvt5lxr to bind to PVC pvc-wwh49 Jun 11 00:12:22.418: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-wwh49] to have phase Bound Jun 11 00:12:22.422: INFO: PersistentVolumeClaim pvc-wwh49 found but phase is Pending instead of Bound. 
Jun 11 00:12:24.426: INFO: PersistentVolumeClaim pvc-wwh49 found and phase=Bound (2.007976968s) Jun 11 00:12:24.426: INFO: Waiting up to 3m0s for PersistentVolume local-pvt5lxr to have phase Bound Jun 11 00:12:24.429: INFO: PersistentVolume local-pvt5lxr found and phase=Bound (2.338236ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 11 00:12:32.459: INFO: pod "pod-f183f1cc-feb0-4ade-89c0-566c3aeb921b" created on Node "node2" STEP: Writing in pod1 Jun 11 00:12:32.459: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5025 PodName:pod-f183f1cc-feb0-4ade-89c0-566c3aeb921b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:32.459: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:32.543: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:12:32.543: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5025 PodName:pod-f183f1cc-feb0-4ade-89c0-566c3aeb921b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:32.543: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:32.625: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 11 00:12:44.650: INFO: pod "pod-2c758d5c-4877-4035-bc16-f24de8f84960" created on Node "node2" Jun 11 00:12:44.650: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5025 PodName:pod-2c758d5c-4877-4035-bc16-f24de8f84960 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:44.650: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:44.970: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 11 00:12:44.970: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c19c181f-8c75-4609-a348-a54b2f23c72f > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5025 PodName:pod-2c758d5c-4877-4035-bc16-f24de8f84960 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:44.970: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:45.053: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c19c181f-8c75-4609-a348-a54b2f23c72f > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 11 00:12:45.053: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5025 PodName:pod-f183f1cc-feb0-4ade-89c0-566c3aeb921b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:45.053: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:45.131: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-c19c181f-8c75-4609-a348-a54b2f23c72f", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-f183f1cc-feb0-4ade-89c0-566c3aeb921b in namespace persistent-local-volumes-test-5025 STEP: Deleting pod2 STEP: Deleting pod pod-2c758d5c-4877-4035-bc16-f24de8f84960 in namespace persistent-local-volumes-test-5025 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:12:45.141: INFO: Deleting PersistentVolumeClaim "pvc-wwh49" Jun 11 00:12:45.145: INFO: Deleting PersistentVolume "local-pvt5lxr" STEP: Removing the test directory Jun 11 00:12:45.150: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-c19c181f-8c75-4609-a348-a54b2f23c72f && rm -r /tmp/local-volume-test-c19c181f-8c75-4609-a348-a54b2f23c72f] Namespace:persistent-local-volumes-test-5025 PodName:hostexec-node2-fgz88 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:45.150: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:45.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5025" for this suite. • [SLOW TEST:29.001 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":15,"skipped":447,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:26.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 11 00:12:26.412: INFO: The status of Pod test-hostpath-type-8x7vk is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:28.417: INFO: The status of Pod test-hostpath-type-8x7vk is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:30.418: INFO: The status of Pod 
test-hostpath-type-8x7vk is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:32.418: INFO: The status of Pod test-hostpath-type-8x7vk is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:34.418: INFO: The status of Pod test-hostpath-type-8x7vk is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:36.416: INFO: The status of Pod test-hostpath-type-8x7vk is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Jun 11 00:12:36.418: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-6787 PodName:test-hostpath-type-8x7vk ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:36.418: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350 [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:46.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-6787" for this suite. • [SLOW TEST:20.231 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset","total":-1,"completed":14,"skipped":637,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:46.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] using 1 containers and 2 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Jun 11 00:12:46.704: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:46.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-6648" for this suite. 
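The block-device case above comes down to a hostPath volume whose type field the kubelet checks against what actually exists at the path: mknod /mnt/test/ablkdev b 89 1 creates the block device, and a type of HostPathUnset disables the check so the mount succeeds. A sketch of the pod-spec fragment involved, with a hypothetical pod name and busybox standing in for the test image:

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newHostPathPod is a hypothetical helper: it mounts the host path
// /mnt/test/ablkdev into a pod and lets the caller pick the HostPathType
// the kubelet should validate, e.g. v1.HostPathUnset or v1.HostPathBlockDev.
func newHostPathPod(hostPathType v1.HostPathType) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-type-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "host-vol",
					MountPath: "/data",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "host-vol",
				VolumeSource: v1.VolumeSource{
					HostPath: &v1.HostPathVolumeSource{
						Path: "/mnt/test/ablkdev",
						// The kubelet rejects the mount if this type does not
						// match what is at Path; HostPathUnset skips the check.
						Type: &hostPathType,
					},
				},
			}},
		},
	}
}
```

A mismatched type (for example HostPathSocket against a block device) is what the failing HostPathType specs later in this log rely on.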
S [SKIPPING] [0.041 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 1 containers and 2 PDs [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:255 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:46.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Jun 11 00:12:46.743: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:46.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-5202" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv3 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:102 should be mountable for NFSv3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:38.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 STEP: Creating configMap with name projected-configmap-test-volume-24cca58a-538b-43ee-9067-70e5c706ba53 STEP: Creating a pod to test consume configMaps Jun 11 00:12:38.198: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764" in namespace "projected-6102" to be "Succeeded or Failed" Jun 11 00:12:38.201: INFO: Pod "pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.949026ms Jun 11 00:12:40.205: INFO: Pod "pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006976354s Jun 11 00:12:42.209: INFO: Pod "pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010737213s Jun 11 00:12:44.216: INFO: Pod "pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017582049s Jun 11 00:12:46.221: INFO: Pod "pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022254392s Jun 11 00:12:48.225: INFO: Pod "pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.026023722s STEP: Saw pod success Jun 11 00:12:48.225: INFO: Pod "pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764" satisfied condition "Succeeded or Failed" Jun 11 00:12:48.227: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764 container agnhost-container: STEP: delete the pod Jun 11 00:12:48.240: INFO: Waiting for pod pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764 to disappear Jun 11 00:12:48.242: INFO: Pod pod-projected-configmaps-78d15b3a-c4d6-43de-9be3-e1fdd3976764 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:48.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6102" for this suite. • [SLOW TEST:10.083 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":139,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:48.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should provision storage with non-default reclaim policy Retain /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:403 Jun 11 00:12:48.296: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:48.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-8239" for this suite. 
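The projected-configMap test above mounts the configMap through a projected volume and runs the pod as a non-root user with an fsGroup, so the projected files end up readable by that group. A sketch of those pod-spec pieces with hypothetical names (the real run appends random suffixes):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func newProjectedConfigMapPod() *v1.Pod {
	// Non-root UID plus an fsGroup, as the test name advertises; values are illustrative.
	uid, fsGroup := int64(1000), int64(1001)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup, // projected volume contents are group-owned by this GID
			},
			Containers: []v1.Container{{
				Name:    "agnhost-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected/data-1"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							ConfigMap: &v1.ConfigMapProjection{
								// Hypothetical name; the log uses a generated UUID suffix.
								LocalObjectReference: v1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
		},
	}
}
```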
S [SKIPPING] [0.030 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should provision storage with non-default reclaim policy Retain [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:403 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:404 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:25.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-b978f706-21ec-4bb6-9c13-d27802b5f84d" Jun 11 00:12:31.935: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b978f706-21ec-4bb6-9c13-d27802b5f84d && dd if=/dev/zero of=/tmp/local-volume-test-b978f706-21ec-4bb6-9c13-d27802b5f84d/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-b978f706-21ec-4bb6-9c13-d27802b5f84d/file] Namespace:persistent-local-volumes-test-6567 PodName:hostexec-node2-kxnn4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:31.935: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:32.098: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b978f706-21ec-4bb6-9c13-d27802b5f84d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6567 PodName:hostexec-node2-kxnn4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:32.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:12:32.278: INFO: Creating a PV followed by a PVC Jun 11 00:12:32.284: INFO: Waiting for PV local-pvz54zt to bind to PVC pvc-ggdzl Jun 11 00:12:32.284: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ggdzl] to have phase Bound Jun 11 00:12:32.287: INFO: PersistentVolumeClaim pvc-ggdzl found but phase is Pending instead of Bound. 
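The [Volume type: block] setup above backs the PV with a loop device: dd writes a 20 MiB file (4096-byte blocks x 5120), losetup -f attaches it, and a losetup | grep | awk round-trip recovers the device name for later teardown. In the run itself these commands go through a hostexec pod with nsenter; the following is only a rough local sketch of the same three steps with os/exec, assuming root and a hypothetical directory name:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a shell command line and returns its combined output.
func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	dir := "/tmp/local-volume-test-demo" // hypothetical; the e2e run uses a random UUID suffix

	// 1. Create a 20 MiB backing file (4096-byte blocks x 5120).
	if _, err := run(fmt.Sprintf("mkdir -p %s && dd if=/dev/zero of=%s/file bs=4096 count=5120", dir, dir)); err != nil {
		panic(err)
	}
	// 2. Attach it to the first free loop device.
	if _, err := run(fmt.Sprintf("losetup -f %s/file", dir)); err != nil {
		panic(err)
	}
	// 3. Recover the device the kernel picked (e.g. /dev/loop0) for the PV definition and teardown.
	dev, err := run(fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir))
	if err != nil {
		panic(err)
	}
	fmt.Println("backing loop device:", dev)
	// Teardown later mirrors the log: losetup -d <dev> && rm -r <dir>.
}
```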
Jun 11 00:12:34.292: INFO: PersistentVolumeClaim pvc-ggdzl found and phase=Bound (2.00739748s) Jun 11 00:12:34.292: INFO: Waiting up to 3m0s for PersistentVolume local-pvz54zt to have phase Bound Jun 11 00:12:34.295: INFO: PersistentVolume local-pvz54zt found and phase=Bound (2.763968ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:12:48.321: INFO: pod "pod-26613f8e-0555-4d05-90a9-8fd6f0a89f69" created on Node "node2" STEP: Writing in pod1 Jun 11 00:12:48.321: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6567 PodName:pod-26613f8e-0555-4d05-90a9-8fd6f0a89f69 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:48.321: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:48.427: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000158 seconds, 111.3KB/s", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 11 00:12:48.427: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-6567 PodName:pod-26613f8e-0555-4d05-90a9-8fd6f0a89f69 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:48.427: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:48.525: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-26613f8e-0555-4d05-90a9-8fd6f0a89f69 in namespace persistent-local-volumes-test-6567 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:12:48.529: INFO: Deleting PersistentVolumeClaim "pvc-ggdzl" Jun 11 00:12:48.533: INFO: Deleting PersistentVolume "local-pvz54zt" Jun 11 00:12:48.537: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b978f706-21ec-4bb6-9c13-d27802b5f84d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6567 PodName:hostexec-node2-kxnn4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:48.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down 
block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-b978f706-21ec-4bb6-9c13-d27802b5f84d/file Jun 11 00:12:48.624: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6567 PodName:hostexec-node2-kxnn4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:48.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-b978f706-21ec-4bb6-9c13-d27802b5f84d Jun 11 00:12:48.711: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b978f706-21ec-4bb6-9c13-d27802b5f84d] Namespace:persistent-local-volumes-test-6567 PodName:hostexec-node2-kxnn4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:48.711: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:48.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6567" for this suite. • [SLOW TEST:22.921 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:48.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 STEP: Building a driver namespace object, basename csi-mock-volumes-5230 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:11:48.368: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5230-1492/csi-attacher Jun 11 00:11:48.372: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5230 Jun 11 00:11:48.372: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5230 Jun 11 00:11:48.378: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5230 Jun 11 00:11:48.381: INFO: creating *v1.Role: csi-mock-volumes-5230-1492/external-attacher-cfg-csi-mock-volumes-5230 Jun 11 00:11:48.384: INFO: creating *v1.RoleBinding: csi-mock-volumes-5230-1492/csi-attacher-role-cfg Jun 11 00:11:48.388: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5230-1492/csi-provisioner Jun 11 00:11:48.391: INFO: creating *v1.ClusterRole: 
external-provisioner-runner-csi-mock-volumes-5230 Jun 11 00:11:48.391: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5230 Jun 11 00:11:48.394: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5230 Jun 11 00:11:48.396: INFO: creating *v1.Role: csi-mock-volumes-5230-1492/external-provisioner-cfg-csi-mock-volumes-5230 Jun 11 00:11:48.399: INFO: creating *v1.RoleBinding: csi-mock-volumes-5230-1492/csi-provisioner-role-cfg Jun 11 00:11:48.402: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5230-1492/csi-resizer Jun 11 00:11:48.405: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5230 Jun 11 00:11:48.405: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5230 Jun 11 00:11:48.407: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5230 Jun 11 00:11:48.410: INFO: creating *v1.Role: csi-mock-volumes-5230-1492/external-resizer-cfg-csi-mock-volumes-5230 Jun 11 00:11:48.413: INFO: creating *v1.RoleBinding: csi-mock-volumes-5230-1492/csi-resizer-role-cfg Jun 11 00:11:48.416: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5230-1492/csi-snapshotter Jun 11 00:11:48.419: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5230 Jun 11 00:11:48.419: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5230 Jun 11 00:11:48.422: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5230 Jun 11 00:11:48.425: INFO: creating *v1.Role: csi-mock-volumes-5230-1492/external-snapshotter-leaderelection-csi-mock-volumes-5230 Jun 11 00:11:48.427: INFO: creating *v1.RoleBinding: csi-mock-volumes-5230-1492/external-snapshotter-leaderelection Jun 11 00:11:48.430: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5230-1492/csi-mock Jun 11 00:11:48.433: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5230 Jun 11 00:11:48.435: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5230 Jun 11 00:11:48.438: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5230 Jun 11 00:11:48.441: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5230 Jun 11 00:11:48.444: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5230 Jun 11 00:11:48.447: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5230 Jun 11 00:11:48.450: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5230 Jun 11 00:11:48.452: INFO: creating *v1.StatefulSet: csi-mock-volumes-5230-1492/csi-mockplugin Jun 11 00:11:48.456: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5230 Jun 11 00:11:48.459: INFO: creating *v1.StatefulSet: csi-mock-volumes-5230-1492/csi-mockplugin-attacher Jun 11 00:11:48.463: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5230" Jun 11 00:11:48.465: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5230 to register on node node1 STEP: Creating pod Jun 11 00:12:02.984: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Jun 11 00:12:19.008: INFO: Deleting pod "pvc-volume-tester-kfhn4" in namespace "csi-mock-volumes-5230" Jun 11 00:12:19.014: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kfhn4" to be fully deleted STEP: Deleting pod pvc-volume-tester-kfhn4 Jun 11 00:12:23.020: INFO: Deleting pod 
"pvc-volume-tester-kfhn4" in namespace "csi-mock-volumes-5230" STEP: Deleting claim pvc-klpbx Jun 11 00:12:23.030: INFO: Waiting up to 2m0s for PersistentVolume pvc-e0166230-3d4d-41ba-a823-f3789952b89e to get deleted Jun 11 00:12:23.032: INFO: PersistentVolume pvc-e0166230-3d4d-41ba-a823-f3789952b89e found and phase=Bound (2.140278ms) Jun 11 00:12:25.038: INFO: PersistentVolume pvc-e0166230-3d4d-41ba-a823-f3789952b89e found and phase=Released (2.008610026s) Jun 11 00:12:27.044: INFO: PersistentVolume pvc-e0166230-3d4d-41ba-a823-f3789952b89e found and phase=Released (4.014719091s) Jun 11 00:12:29.050: INFO: PersistentVolume pvc-e0166230-3d4d-41ba-a823-f3789952b89e found and phase=Released (6.020609407s) Jun 11 00:12:31.053: INFO: PersistentVolume pvc-e0166230-3d4d-41ba-a823-f3789952b89e was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-5230 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5230 STEP: Waiting for namespaces [csi-mock-volumes-5230] to vanish STEP: uninstalling csi mock driver Jun 11 00:12:37.064: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5230-1492/csi-attacher Jun 11 00:12:37.068: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5230 Jun 11 00:12:37.072: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5230 Jun 11 00:12:37.075: INFO: deleting *v1.Role: csi-mock-volumes-5230-1492/external-attacher-cfg-csi-mock-volumes-5230 Jun 11 00:12:37.078: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5230-1492/csi-attacher-role-cfg Jun 11 00:12:37.081: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5230-1492/csi-provisioner Jun 11 00:12:37.085: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5230 Jun 11 00:12:37.089: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5230 Jun 11 00:12:37.092: INFO: deleting *v1.Role: csi-mock-volumes-5230-1492/external-provisioner-cfg-csi-mock-volumes-5230 Jun 11 00:12:37.095: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5230-1492/csi-provisioner-role-cfg Jun 11 00:12:37.098: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5230-1492/csi-resizer Jun 11 00:12:37.101: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5230 Jun 11 00:12:37.104: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5230 Jun 11 00:12:37.108: INFO: deleting *v1.Role: csi-mock-volumes-5230-1492/external-resizer-cfg-csi-mock-volumes-5230 Jun 11 00:12:37.113: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5230-1492/csi-resizer-role-cfg Jun 11 00:12:37.116: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5230-1492/csi-snapshotter Jun 11 00:12:37.119: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5230 Jun 11 00:12:37.123: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5230 Jun 11 00:12:37.126: INFO: deleting *v1.Role: csi-mock-volumes-5230-1492/external-snapshotter-leaderelection-csi-mock-volumes-5230 Jun 11 00:12:37.129: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5230-1492/external-snapshotter-leaderelection Jun 11 00:12:37.132: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5230-1492/csi-mock Jun 11 00:12:37.135: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5230 Jun 11 00:12:37.139: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5230 Jun 11 00:12:37.142: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5230 Jun 11 00:12:37.145: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5230 Jun 11 00:12:37.148: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5230 Jun 11 00:12:37.151: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5230 Jun 11 00:12:37.155: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5230 Jun 11 00:12:37.158: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5230-1492/csi-mockplugin Jun 11 00:12:37.162: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5230 Jun 11 00:12:37.165: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5230-1492/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5230-1492 STEP: Waiting for namespaces [csi-mock-volumes-5230-1492] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:49.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:60.880 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256 CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":10,"skipped":146,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:49.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:12:49.216: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:49.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7128" for this suite. 
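After the claim pvc-klpbx is deleted above, the test polls the dynamically provisioned PersistentVolume until it disappears (Bound, then Released, then removed). A minimal client-go sketch of that wait, using the kubeconfig path and PV name from the log rather than the framework's helper:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Dynamically provisioned PV name taken from the log above.
	pvName := "pvc-e0166230-3d4d-41ba-a823-f3789952b89e"

	// Poll every 2s for up to 2m0s, as the log does.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("PersistentVolume was removed")
			return true, nil // deletion finished
		}
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolume %s still present, phase=%s\n", pvName, pv.Status.Phase)
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}
```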
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:35.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 11 00:12:35.404: INFO: The status of Pod test-hostpath-type-54vw6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:37.408: INFO: The status of Pod test-hostpath-type-54vw6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:39.409: INFO: The status of Pod test-hostpath-type-54vw6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:41.409: INFO: The status of Pod test-hostpath-type-54vw6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:43.409: INFO: The status of Pod test-hostpath-type-54vw6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:45.408: INFO: The status of Pod test-hostpath-type-54vw6 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:47.410: INFO: The status of Pod test-hostpath-type-54vw6 is Running (Ready = true) STEP: running on node node2 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:49.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-7589" for this suite. 
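"Checking for HostPathType error event" above means looking for the kubelet's mount-failure event on the test pod, since the socket at the path cannot satisfy a declared type of HostPathCharDev. A sketch of such a check with client-go; the pod name is hypothetical and the matched message text is an assumption about the kubelet's wording:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Namespace from the log; the pod name here is a hypothetical placeholder.
	ns, podName := "host-path-type-socket-7589", "test-hostpath-type-demo"

	// List events attached to the pod and look for the hostPath type-check failure.
	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + podName,
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Assumed phrasing: the kubelet's message contains something like "hostPath type check failed".
		if e.Type == "Warning" && strings.Contains(e.Message, "hostPath type check failed") {
			fmt.Printf("found expected event: %s: %s\n", e.Reason, e.Message)
			return
		}
	}
	fmt.Println("no HostPathType error event yet")
}
```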
• [SLOW TEST:14.097 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev","total":-1,"completed":18,"skipped":564,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:57.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should be plumbed down when csiServiceAccountTokenEnabled=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 STEP: Building a driver namespace object, basename csi-mock-volumes-2768 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:11:57.520: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2768-2028/csi-attacher Jun 11 00:11:57.524: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2768 Jun 11 00:11:57.524: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2768 Jun 11 00:11:57.526: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2768 Jun 11 00:11:57.529: INFO: creating *v1.Role: csi-mock-volumes-2768-2028/external-attacher-cfg-csi-mock-volumes-2768 Jun 11 00:11:57.532: INFO: creating *v1.RoleBinding: csi-mock-volumes-2768-2028/csi-attacher-role-cfg Jun 11 00:11:57.535: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2768-2028/csi-provisioner Jun 11 00:11:57.538: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2768 Jun 11 00:11:57.538: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2768 Jun 11 00:11:57.541: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2768 Jun 11 00:11:57.544: INFO: creating *v1.Role: csi-mock-volumes-2768-2028/external-provisioner-cfg-csi-mock-volumes-2768 Jun 11 00:11:57.546: INFO: creating *v1.RoleBinding: csi-mock-volumes-2768-2028/csi-provisioner-role-cfg Jun 11 00:11:57.549: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2768-2028/csi-resizer Jun 11 00:11:57.552: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2768 Jun 11 00:11:57.552: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2768 Jun 11 00:11:57.555: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2768 Jun 11 00:11:57.557: INFO: creating *v1.Role: csi-mock-volumes-2768-2028/external-resizer-cfg-csi-mock-volumes-2768 Jun 11 00:11:57.560: INFO: creating *v1.RoleBinding: csi-mock-volumes-2768-2028/csi-resizer-role-cfg Jun 11 00:11:57.562: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2768-2028/csi-snapshotter Jun 11 00:11:57.564: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2768 Jun 11 00:11:57.565: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2768 Jun 11 
00:11:57.567: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2768 Jun 11 00:11:57.569: INFO: creating *v1.Role: csi-mock-volumes-2768-2028/external-snapshotter-leaderelection-csi-mock-volumes-2768 Jun 11 00:11:57.572: INFO: creating *v1.RoleBinding: csi-mock-volumes-2768-2028/external-snapshotter-leaderelection Jun 11 00:11:57.574: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2768-2028/csi-mock Jun 11 00:11:57.577: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2768 Jun 11 00:11:57.579: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2768 Jun 11 00:11:57.582: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2768 Jun 11 00:11:57.584: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2768 Jun 11 00:11:57.588: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2768 Jun 11 00:11:57.590: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2768 Jun 11 00:11:57.592: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2768 Jun 11 00:11:57.595: INFO: creating *v1.StatefulSet: csi-mock-volumes-2768-2028/csi-mockplugin Jun 11 00:11:57.599: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2768 Jun 11 00:11:57.602: INFO: creating *v1.StatefulSet: csi-mock-volumes-2768-2028/csi-mockplugin-attacher Jun 11 00:11:57.605: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2768" Jun 11 00:11:57.608: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2768 to register on node node1 STEP: Creating pod Jun 11 00:12:07.125: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:12:07.130: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-zkzmk] to have phase Bound Jun 11 00:12:07.132: INFO: PersistentVolumeClaim pvc-zkzmk found but phase is Pending instead of Bound. 
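"waiting for CSIDriver csi-mock-csi-mock-volumes-2768 to register on node node1" corresponds to the driver appearing in that node's CSINode object once the registrar sidecar has run. A small client-go sketch of that registration check (not the framework's helper), using the driver and node names from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "node1"
	driverName := "csi-mock-csi-mock-volumes-2768"

	// The registrar adds the driver to the node's CSINode object; poll until it shows up,
	// with the same 4m0s ceiling the log reports.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // CSINode may not exist yet; keep polling
		}
		for _, d := range csiNode.Spec.Drivers {
			if d.Name == driverName {
				fmt.Println("driver registered on", nodeName)
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}
```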
Jun 11 00:12:09.137: INFO: PersistentVolumeClaim pvc-zkzmk found and phase=Bound (2.006900059s) STEP: Deleting the previously created pod Jun 11 00:12:26.169: INFO: Deleting pod "pvc-volume-tester-9xj58" in namespace "csi-mock-volumes-2768" Jun 11 00:12:26.173: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9xj58" to be fully deleted STEP: Checking CSI driver logs Jun 11 00:12:32.192: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IkdON2dBYjVoZkMzajNod2ZMNDU1Q1dfWlBQeldoRWNDYmJuaTA1cl91XzQifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU0OTA2OTQxLCJpYXQiOjE2NTQ5MDYzNDEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTI3NjgiLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLTl4ajU4IiwidWlkIjoiMmI4YjBiYjAtN2QxNy00Mzg3LWE3NzktYzI1YmE1MmI2YzgwIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiNmRkOWNlNWItMmMyZC00ZWJiLWI4NDQtYWMzMGI0ZDM1OTg2In19LCJuYmYiOjE2NTQ5MDYzNDEsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTI3Njg6ZGVmYXVsdCJ9.O48Rezw2OtNEFdK-TJK6xqej7HEDvEv0DHvXGq7_EcLaErEckBhYK7ap0jNxyrofgmjhndYuGgNQ9bBWLjGw8hqcFJicsLJH8orrQdmO1496FhrCcQZ7ZGdICr-pJLbth-QsFMxHv5gk0vTMxwlAAvZVlFihFwKU3W0nVyclafAT9ORI_exV7c8VTEdJzSEDrVF2D9dUnSsi3EKjBtgGY8MnZCg-fvqvoJsmDfwLJ1rD67yBxm7N1Ndw17-mpxMVEJ7qzwuT0oJl2EbwbH6-rk9tUPfutHSuaLwiMqz_uGjJmkvuGIo1EvahIStDJjcQ7zdP1TW5umLUmNTqmqYXgw","expirationTimestamp":"2022-06-11T00:22:21Z"}} Jun 11 00:12:32.192: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2b8b0bb0-7d17-4387-a779-c25ba52b6c80/volumes/kubernetes.io~csi/pvc-2c737538-8e15-4924-9c68-e876b7a8443e/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-9xj58 Jun 11 00:12:32.193: INFO: Deleting pod "pvc-volume-tester-9xj58" in namespace "csi-mock-volumes-2768" STEP: Deleting claim pvc-zkzmk Jun 11 00:12:32.202: INFO: Waiting up to 2m0s for PersistentVolume pvc-2c737538-8e15-4924-9c68-e876b7a8443e to get deleted Jun 11 00:12:32.204: INFO: PersistentVolume pvc-2c737538-8e15-4924-9c68-e876b7a8443e found and phase=Bound (2.080433ms) Jun 11 00:12:34.207: INFO: PersistentVolume pvc-2c737538-8e15-4924-9c68-e876b7a8443e found and phase=Released (2.005919738s) Jun 11 00:12:36.211: INFO: PersistentVolume pvc-2c737538-8e15-4924-9c68-e876b7a8443e found and phase=Released (4.009053745s) Jun 11 00:12:38.213: INFO: PersistentVolume pvc-2c737538-8e15-4924-9c68-e876b7a8443e found and phase=Released (6.011772704s) Jun 11 00:12:40.220: INFO: PersistentVolume pvc-2c737538-8e15-4924-9c68-e876b7a8443e was removed STEP: Deleting storageclass csi-mock-volumes-2768-scc2mwv STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2768 STEP: Waiting for namespaces [csi-mock-volumes-2768] to vanish STEP: uninstalling csi mock driver Jun 11 00:12:46.232: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2768-2028/csi-attacher Jun 11 00:12:46.237: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2768 Jun 11 00:12:46.240: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2768 Jun 11 00:12:46.244: INFO: deleting *v1.Role: csi-mock-volumes-2768-2028/external-attacher-cfg-csi-mock-volumes-2768 Jun 11 00:12:46.249: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-2768-2028/csi-attacher-role-cfg Jun 11 00:12:46.252: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2768-2028/csi-provisioner Jun 11 00:12:46.256: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2768 Jun 11 00:12:46.261: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2768 Jun 11 00:12:46.265: INFO: deleting *v1.Role: csi-mock-volumes-2768-2028/external-provisioner-cfg-csi-mock-volumes-2768 Jun 11 00:12:46.271: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2768-2028/csi-provisioner-role-cfg Jun 11 00:12:46.278: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2768-2028/csi-resizer Jun 11 00:12:46.282: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2768 Jun 11 00:12:46.288: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2768 Jun 11 00:12:46.291: INFO: deleting *v1.Role: csi-mock-volumes-2768-2028/external-resizer-cfg-csi-mock-volumes-2768 Jun 11 00:12:46.295: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2768-2028/csi-resizer-role-cfg Jun 11 00:12:46.298: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2768-2028/csi-snapshotter Jun 11 00:12:46.301: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2768 Jun 11 00:12:46.304: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2768 Jun 11 00:12:46.307: INFO: deleting *v1.Role: csi-mock-volumes-2768-2028/external-snapshotter-leaderelection-csi-mock-volumes-2768 Jun 11 00:12:46.311: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2768-2028/external-snapshotter-leaderelection Jun 11 00:12:46.314: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2768-2028/csi-mock Jun 11 00:12:46.317: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2768 Jun 11 00:12:46.321: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2768 Jun 11 00:12:46.324: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2768 Jun 11 00:12:46.328: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2768 Jun 11 00:12:46.331: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2768 Jun 11 00:12:46.334: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2768 Jun 11 00:12:46.337: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2768 Jun 11 00:12:46.342: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2768-2028/csi-mockplugin Jun 11 00:12:46.346: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2768 Jun 11 00:12:46.350: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2768-2028/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-2768-2028 STEP: Waiting for namespaces [csi-mock-volumes-2768-2028] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:58.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:60.909 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1496 token should be plumbed down when 
csiServiceAccountTokenEnabled=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":5,"skipped":237,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:58.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:12:58.411: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:58.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5948" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:45.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-a2069177-1883-4053-b9b1-38eb8bac473c" Jun 11 00:12:53.371: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a2069177-1883-4053-b9b1-38eb8bac473c && dd if=/dev/zero of=/tmp/local-volume-test-a2069177-1883-4053-b9b1-38eb8bac473c/file bs=4096 count=5120 && losetup -f 
/tmp/local-volume-test-a2069177-1883-4053-b9b1-38eb8bac473c/file] Namespace:persistent-local-volumes-test-618 PodName:hostexec-node2-h76tv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:53.371: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:53.685: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a2069177-1883-4053-b9b1-38eb8bac473c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-618 PodName:hostexec-node2-h76tv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:53.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:12:54.019: INFO: Creating a PV followed by a PVC Jun 11 00:12:54.024: INFO: Waiting for PV local-pvxrn74 to bind to PVC pvc-kwzdp Jun 11 00:12:54.024: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-kwzdp] to have phase Bound Jun 11 00:12:54.027: INFO: PersistentVolumeClaim pvc-kwzdp found but phase is Pending instead of Bound. Jun 11 00:12:56.030: INFO: PersistentVolumeClaim pvc-kwzdp found but phase is Pending instead of Bound. Jun 11 00:12:58.033: INFO: PersistentVolumeClaim pvc-kwzdp found and phase=Bound (4.00854809s) Jun 11 00:12:58.033: INFO: Waiting up to 3m0s for PersistentVolume local-pvxrn74 to have phase Bound Jun 11 00:12:58.035: INFO: PersistentVolume local-pvxrn74 found and phase=Bound (2.014675ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Jun 11 00:12:58.039: INFO: We don't set fsGroup on block device, skipped. 
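The skip above ("We don't set fsGroup on block device") follows from how block-type volumes are consumed: with volumeMode Block the container receives a raw device through volumeDevices rather than a mounted filesystem, so there is no file tree for fsGroup ownership to act on (which is also why the earlier block test wrote to /mnt/volume1 with dd instead of creating files). A sketch of the claim and container fragments involved, assuming the v1.21-era core/v1 types used by this run and hypothetical names:

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func blockClaimAndContainer() (*v1.PersistentVolumeClaim, v1.Container) {
	blockMode := v1.PersistentVolumeBlock
	storageClass := "local-storage" // hypothetical; the e2e PVs use a test-scoped class

	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-block-demo"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			VolumeMode:       &blockMode, // raw block device, no filesystem mount
			StorageClassName: &storageClass,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("20Mi")},
			},
		},
	}

	container := v1.Container{
		Name:    "write-pod",
		Image:   "busybox",
		Command: []string{"sleep", "3600"},
		// volumeDevices exposes the device node directly (e.g. at /mnt/volume1)
		// instead of volumeMounts, so the fsGroup chown/chmod pass never happens.
		VolumeDevices: []v1.VolumeDevice{{Name: "block-vol", DevicePath: "/mnt/volume1"}},
	}
	return pvc, container
}
```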
[AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:12:58.041: INFO: Deleting PersistentVolumeClaim "pvc-kwzdp" Jun 11 00:12:58.044: INFO: Deleting PersistentVolume "local-pvxrn74" Jun 11 00:12:58.048: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a2069177-1883-4053-b9b1-38eb8bac473c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-618 PodName:hostexec-node2-h76tv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:58.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-a2069177-1883-4053-b9b1-38eb8bac473c/file Jun 11 00:12:58.330: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-618 PodName:hostexec-node2-h76tv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:58.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-a2069177-1883-4053-b9b1-38eb8bac473c Jun 11 00:12:58.422: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a2069177-1883-4053-b9b1-38eb8bac473c] Namespace:persistent-local-volumes-test-618 PodName:hostexec-node2-h76tv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:58.423: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:58.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-618" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [13.204 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:58.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 Jun 11 00:12:58.525: INFO: Only supported for providers [aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:58.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-9844" for this suite. S [SKIPPING] [0.032 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Invalid AWS KMS key /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:824 should report an error and create no PV [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 Only supported for providers [aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:58.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 11 00:12:58.575: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:12:58.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-4013" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:49.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 11 00:12:49.491: INFO: The status of Pod test-hostpath-type-wlqvb is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:51.495: INFO: The status of Pod test-hostpath-type-wlqvb is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:53.494: INFO: The status of Pod test-hostpath-type-wlqvb is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:55.496: INFO: The status of Pod test-hostpath-type-wlqvb is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:57.498: INFO: The status of Pod test-hostpath-type-wlqvb is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:59.499: INFO: The status of Pod test-hostpath-type-wlqvb is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 11 00:12:59.501: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-1843 PodName:test-hostpath-type-wlqvb ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:12:59.501: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:01.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-1843" for this suite. 
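The HostPathType Character Device spec above places a character device at /mnt/test/achardev on the node with mknod, then expects the kubelet to refuse a pod whose hostPath volume declares a stricter type (HostPathFile) for that path; the Directory and File specs that follow exercise the same check with other HostPathType values. A minimal sketch of such a pod spec follows, assuming an illustrative pod name and image; the real pods are generated by host_path_type.go, and the path and node come from the log.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Declaring HostPathFile for a path that is actually a character device
	// makes the kubelet emit a HostPathType mount error event instead of
	// starting the container, which is what the spec waits for.
	wantType := v1.HostPathFile
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-type-check"},
		Spec: v1.PodSpec{
			NodeName:      "node2", // pin to the node where mknod ran
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "host-path-testing",
				Image:   "busybox:1.34", // illustrative image
				Command: []string{"sleep", "3600"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "hostpath",
					MountPath: "/mnt/achardev",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "hostpath",
				VolumeSource: v1.VolumeSource{
					HostPath: &v1.HostPathVolumeSource{
						Path: "/mnt/test/achardev",
						Type: &wantType,
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}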
• [SLOW TEST:12.259 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290 ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:48.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 11 00:12:48.351: INFO: The status of Pod test-hostpath-type-7tl5w is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:50.354: INFO: The status of Pod test-hostpath-type-7tl5w is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:52.357: INFO: The status of Pod test-hostpath-type-7tl5w is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:54.357: INFO: The status of Pod test-hostpath-type-7tl5w is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:56.355: INFO: The status of Pod test-hostpath-type-7tl5w is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:02.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-9562" for this suite. 
• [SLOW TEST:14.101 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:49.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 11 00:12:49.297: INFO: The status of Pod test-hostpath-type-lbdsd is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:51.300: INFO: The status of Pod test-hostpath-type-lbdsd is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:53.300: INFO: The status of Pod test-hostpath-type-lbdsd is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:55.303: INFO: The status of Pod test-hostpath-type-lbdsd is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:12:57.300: INFO: The status of Pod test-hostpath-type-lbdsd is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:03.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-4988" for this suite. 
• [SLOW TEST:14.096 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory","total":-1,"completed":11,"skipped":162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:36.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:12:42.684: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-47d313c1-af9d-4650-9f05-67cda6f689c9-backend && mount --bind /tmp/local-volume-test-47d313c1-af9d-4650-9f05-67cda6f689c9-backend /tmp/local-volume-test-47d313c1-af9d-4650-9f05-67cda6f689c9-backend && ln -s /tmp/local-volume-test-47d313c1-af9d-4650-9f05-67cda6f689c9-backend /tmp/local-volume-test-47d313c1-af9d-4650-9f05-67cda6f689c9] Namespace:persistent-local-volumes-test-948 PodName:hostexec-node2-fx4x7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:42.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:12:42.978: INFO: Creating a PV followed by a PVC Jun 11 00:12:42.985: INFO: Waiting for PV local-pvck484 to bind to PVC pvc-rgv7v Jun 11 00:12:42.985: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-rgv7v] to have phase Bound Jun 11 00:12:42.987: INFO: PersistentVolumeClaim pvc-rgv7v found but phase is Pending instead of Bound. Jun 11 00:12:44.992: INFO: PersistentVolumeClaim pvc-rgv7v found but phase is Pending instead of Bound. Jun 11 00:12:46.996: INFO: PersistentVolumeClaim pvc-rgv7v found but phase is Pending instead of Bound. Jun 11 00:12:48.998: INFO: PersistentVolumeClaim pvc-rgv7v found but phase is Pending instead of Bound. Jun 11 00:12:51.001: INFO: PersistentVolumeClaim pvc-rgv7v found but phase is Pending instead of Bound. Jun 11 00:12:53.005: INFO: PersistentVolumeClaim pvc-rgv7v found but phase is Pending instead of Bound. Jun 11 00:12:55.008: INFO: PersistentVolumeClaim pvc-rgv7v found but phase is Pending instead of Bound. 
Jun 11 00:12:57.013: INFO: PersistentVolumeClaim pvc-rgv7v found and phase=Bound (14.028182103s) Jun 11 00:12:57.013: INFO: Waiting up to 3m0s for PersistentVolume local-pvck484 to have phase Bound Jun 11 00:12:57.016: INFO: PersistentVolume local-pvck484 found and phase=Bound (2.640843ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:13:11.039: INFO: pod "pod-77ecfd50-8307-4b35-9a87-6fa3535b0d58" created on Node "node2" STEP: Writing in pod1 Jun 11 00:13:11.039: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-948 PodName:pod-77ecfd50-8307-4b35-9a87-6fa3535b0d58 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:11.039: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:11.151: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 11 00:13:11.151: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-948 PodName:pod-77ecfd50-8307-4b35-9a87-6fa3535b0d58 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:11.151: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:11.234: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-77ecfd50-8307-4b35-9a87-6fa3535b0d58 in namespace persistent-local-volumes-test-948 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:13:11.239: INFO: Deleting PersistentVolumeClaim "pvc-rgv7v" Jun 11 00:13:11.243: INFO: Deleting PersistentVolume "local-pvck484" STEP: Removing the test directory Jun 11 00:13:11.247: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-47d313c1-af9d-4650-9f05-67cda6f689c9 && umount /tmp/local-volume-test-47d313c1-af9d-4650-9f05-67cda6f689c9-backend && rm -r /tmp/local-volume-test-47d313c1-af9d-4650-9f05-67cda6f689c9-backend] Namespace:persistent-local-volumes-test-948 PodName:hostexec-node2-fx4x7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:13:11.247: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:11.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-948" for this suite. 
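The prebound-PVC spec above verifies the volume by exec'ing a write (echo test-file-content > /mnt/volume1/test-file) and a read (cat) inside pod1; the ExecWithOptions lines in the log correspond to calls against the pod's exec subresource. Below is a minimal client-go sketch of that read round-trip, assuming a reachable kubeconfig; it illustrates the mechanism and is not the framework's own ExecWithOptions helper, though the namespace, pod, container, and command are taken from the log.

package main

import (
	"bytes"
	"fmt"
	"log"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// POST to the pod's exec subresource, the call that ExecWithOptions logs.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("persistent-local-volumes-test-948").
		Name("pod-77ecfd50-8307-4b35-9a87-6fa3535b0d58").
		SubResource("exec").
		VersionedParams(&v1.PodExecOptions{
			Container: "write-pod",
			Command:   []string{"/bin/sh", "-c", "cat /mnt/volume1/test-file"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		log.Fatal(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		log.Fatal(err)
	}
	// Expected output for the passing run above: "test-file-content".
	fmt.Printf("out=%q stderr=%q\n", stdout.String(), stderr.String())
}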
• [SLOW TEST:34.804 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":13,"skipped":659,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:58.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 11 00:12:58.617: INFO: The status of Pod test-hostpath-type-lb757 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:00.622: INFO: The status of Pod test-hostpath-type-lb757 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:02.622: INFO: The status of Pod test-hostpath-type-lb757 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:04.621: INFO: The status of Pod test-hostpath-type-lb757 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:06.622: INFO: The status of Pod test-hostpath-type-lb757 is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:12.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-85" for this suite. 
• [SLOW TEST:14.100 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket","total":-1,"completed":6,"skipped":305,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:03.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 STEP: Creating a pod to test downward API volume plugin Jun 11 00:13:03.581: INFO: Waiting up to 5m0s for pod "metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f" in namespace "downward-api-1026" to be "Succeeded or Failed" Jun 11 00:13:03.585: INFO: Pod "metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.282968ms Jun 11 00:13:05.590: INFO: Pod "metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008428472s Jun 11 00:13:07.593: INFO: Pod "metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011718011s Jun 11 00:13:09.598: INFO: Pod "metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016625616s Jun 11 00:13:11.603: INFO: Pod "metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022107353s Jun 11 00:13:13.607: INFO: Pod "metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.026048692s STEP: Saw pod success Jun 11 00:13:13.607: INFO: Pod "metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f" satisfied condition "Succeeded or Failed" Jun 11 00:13:13.610: INFO: Trying to get logs from node node2 pod metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f container client-container: STEP: delete the pod Jun 11 00:13:13.628: INFO: Waiting for pod metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f to disappear Jun 11 00:13:13.630: INFO: Pod metadata-volume-47edd3dd-a7db-408e-9dde-e2ffd3435c1f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:13.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1026" for this suite. 
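The Downward API spec above creates a pod whose volume exposes metadata.name as a file, runs it as a non-root user under an fsGroup with a custom defaultMode, and waits for the pod to reach "Succeeded or Failed". A minimal sketch of such a pod follows; the UID, GID, mode, image, and names are illustrative, since the exact values used by downwardapi_volume.go are not visible in the log.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, fsGroup := int64(1000), int64(2000) // illustrative non-root IDs
	mode := int32(0440)                      // illustrative defaultMode

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "metadata-volume-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.34",
				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						DefaultMode: &mode,
						Items: []v1.DownwardAPIVolumeFile{{
							// The pod name is projected into the volume as a file.
							Path:     "podname",
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}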
• [SLOW TEST:10.090 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":12,"skipped":257,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:11:32.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 STEP: Building a driver namespace object, basename csi-mock-volumes-1973 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 11 00:11:32.354: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1973-9220/csi-attacher Jun 11 00:11:32.357: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1973 Jun 11 00:11:32.357: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1973 Jun 11 00:11:32.360: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1973 Jun 11 00:11:32.363: INFO: creating *v1.Role: csi-mock-volumes-1973-9220/external-attacher-cfg-csi-mock-volumes-1973 Jun 11 00:11:32.366: INFO: creating *v1.RoleBinding: csi-mock-volumes-1973-9220/csi-attacher-role-cfg Jun 11 00:11:32.369: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1973-9220/csi-provisioner Jun 11 00:11:32.372: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1973 Jun 11 00:11:32.372: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1973 Jun 11 00:11:32.375: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1973 Jun 11 00:11:32.379: INFO: creating *v1.Role: csi-mock-volumes-1973-9220/external-provisioner-cfg-csi-mock-volumes-1973 Jun 11 00:11:32.381: INFO: creating *v1.RoleBinding: csi-mock-volumes-1973-9220/csi-provisioner-role-cfg Jun 11 00:11:32.384: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1973-9220/csi-resizer Jun 11 00:11:32.387: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1973 Jun 11 00:11:32.387: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1973 Jun 11 00:11:32.389: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1973 Jun 11 00:11:32.392: INFO: creating *v1.Role: csi-mock-volumes-1973-9220/external-resizer-cfg-csi-mock-volumes-1973 Jun 11 00:11:32.395: INFO: creating *v1.RoleBinding: csi-mock-volumes-1973-9220/csi-resizer-role-cfg Jun 11 00:11:32.397: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1973-9220/csi-snapshotter Jun 11 00:11:32.399: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1973 Jun 11 00:11:32.399: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1973 Jun 11 
00:11:32.402: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1973 Jun 11 00:11:32.405: INFO: creating *v1.Role: csi-mock-volumes-1973-9220/external-snapshotter-leaderelection-csi-mock-volumes-1973 Jun 11 00:11:32.408: INFO: creating *v1.RoleBinding: csi-mock-volumes-1973-9220/external-snapshotter-leaderelection Jun 11 00:11:32.411: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1973-9220/csi-mock Jun 11 00:11:32.413: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1973 Jun 11 00:11:32.416: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1973 Jun 11 00:11:32.419: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1973 Jun 11 00:11:32.421: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1973 Jun 11 00:11:32.424: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1973 Jun 11 00:11:32.427: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1973 Jun 11 00:11:32.430: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1973 Jun 11 00:11:32.432: INFO: creating *v1.StatefulSet: csi-mock-volumes-1973-9220/csi-mockplugin Jun 11 00:11:32.437: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1973 Jun 11 00:11:32.440: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1973" Jun 11 00:11:32.442: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1973 to register on node node1 I0611 00:11:48.560233 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1973","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:11:48.654735 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0611 00:11:48.656516 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1973","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:11:48.697369 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null} I0611 00:11:48.700583 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0611 00:11:48.770722 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-1973","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null} STEP: Creating pod Jun 11 00:11:58.843: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0611 00:11:58.886120 30 csi.go:432] 
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0611 00:12:01.660069 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null} I0611 00:12:04.587990 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:12:04.590: INFO: >>> kubeConfig: /root/.kube/config I0611 00:12:04.676812 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378","storage.kubernetes.io/csiProvisionerIdentity":"1654906308703-8081-csi-mock-csi-mock-volumes-1973"}},"Response":{},"Error":"","FullError":null} I0611 00:12:05.622033 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:12:05.624: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:05.713: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:05.805: INFO: >>> kubeConfig: /root/.kube/config I0611 00:12:05.895435 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378/globalmount","target_path":"/var/lib/kubelet/pods/3d46a304-cfba-45f9-a90f-6a131c511ba9/volumes/kubernetes.io~csi/pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378","storage.kubernetes.io/csiProvisionerIdentity":"1654906308703-8081-csi-mock-csi-mock-volumes-1973"}},"Response":{},"Error":"","FullError":null} Jun 11 00:12:12.867: INFO: Deleting pod "pvc-volume-tester-hr856" in namespace "csi-mock-volumes-1973" Jun 11 00:12:12.871: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hr856" to be fully deleted Jun 11 00:12:18.917: INFO: 
>>> kubeConfig: /root/.kube/config I0611 00:12:19.019044 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/3d46a304-cfba-45f9-a90f-6a131c511ba9/volumes/kubernetes.io~csi/pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378/mount"},"Response":{},"Error":"","FullError":null} I0611 00:12:19.119262 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:12:19.121004 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378/globalmount"},"Response":{},"Error":"","FullError":null} I0611 00:12:30.909135 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Jun 11 00:12:31.882: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5tpd5", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1973", SelfLink:"", UID:"04e678cc-82b8-4e0a-bbbd-7fae4f293378", ResourceVersion:"101239", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790503118, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c977b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c977d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00366ba00), VolumeMode:(*v1.PersistentVolumeMode)(0xc00366ba10), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:12:31.882: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5tpd5", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1973", SelfLink:"", UID:"04e678cc-82b8-4e0a-bbbd-7fae4f293378", ResourceVersion:"101242", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790503118, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", 
Time:(*v1.Time)(0xc00458fc08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00458fc20)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00458fc38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00458fc50)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003a4c110), VolumeMode:(*v1.PersistentVolumeMode)(0xc003a4c120), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:12:31.882: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5tpd5", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1973", SelfLink:"", UID:"04e678cc-82b8-4e0a-bbbd-7fae4f293378", ResourceVersion:"101243", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790503118, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1973", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001822558), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001822570)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001822588), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0018225a0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0018225b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0018225d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00358e310), VolumeMode:(*v1.PersistentVolumeMode)(0xc00358e330), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:12:31.882: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5tpd5", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1973", SelfLink:"", UID:"04e678cc-82b8-4e0a-bbbd-7fae4f293378", ResourceVersion:"101247", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790503118, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1973"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0018225e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001822600)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001822618), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001822630)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001822648), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001822660)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00358e360), VolumeMode:(*v1.PersistentVolumeMode)(0xc00358e370), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:12:31.882: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5tpd5", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1973", SelfLink:"", UID:"04e678cc-82b8-4e0a-bbbd-7fae4f293378", ResourceVersion:"101305", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790503118, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1973", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003480840), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003480858)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003480870), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003480888)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034808a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034808b8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001083180), VolumeMode:(*v1.PersistentVolumeMode)(0xc001083200), DataSource:(*v1.TypedLocalObjectReference)(nil)}, 
Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:12:31.882: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5tpd5", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1973", SelfLink:"", UID:"04e678cc-82b8-4e0a-bbbd-7fae4f293378", ResourceVersion:"101311", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790503118, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1973", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034808e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003480900)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003480918), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003480930)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003480948), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003480960)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378", StorageClassName:(*string)(0xc001083290), VolumeMode:(*v1.PersistentVolumeMode)(0xc0010832d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:12:31.883: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5tpd5", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1973", SelfLink:"", UID:"04e678cc-82b8-4e0a-bbbd-7fae4f293378", ResourceVersion:"101312", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790503118, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1973", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003480990), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034809a8)}, 
v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034809c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034809d8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034809f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003480a08)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378", StorageClassName:(*string)(0xc0010833e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001083400), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:12:31.883: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5tpd5", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1973", SelfLink:"", UID:"04e678cc-82b8-4e0a-bbbd-7fae4f293378", ResourceVersion:"102093", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790503118, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc003480a38), DeletionGracePeriodSeconds:(*int64)(0xc003132d58), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1973", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003480a50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003480a68)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003480a80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003480a98)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003480ab0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003480ac8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378", StorageClassName:(*string)(0xc001083440), VolumeMode:(*v1.PersistentVolumeMode)(0xc001083460), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 11 00:12:31.883: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5tpd5", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1973", SelfLink:"", UID:"04e678cc-82b8-4e0a-bbbd-7fae4f293378", ResourceVersion:"102094", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790503118, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc003cb4168), DeletionGracePeriodSeconds:(*int64)(0xc0027f1b58), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1973", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003cb4180), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003cb4198)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003cb41b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003cb41c8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003cb41e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003cb41f8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-04e678cc-82b8-4e0a-bbbd-7fae4f293378", StorageClassName:(*string)(0xc003a4d4f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003a4d500), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-hr856 Jun 11 00:12:31.883: INFO: Deleting pod "pvc-volume-tester-hr856" in namespace "csi-mock-volumes-1973" STEP: Deleting claim pvc-5tpd5 STEP: Deleting storageclass csi-mock-volumes-1973-scx6m24 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1973 STEP: Waiting for namespaces [csi-mock-volumes-1973] to vanish STEP: uninstalling csi mock driver Jun 11 00:12:37.922: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1973-9220/csi-attacher Jun 11 00:12:37.926: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1973 Jun 11 00:12:37.930: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1973 Jun 11 00:12:37.934: INFO: deleting *v1.Role: csi-mock-volumes-1973-9220/external-attacher-cfg-csi-mock-volumes-1973 Jun 11 00:12:37.937: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-1973-9220/csi-attacher-role-cfg Jun 11 00:12:37.941: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1973-9220/csi-provisioner Jun 11 00:12:37.945: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1973 Jun 11 00:12:37.948: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1973 Jun 11 00:12:37.951: INFO: deleting *v1.Role: csi-mock-volumes-1973-9220/external-provisioner-cfg-csi-mock-volumes-1973 Jun 11 00:12:37.954: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1973-9220/csi-provisioner-role-cfg Jun 11 00:12:37.958: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1973-9220/csi-resizer Jun 11 00:12:37.961: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1973 Jun 11 00:12:37.965: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1973 Jun 11 00:12:37.968: INFO: deleting *v1.Role: csi-mock-volumes-1973-9220/external-resizer-cfg-csi-mock-volumes-1973 Jun 11 00:12:37.972: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1973-9220/csi-resizer-role-cfg Jun 11 00:12:37.975: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1973-9220/csi-snapshotter Jun 11 00:12:37.978: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1973 Jun 11 00:12:37.982: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1973 Jun 11 00:12:37.985: INFO: deleting *v1.Role: csi-mock-volumes-1973-9220/external-snapshotter-leaderelection-csi-mock-volumes-1973 Jun 11 00:12:37.988: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1973-9220/external-snapshotter-leaderelection Jun 11 00:12:37.991: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1973-9220/csi-mock Jun 11 00:12:37.994: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1973 Jun 11 00:12:37.998: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1973 Jun 11 00:12:38.001: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1973 Jun 11 00:12:38.004: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1973 Jun 11 00:12:38.008: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1973 Jun 11 00:12:38.011: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1973 Jun 11 00:12:38.014: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1973 Jun 11 00:12:38.017: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1973-9220/csi-mockplugin Jun 11 00:12:38.021: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1973 STEP: deleting the driver namespace: csi-mock-volumes-1973-9220 STEP: Waiting for namespaces [csi-mock-volumes-1973-9220] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:22.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:109.750 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1022 exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 
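The CSI mock spec above exercises late binding with topology: the mock driver first answers CreateVolume with ResourceExhausted ("fake error" at 00:11:58) and succeeds on retry, while the PVC stays Pending until the scheduler selects node1 and stamps volume.kubernetes.io/selected-node, as the PVC event stream shows. Below is a minimal sketch of a StorageClass/PVC pair with that binding mode; the object names are illustrative, while the provisioner name and the 1Gi request match the log.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Late binding: provisioning waits for a consuming pod, so the driver's
	// topology (io.kubernetes.storage.mock/node in the log) can be honoured.
	lateBinding := storagev1.VolumeBindingWaitForFirstConsumer
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "csi-mock-sc-demo"},
		Provisioner:       "csi-mock-csi-mock-volumes-1973",
		VolumeBindingMode: &lateBinding,
	}

	scName := sc.Name
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-demo", Namespace: "csi-mock-volumes-1973"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &scName,
			Resources: v1.ResourceRequirements{
				// Matches required_bytes 1073741824 in the CreateVolume request.
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}
	fmt.Printf("%s -> %s\n", pvc.Name, sc.Name)
}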
------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":8,"skipped":150,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:46.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-3c18e8bf-9705-4fa8-9b8f-fcb2d6dfc5e8" Jun 11 00:12:54.846: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3c18e8bf-9705-4fa8-9b8f-fcb2d6dfc5e8 && dd if=/dev/zero of=/tmp/local-volume-test-3c18e8bf-9705-4fa8-9b8f-fcb2d6dfc5e8/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-3c18e8bf-9705-4fa8-9b8f-fcb2d6dfc5e8/file] Namespace:persistent-local-volumes-test-9375 PodName:hostexec-node2-7gzhb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:54.846: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:12:54.958: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3c18e8bf-9705-4fa8-9b8f-fcb2d6dfc5e8/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9375 PodName:hostexec-node2-7gzhb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:12:54.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:12:55.052: INFO: Creating a PV followed by a PVC Jun 11 00:12:55.058: INFO: Waiting for PV local-pvsjxbp to bind to PVC pvc-dws62 Jun 11 00:12:55.058: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dws62] to have phase Bound Jun 11 00:12:55.061: INFO: PersistentVolumeClaim pvc-dws62 found but phase is Pending instead of Bound. 
Jun 11 00:12:57.065: INFO: PersistentVolumeClaim pvc-dws62 found and phase=Bound (2.006479892s) Jun 11 00:12:57.065: INFO: Waiting up to 3m0s for PersistentVolume local-pvsjxbp to have phase Bound Jun 11 00:12:57.067: INFO: PersistentVolume local-pvsjxbp found and phase=Bound (1.839546ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:13:09.094: INFO: pod "pod-fef30c7c-c7bb-40ca-b8b4-20605bbac3d1" created on Node "node2" STEP: Writing in pod1 Jun 11 00:13:09.094: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9375 PodName:pod-fef30c7c-c7bb-40ca-b8b4-20605bbac3d1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:09.094: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:09.431: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000176 seconds, 99.9KB/s", err: Jun 11 00:13:09.431: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-9375 PodName:pod-fef30c7c-c7bb-40ca-b8b4-20605bbac3d1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:09.431: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:09.782: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-fef30c7c-c7bb-40ca-b8b4-20605bbac3d1 in namespace persistent-local-volumes-test-9375 STEP: Creating pod2 STEP: Creating a pod Jun 11 00:13:21.811: INFO: pod "pod-1b9e4d59-a5ae-4496-8c0a-9a3b48ad606c" created on Node "node2" STEP: Reading in pod2 Jun 11 00:13:21.811: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-9375 PodName:pod-1b9e4d59-a5ae-4496-8c0a-9a3b48ad606c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:21.811: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:21.902: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-1b9e4d59-a5ae-4496-8c0a-9a3b48ad606c in namespace persistent-local-volumes-test-9375 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:13:21.906: INFO: Deleting PersistentVolumeClaim "pvc-dws62" Jun 11 00:13:21.910: INFO: Deleting PersistentVolume 
"local-pvsjxbp" Jun 11 00:13:21.914: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3c18e8bf-9705-4fa8-9b8f-fcb2d6dfc5e8/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9375 PodName:hostexec-node2-7gzhb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:13:21.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-3c18e8bf-9705-4fa8-9b8f-fcb2d6dfc5e8/file Jun 11 00:13:22.013: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-9375 PodName:hostexec-node2-7gzhb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:13:22.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-3c18e8bf-9705-4fa8-9b8f-fcb2d6dfc5e8 Jun 11 00:13:22.109: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3c18e8bf-9705-4fa8-9b8f-fcb2d6dfc5e8] Namespace:persistent-local-volumes-test-9375 PodName:hostexec-node2-7gzhb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:13:22.109: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:22.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9375" for this suite. 
• [SLOW TEST:35.425 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":15,"skipped":694,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile","total":-1,"completed":19,"skipped":566,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:01.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:13:05.770: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-65d1a256-91a8-466a-870c-bc87cabbfa03-backend && ln -s /tmp/local-volume-test-65d1a256-91a8-466a-870c-bc87cabbfa03-backend /tmp/local-volume-test-65d1a256-91a8-466a-870c-bc87cabbfa03] Namespace:persistent-local-volumes-test-7703 PodName:hostexec-node1-jlq42 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:13:05.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:13:05.870: INFO: Creating a PV followed by a PVC Jun 11 00:13:05.877: INFO: Waiting for PV local-pvcfshk to bind to PVC pvc-zn5lg Jun 11 00:13:05.877: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zn5lg] to have phase Bound Jun 11 00:13:05.879: INFO: PersistentVolumeClaim pvc-zn5lg found but phase is Pending instead of Bound. Jun 11 00:13:07.882: INFO: PersistentVolumeClaim pvc-zn5lg found but phase is Pending instead of Bound. Jun 11 00:13:09.888: INFO: PersistentVolumeClaim pvc-zn5lg found but phase is Pending instead of Bound. 
Jun 11 00:13:11.891: INFO: PersistentVolumeClaim pvc-zn5lg found and phase=Bound (6.014297833s) Jun 11 00:13:11.891: INFO: Waiting up to 3m0s for PersistentVolume local-pvcfshk to have phase Bound Jun 11 00:13:11.897: INFO: PersistentVolume local-pvcfshk found and phase=Bound (6.011939ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 11 00:13:17.922: INFO: pod "pod-69b83727-22fb-473f-a100-2572bdae1b5a" created on Node "node1" STEP: Writing in pod1 Jun 11 00:13:17.923: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7703 PodName:pod-69b83727-22fb-473f-a100-2572bdae1b5a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:17.923: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:18.027: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:13:18.028: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7703 PodName:pod-69b83727-22fb-473f-a100-2572bdae1b5a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:18.028: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:19.011: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 11 00:13:29.037: INFO: pod "pod-ad706e52-3b25-4b5f-8210-5eed50b681ce" created on Node "node1" Jun 11 00:13:29.037: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7703 PodName:pod-ad706e52-3b25-4b5f-8210-5eed50b681ce ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:29.037: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:29.124: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 11 00:13:29.124: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-65d1a256-91a8-466a-870c-bc87cabbfa03 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7703 PodName:pod-ad706e52-3b25-4b5f-8210-5eed50b681ce ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:29.124: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:29.254: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-65d1a256-91a8-466a-870c-bc87cabbfa03 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 11 00:13:29.254: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7703 PodName:pod-69b83727-22fb-473f-a100-2572bdae1b5a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:29.254: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:29.346: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-65d1a256-91a8-466a-870c-bc87cabbfa03", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-69b83727-22fb-473f-a100-2572bdae1b5a in namespace persistent-local-volumes-test-7703 STEP: Deleting pod2 STEP: Deleting pod pod-ad706e52-3b25-4b5f-8210-5eed50b681ce in namespace persistent-local-volumes-test-7703 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:13:29.356: INFO: Deleting PersistentVolumeClaim "pvc-zn5lg" Jun 11 00:13:29.360: INFO: Deleting PersistentVolume "local-pvcfshk" STEP: Removing the test directory Jun 11 00:13:29.365: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-65d1a256-91a8-466a-870c-bc87cabbfa03 && rm -r /tmp/local-volume-test-65d1a256-91a8-466a-870c-bc87cabbfa03-backend] Namespace:persistent-local-volumes-test-7703 PodName:hostexec-node1-jlq42 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:13:29.365: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:29.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7703" for this suite. • [SLOW TEST:27.755 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":20,"skipped":566,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:12.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should let an external dynamic provisioner create and delete persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:627 Jun 11 00:13:12.730: INFO: Found ClusterRoles; assuming RBAC is enabled. 
STEP: creating an external dynamic provisioner pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass Jun 11 00:13:28.870: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: creating a claim with a external provisioning annotation STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-6062 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1572864000 0} {} 1500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-6062-externalg8fxp,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Jun 11 00:13:28.877: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ljpf6] to have phase Bound Jun 11 00:13:28.879: INFO: PersistentVolumeClaim pvc-ljpf6 found but phase is Pending instead of Bound. Jun 11 00:13:30.883: INFO: PersistentVolumeClaim pvc-ljpf6 found and phase=Bound (2.006595112s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-6062"/"pvc-ljpf6" STEP: deleting the claim's PV "pvc-602a0d9f-7c72-4ec8-bbb4-80420b0814cc" Jun 11 00:13:30.892: INFO: Waiting up to 20m0s for PersistentVolume pvc-602a0d9f-7c72-4ec8-bbb4-80420b0814cc to get deleted Jun 11 00:13:30.895: INFO: PersistentVolume pvc-602a0d9f-7c72-4ec8-bbb4-80420b0814cc found and phase=Bound (2.381894ms) Jun 11 00:13:35.900: INFO: PersistentVolume pvc-602a0d9f-7c72-4ec8-bbb4-80420b0814cc was removed Jun 11 00:13:35.900: INFO: deleting claim "volume-provisioning-6062"/"pvc-ljpf6" Jun 11 00:13:35.903: INFO: deleting storage class volume-provisioning-6062-externalg8fxp STEP: Deleting pod external-provisioner-f6dsq in namespace volume-provisioning-6062 [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:35.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-6062" for this suite. 
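The external-provisioner flow above (class, then claim, then bound PV, then delete claim and watch the PV go away) can be driven by hand against any external dynamic provisioner. A hedged sketch; the claim name, namespace and storageClassName below are illustrative, not the generated values from this run.

# Create a claim against a class served by an external provisioner.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-external-demo
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: external-demo-sc      # class whose .provisioner matches the external provisioner
  resources:
    requests:
      storage: 1500Mi
EOF

# Poll for binding the way the suite does (Pending -> Bound):
until [ "$(kubectl get pvc pvc-external-demo -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
  sleep 2
done

# Deleting the claim lets the provisioner remove the dynamically created PV,
# matching the "PersistentVolume ... was removed" lines above.
kubectl delete pvc pvc-external-demo -n default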
• [SLOW TEST:23.222 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner External /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:626 should let an external dynamic provisioner create and delete persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:627 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]","total":-1,"completed":7,"skipped":311,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:22.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 11 00:13:22.091: INFO: The status of Pod test-hostpath-type-h566f is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:24.097: INFO: The status of Pod test-hostpath-type-h566f is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:26.094: INFO: The status of Pod test-hostpath-type-h566f is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:28.094: INFO: The status of Pod test-hostpath-type-h566f is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:30.098: INFO: The status of Pod test-hostpath-type-h566f is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:32.097: INFO: The status of Pod test-hostpath-type-h566f is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:38.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-4163" for this suite. 
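The HostPathType Directory spec above turns on the difference between the DirectoryOrCreate and Directory hostPath types: the first creates the path if it is missing, the second requires it to already exist and otherwise fails the mount. A minimal illustrative pod; the pod name, image and /tmp/adir path are assumptions, not values from this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-demo          # illustrative name
spec:
  containers:
  - name: c
    image: busybox                  # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: adir
      mountPath: /mnt/adir
  volumes:
  - name: adir
    hostPath:
      path: /tmp/adir
      type: DirectoryOrCreate       # swap to "Directory" against a missing path and the mount fails instead
EOF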
• [SLOW TEST:16.105 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory","total":-1,"completed":9,"skipped":153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:22.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 11 00:13:22.312: INFO: The status of Pod test-hostpath-type-ch79p is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:24.314: INFO: The status of Pod test-hostpath-type-ch79p is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:26.315: INFO: The status of Pod test-hostpath-type-ch79p is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:28.316: INFO: The status of Pod test-hostpath-type-ch79p is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:30.316: INFO: The status of Pod test-hostpath-type-ch79p is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:38.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-2182" for this suite. 
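The HostPathType File spec above passes by observing a failure: mounting a regular file through a hostPath volume declared as type BlockDevice is rejected by the kubelet's type check. That rejection typically surfaces as a FailedMount event on the test pod, which can be inspected directly; the namespace and pod name below are illustrative.

# Look for the kubelet's hostPath type-check failure among the pod's events.
kubectl get events -n host-path-type-demo --field-selector reason=FailedMount
# or scoped to one pod:
kubectl describe pod test-hostpath-type-demo -n host-path-type-demo | sed -n '/Events:/,$p'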
• [SLOW TEST:16.098 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev","total":-1,"completed":16,"skipped":714,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:38.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 11 00:13:38.427: INFO: The status of Pod test-hostpath-type-kj648 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:40.431: INFO: The status of Pod test-hostpath-type-kj648 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:13:42.433: INFO: The status of Pod test-hostpath-type-kj648 is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Jun 11 00:13:42.435: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-3574 PodName:test-hostpath-type-kj648 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:13:42.435: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:13:44.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-3574" for this suite. 
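The Block Device spec above prepares its fixture with mknod inside the helper pod and then expects the kubelet to reject the device when the hostPath type says CharDevice. A minimal sketch of the device-node side; the path and device numbers mirror the log, everything else is illustrative.

# Create a block device node like the spec does (major 89, minor 1).
mknod /mnt/test/ablkdev b 89 1
ls -l /mnt/test/ablkdev        # leading "b" in the mode string confirms a block device

# A hostPath volume with type: BlockDevice accepts this path; declaring
# type: CharDevice against the same path fails the kubelet's type check,
# which is the error event the spec waits for.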
• [SLOW TEST:6.175 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev","total":-1,"completed":17,"skipped":719,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":578,"failed":0} [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:48.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-4694 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:12:48.877: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4694-9600/csi-attacher Jun 11 00:12:48.880: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4694 Jun 11 00:12:48.880: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4694 Jun 11 00:12:48.884: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4694 Jun 11 00:12:48.891: INFO: creating *v1.Role: csi-mock-volumes-4694-9600/external-attacher-cfg-csi-mock-volumes-4694 Jun 11 00:12:48.897: INFO: creating *v1.RoleBinding: csi-mock-volumes-4694-9600/csi-attacher-role-cfg Jun 11 00:12:48.906: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4694-9600/csi-provisioner Jun 11 00:12:48.909: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4694 Jun 11 00:12:48.909: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4694 Jun 11 00:12:48.912: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4694 Jun 11 00:12:48.915: INFO: creating *v1.Role: csi-mock-volumes-4694-9600/external-provisioner-cfg-csi-mock-volumes-4694 Jun 11 00:12:48.918: INFO: creating *v1.RoleBinding: csi-mock-volumes-4694-9600/csi-provisioner-role-cfg Jun 11 00:12:48.921: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4694-9600/csi-resizer Jun 11 00:12:48.924: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4694 Jun 11 00:12:48.924: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4694 Jun 11 00:12:48.927: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4694 Jun 11 00:12:48.930: INFO: creating *v1.Role: csi-mock-volumes-4694-9600/external-resizer-cfg-csi-mock-volumes-4694 Jun 11 00:12:48.933: INFO: creating *v1.RoleBinding: csi-mock-volumes-4694-9600/csi-resizer-role-cfg Jun 11 00:12:48.936: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-4694-9600/csi-snapshotter Jun 11 00:12:48.939: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4694 Jun 11 00:12:48.939: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4694 Jun 11 00:12:48.942: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4694 Jun 11 00:12:48.945: INFO: creating *v1.Role: csi-mock-volumes-4694-9600/external-snapshotter-leaderelection-csi-mock-volumes-4694 Jun 11 00:12:48.947: INFO: creating *v1.RoleBinding: csi-mock-volumes-4694-9600/external-snapshotter-leaderelection Jun 11 00:12:48.950: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4694-9600/csi-mock Jun 11 00:12:48.953: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4694 Jun 11 00:12:48.955: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4694 Jun 11 00:12:48.958: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4694 Jun 11 00:12:48.961: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4694 Jun 11 00:12:48.964: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4694 Jun 11 00:12:48.966: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4694 Jun 11 00:12:48.969: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4694 Jun 11 00:12:48.972: INFO: creating *v1.StatefulSet: csi-mock-volumes-4694-9600/csi-mockplugin Jun 11 00:12:48.977: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4694 Jun 11 00:12:48.980: INFO: creating *v1.StatefulSet: csi-mock-volumes-4694-9600/csi-mockplugin-attacher Jun 11 00:12:48.983: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4694" Jun 11 00:12:48.986: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4694 to register on node node2 STEP: Creating pod Jun 11 00:12:58.500: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:12:58.504: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-bt97x] to have phase Bound Jun 11 00:12:58.507: INFO: PersistentVolumeClaim pvc-bt97x found but phase is Pending instead of Bound. 
Jun 11 00:13:00.511: INFO: PersistentVolumeClaim pvc-bt97x found and phase=Bound (2.006691264s) STEP: Deleting the previously created pod Jun 11 00:13:20.532: INFO: Deleting pod "pvc-volume-tester-nbck5" in namespace "csi-mock-volumes-4694" Jun 11 00:13:20.537: INFO: Wait up to 5m0s for pod "pvc-volume-tester-nbck5" to be fully deleted STEP: Checking CSI driver logs Jun 11 00:13:24.558: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/5a056ea2-f48d-48eb-b22a-a5bb3bcb327e/volumes/kubernetes.io~csi/pvc-d98edc7a-9c19-4fbb-8085-ecb058aa3483/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-nbck5 Jun 11 00:13:24.559: INFO: Deleting pod "pvc-volume-tester-nbck5" in namespace "csi-mock-volumes-4694" STEP: Deleting claim pvc-bt97x Jun 11 00:13:24.568: INFO: Waiting up to 2m0s for PersistentVolume pvc-d98edc7a-9c19-4fbb-8085-ecb058aa3483 to get deleted Jun 11 00:13:24.570: INFO: PersistentVolume pvc-d98edc7a-9c19-4fbb-8085-ecb058aa3483 found and phase=Bound (2.340937ms) Jun 11 00:13:26.573: INFO: PersistentVolume pvc-d98edc7a-9c19-4fbb-8085-ecb058aa3483 was removed STEP: Deleting storageclass csi-mock-volumes-4694-sct2966 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4694 STEP: Waiting for namespaces [csi-mock-volumes-4694] to vanish STEP: uninstalling csi mock driver Jun 11 00:13:32.585: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4694-9600/csi-attacher Jun 11 00:13:32.589: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4694 Jun 11 00:13:32.592: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4694 Jun 11 00:13:32.596: INFO: deleting *v1.Role: csi-mock-volumes-4694-9600/external-attacher-cfg-csi-mock-volumes-4694 Jun 11 00:13:32.599: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4694-9600/csi-attacher-role-cfg Jun 11 00:13:32.603: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4694-9600/csi-provisioner Jun 11 00:13:32.607: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4694 Jun 11 00:13:32.611: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4694 Jun 11 00:13:32.614: INFO: deleting *v1.Role: csi-mock-volumes-4694-9600/external-provisioner-cfg-csi-mock-volumes-4694 Jun 11 00:13:32.618: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4694-9600/csi-provisioner-role-cfg Jun 11 00:13:32.622: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4694-9600/csi-resizer Jun 11 00:13:32.626: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4694 Jun 11 00:13:32.629: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4694 Jun 11 00:13:32.632: INFO: deleting *v1.Role: csi-mock-volumes-4694-9600/external-resizer-cfg-csi-mock-volumes-4694 Jun 11 00:13:32.636: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4694-9600/csi-resizer-role-cfg Jun 11 00:13:32.640: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4694-9600/csi-snapshotter Jun 11 00:13:32.644: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4694 Jun 11 00:13:32.647: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4694 Jun 11 00:13:32.650: INFO: deleting *v1.Role: csi-mock-volumes-4694-9600/external-snapshotter-leaderelection-csi-mock-volumes-4694 Jun 11 
00:13:32.654: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4694-9600/external-snapshotter-leaderelection Jun 11 00:13:32.657: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4694-9600/csi-mock Jun 11 00:13:32.661: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4694 Jun 11 00:13:32.664: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4694 Jun 11 00:13:32.667: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4694 Jun 11 00:13:32.670: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4694 Jun 11 00:13:32.673: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4694 Jun 11 00:13:32.677: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4694 Jun 11 00:13:32.680: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4694 Jun 11 00:13:32.684: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4694-9600/csi-mockplugin Jun 11 00:13:32.687: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4694 Jun 11 00:13:32.690: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4694-9600/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4694-9600 STEP: Waiting for namespaces [csi-mock-volumes-4694-9600] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:00.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:71.900 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":13,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket","total":-1,"completed":8,"skipped":155,"failed":0} [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:02.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-4472 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:13:02.481: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4472-1499/csi-attacher Jun 11 00:13:02.483: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4472 Jun 
11 00:13:02.483: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4472 Jun 11 00:13:02.486: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4472 Jun 11 00:13:02.489: INFO: creating *v1.Role: csi-mock-volumes-4472-1499/external-attacher-cfg-csi-mock-volumes-4472 Jun 11 00:13:02.492: INFO: creating *v1.RoleBinding: csi-mock-volumes-4472-1499/csi-attacher-role-cfg Jun 11 00:13:02.495: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4472-1499/csi-provisioner Jun 11 00:13:02.497: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4472 Jun 11 00:13:02.497: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4472 Jun 11 00:13:02.501: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4472 Jun 11 00:13:02.504: INFO: creating *v1.Role: csi-mock-volumes-4472-1499/external-provisioner-cfg-csi-mock-volumes-4472 Jun 11 00:13:02.506: INFO: creating *v1.RoleBinding: csi-mock-volumes-4472-1499/csi-provisioner-role-cfg Jun 11 00:13:02.509: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4472-1499/csi-resizer Jun 11 00:13:02.511: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4472 Jun 11 00:13:02.511: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4472 Jun 11 00:13:02.513: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4472 Jun 11 00:13:02.516: INFO: creating *v1.Role: csi-mock-volumes-4472-1499/external-resizer-cfg-csi-mock-volumes-4472 Jun 11 00:13:02.519: INFO: creating *v1.RoleBinding: csi-mock-volumes-4472-1499/csi-resizer-role-cfg Jun 11 00:13:02.521: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4472-1499/csi-snapshotter Jun 11 00:13:02.523: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4472 Jun 11 00:13:02.523: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4472 Jun 11 00:13:02.526: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4472 Jun 11 00:13:02.529: INFO: creating *v1.Role: csi-mock-volumes-4472-1499/external-snapshotter-leaderelection-csi-mock-volumes-4472 Jun 11 00:13:02.531: INFO: creating *v1.RoleBinding: csi-mock-volumes-4472-1499/external-snapshotter-leaderelection Jun 11 00:13:02.534: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4472-1499/csi-mock Jun 11 00:13:02.536: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4472 Jun 11 00:13:02.539: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4472 Jun 11 00:13:02.542: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4472 Jun 11 00:13:02.545: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4472 Jun 11 00:13:02.547: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4472 Jun 11 00:13:02.550: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4472 Jun 11 00:13:02.552: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4472 Jun 11 00:13:02.555: INFO: creating *v1.StatefulSet: csi-mock-volumes-4472-1499/csi-mockplugin Jun 11 00:13:02.559: INFO: creating *v1.StatefulSet: csi-mock-volumes-4472-1499/csi-mockplugin-attacher Jun 11 00:13:02.563: INFO: creating *v1.StatefulSet: csi-mock-volumes-4472-1499/csi-mockplugin-resizer Jun 11 00:13:02.567: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4472 to 
register on node node2 STEP: Creating pod Jun 11 00:13:18.839: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:13:18.844: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ssk2k] to have phase Bound Jun 11 00:13:18.847: INFO: PersistentVolumeClaim pvc-ssk2k found but phase is Pending instead of Bound. Jun 11 00:13:20.855: INFO: PersistentVolumeClaim pvc-ssk2k found and phase=Bound (2.01100298s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-fk7wp Jun 11 00:13:34.897: INFO: Deleting pod "pvc-volume-tester-fk7wp" in namespace "csi-mock-volumes-4472" Jun 11 00:13:34.902: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fk7wp" to be fully deleted STEP: Deleting claim pvc-ssk2k Jun 11 00:13:48.915: INFO: Waiting up to 2m0s for PersistentVolume pvc-26ed2fe8-8060-4efe-ba03-359c32b0e8a2 to get deleted Jun 11 00:13:48.917: INFO: PersistentVolume pvc-26ed2fe8-8060-4efe-ba03-359c32b0e8a2 found and phase=Bound (1.963132ms) Jun 11 00:13:50.921: INFO: PersistentVolume pvc-26ed2fe8-8060-4efe-ba03-359c32b0e8a2 was removed STEP: Deleting storageclass csi-mock-volumes-4472-scf6dcb STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4472 STEP: Waiting for namespaces [csi-mock-volumes-4472] to vanish STEP: uninstalling csi mock driver Jun 11 00:13:56.937: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4472-1499/csi-attacher Jun 11 00:13:56.941: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4472 Jun 11 00:13:56.945: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4472 Jun 11 00:13:56.949: INFO: deleting *v1.Role: csi-mock-volumes-4472-1499/external-attacher-cfg-csi-mock-volumes-4472 Jun 11 00:13:56.952: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4472-1499/csi-attacher-role-cfg Jun 11 00:13:56.956: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4472-1499/csi-provisioner Jun 11 00:13:56.959: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4472 Jun 11 00:13:56.962: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4472 Jun 11 00:13:56.966: INFO: deleting *v1.Role: csi-mock-volumes-4472-1499/external-provisioner-cfg-csi-mock-volumes-4472 Jun 11 00:13:56.973: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4472-1499/csi-provisioner-role-cfg Jun 11 00:13:56.982: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4472-1499/csi-resizer Jun 11 00:13:56.991: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4472 Jun 11 00:13:56.994: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4472 Jun 11 00:13:56.999: INFO: deleting *v1.Role: csi-mock-volumes-4472-1499/external-resizer-cfg-csi-mock-volumes-4472 Jun 11 00:13:57.002: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4472-1499/csi-resizer-role-cfg Jun 11 00:13:57.006: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4472-1499/csi-snapshotter Jun 11 00:13:57.009: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4472 Jun 11 00:13:57.013: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4472 Jun 11 00:13:57.016: INFO: deleting *v1.Role: csi-mock-volumes-4472-1499/external-snapshotter-leaderelection-csi-mock-volumes-4472 Jun 11 00:13:57.019: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4472-1499/external-snapshotter-leaderelection Jun 11 
00:13:57.023: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4472-1499/csi-mock Jun 11 00:13:57.027: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4472 Jun 11 00:13:57.030: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4472 Jun 11 00:13:57.033: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4472 Jun 11 00:13:57.037: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4472 Jun 11 00:13:57.040: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4472 Jun 11 00:13:57.043: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4472 Jun 11 00:13:57.046: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4472 Jun 11 00:13:57.050: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4472-1499/csi-mockplugin Jun 11 00:13:57.054: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4472-1499/csi-mockplugin-attacher Jun 11 00:13:57.057: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4472-1499/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-4472-1499 STEP: Waiting for namespaces [csi-mock-volumes-4472-1499] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:09.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:66.657 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":9,"skipped":155,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:09.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 11 00:14:09.119: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:09.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-4170" for this suite. 
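Going back to the CSI Volume expansion spec summarized above (nodeExpansion=off): the whole flow is a claim-size patch followed by a controller-side resize, with no node stage and therefore no pod restart. A hedged sketch; the claim name and namespace are illustrative, and the claim's StorageClass must have allowVolumeExpansion: true.

# Grow the claim's request; the external-resizer picks this up.
kubectl patch pvc pvc-expand-demo -n default \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

# With node expansion off there is no FileSystemResizePending phase to wait out,
# so the new size shows up in status without touching the running pod.
kubectl get pvc pvc-expand-demo -n default -o jsonpath='{.status.capacity.storage}{"\n"}'
kubectl describe pvc pvc-expand-demo -n default | sed -n '/Conditions:/,$p'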
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:09.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Jun 11 00:14:09.191: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:09.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-4537" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 GlusterFS [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:128 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:38.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:14:04.270: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-423c3864-7007-4f30-b0dc-fe8cac931327-backend && mount --bind /tmp/local-volume-test-423c3864-7007-4f30-b0dc-fe8cac931327-backend /tmp/local-volume-test-423c3864-7007-4f30-b0dc-fe8cac931327-backend && ln -s /tmp/local-volume-test-423c3864-7007-4f30-b0dc-fe8cac931327-backend /tmp/local-volume-test-423c3864-7007-4f30-b0dc-fe8cac931327] Namespace:persistent-local-volumes-test-8384 
PodName:hostexec-node1-x85fg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:04.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:14:04.365: INFO: Creating a PV followed by a PVC Jun 11 00:14:04.373: INFO: Waiting for PV local-pv8qrdd to bind to PVC pvc-qnhfl Jun 11 00:14:04.373: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-qnhfl] to have phase Bound Jun 11 00:14:04.375: INFO: PersistentVolumeClaim pvc-qnhfl found but phase is Pending instead of Bound. Jun 11 00:14:06.380: INFO: PersistentVolumeClaim pvc-qnhfl found but phase is Pending instead of Bound. Jun 11 00:14:08.385: INFO: PersistentVolumeClaim pvc-qnhfl found but phase is Pending instead of Bound. Jun 11 00:14:10.390: INFO: PersistentVolumeClaim pvc-qnhfl found but phase is Pending instead of Bound. Jun 11 00:14:12.398: INFO: PersistentVolumeClaim pvc-qnhfl found and phase=Bound (8.025458171s) Jun 11 00:14:12.398: INFO: Waiting up to 3m0s for PersistentVolume local-pv8qrdd to have phase Bound Jun 11 00:14:12.401: INFO: PersistentVolume local-pv8qrdd found and phase=Bound (2.436664ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 11 00:14:12.405: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:14:12.407: INFO: Deleting PersistentVolumeClaim "pvc-qnhfl" Jun 11 00:14:12.413: INFO: Deleting PersistentVolume "local-pv8qrdd" STEP: Removing the test directory Jun 11 00:14:12.417: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-423c3864-7007-4f30-b0dc-fe8cac931327 && umount /tmp/local-volume-test-423c3864-7007-4f30-b0dc-fe8cac931327-backend && rm -r /tmp/local-volume-test-423c3864-7007-4f30-b0dc-fe8cac931327-backend] Namespace:persistent-local-volumes-test-8384 PodName:hostexec-node1-x85fg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:12.417: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:12.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8384" for this suite. 
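The [Volume type: dir-link-bindmounted] fixture set up and torn down above is just a self bind-mounted directory reached through a symlink. A minimal sketch, run as root on the node, with an illustrative path instead of the generated one:

BACKEND=/tmp/local-volume-test-example-backend    # illustrative path
LINK=/tmp/local-volume-test-example
mkdir "$BACKEND"
mount --bind "$BACKEND" "$BACKEND"   # self bind-mount so the backend is a real mount point
ln -s "$BACKEND" "$LINK"             # the local PV's path points at the symlink

# Teardown, mirroring the AfterEach above:
rm "$LINK"
umount "$BACKEND"
rm -r "$BACKEND"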
S [SKIPPING] [34.309 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:12:58.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-1469 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:12:58.701: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1469-5288/csi-attacher Jun 11 00:12:58.704: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1469 Jun 11 00:12:58.704: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1469 Jun 11 00:12:58.708: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1469 Jun 11 00:12:58.710: INFO: creating *v1.Role: csi-mock-volumes-1469-5288/external-attacher-cfg-csi-mock-volumes-1469 Jun 11 00:12:58.713: INFO: creating *v1.RoleBinding: csi-mock-volumes-1469-5288/csi-attacher-role-cfg Jun 11 00:12:58.716: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1469-5288/csi-provisioner Jun 11 00:12:58.718: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1469 Jun 11 00:12:58.718: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1469 Jun 11 00:12:58.721: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1469 Jun 11 00:12:58.724: INFO: creating *v1.Role: csi-mock-volumes-1469-5288/external-provisioner-cfg-csi-mock-volumes-1469 Jun 11 00:12:58.727: INFO: creating *v1.RoleBinding: csi-mock-volumes-1469-5288/csi-provisioner-role-cfg Jun 11 00:12:58.730: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1469-5288/csi-resizer Jun 11 00:12:58.732: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1469 Jun 11 00:12:58.732: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1469 Jun 11 00:12:58.735: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1469 Jun 11 00:12:58.737: INFO: creating *v1.Role: csi-mock-volumes-1469-5288/external-resizer-cfg-csi-mock-volumes-1469 Jun 11 00:12:58.740: INFO: creating *v1.RoleBinding: csi-mock-volumes-1469-5288/csi-resizer-role-cfg Jun 11 
00:12:58.742: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1469-5288/csi-snapshotter Jun 11 00:12:58.745: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1469 Jun 11 00:12:58.745: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1469 Jun 11 00:12:58.747: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1469 Jun 11 00:12:58.750: INFO: creating *v1.Role: csi-mock-volumes-1469-5288/external-snapshotter-leaderelection-csi-mock-volumes-1469 Jun 11 00:12:58.754: INFO: creating *v1.RoleBinding: csi-mock-volumes-1469-5288/external-snapshotter-leaderelection Jun 11 00:12:58.757: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1469-5288/csi-mock Jun 11 00:12:58.759: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1469 Jun 11 00:12:58.762: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1469 Jun 11 00:12:58.765: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1469 Jun 11 00:12:58.767: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1469 Jun 11 00:12:58.769: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1469 Jun 11 00:12:58.771: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1469 Jun 11 00:12:58.774: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1469 Jun 11 00:12:58.776: INFO: creating *v1.StatefulSet: csi-mock-volumes-1469-5288/csi-mockplugin Jun 11 00:12:58.780: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1469 Jun 11 00:12:58.783: INFO: creating *v1.StatefulSet: csi-mock-volumes-1469-5288/csi-mockplugin-attacher Jun 11 00:12:58.786: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1469" Jun 11 00:12:58.789: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1469 to register on node node1 STEP: Creating pod Jun 11 00:13:08.309: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:13:08.314: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-snhnw] to have phase Bound Jun 11 00:13:08.316: INFO: PersistentVolumeClaim pvc-snhnw found but phase is Pending instead of Bound. 
Jun 11 00:13:10.322: INFO: PersistentVolumeClaim pvc-snhnw found and phase=Bound (2.008219732s) STEP: checking for CSIInlineVolumes feature Jun 11 00:13:30.358: INFO: Error getting logs for pod inline-volume-t82xp: the server rejected our request for an unknown reason (get pods inline-volume-t82xp) Jun 11 00:13:30.363: INFO: Deleting pod "inline-volume-t82xp" in namespace "csi-mock-volumes-1469" Jun 11 00:13:30.367: INFO: Wait up to 5m0s for pod "inline-volume-t82xp" to be fully deleted STEP: Deleting the previously created pod Jun 11 00:13:32.373: INFO: Deleting pod "pvc-volume-tester-hhf86" in namespace "csi-mock-volumes-1469" Jun 11 00:13:32.379: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hhf86" to be fully deleted STEP: Checking CSI driver logs Jun 11 00:13:36.882: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-hhf86 Jun 11 00:13:36.882: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-1469 Jun 11 00:13:36.882: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 12b7ac76-65c6-43f4-b284-7632743cb452 Jun 11 00:13:36.882: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Jun 11 00:13:36.882: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false Jun 11 00:13:36.882: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/12b7ac76-65c6-43f4-b284-7632743cb452/volumes/kubernetes.io~csi/pvc-a5512f0d-aaef-430a-8cc1-86e4b26a9bad/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-hhf86 Jun 11 00:13:36.882: INFO: Deleting pod "pvc-volume-tester-hhf86" in namespace "csi-mock-volumes-1469" STEP: Deleting claim pvc-snhnw Jun 11 00:13:36.895: INFO: Waiting up to 2m0s for PersistentVolume pvc-a5512f0d-aaef-430a-8cc1-86e4b26a9bad to get deleted Jun 11 00:13:36.897: INFO: PersistentVolume pvc-a5512f0d-aaef-430a-8cc1-86e4b26a9bad found and phase=Bound (2.121461ms) Jun 11 00:13:38.901: INFO: PersistentVolume pvc-a5512f0d-aaef-430a-8cc1-86e4b26a9bad found and phase=Released (2.005848754s) Jun 11 00:13:40.906: INFO: PersistentVolume pvc-a5512f0d-aaef-430a-8cc1-86e4b26a9bad was removed STEP: Deleting storageclass csi-mock-volumes-1469-scgqkb9 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1469 STEP: Waiting for namespaces [csi-mock-volumes-1469] to vanish STEP: uninstalling csi mock driver Jun 11 00:13:46.921: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1469-5288/csi-attacher Jun 11 00:13:46.926: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1469 Jun 11 00:13:46.930: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1469 Jun 11 00:13:46.934: INFO: deleting *v1.Role: csi-mock-volumes-1469-5288/external-attacher-cfg-csi-mock-volumes-1469 Jun 11 00:13:46.937: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1469-5288/csi-attacher-role-cfg Jun 11 00:13:46.940: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1469-5288/csi-provisioner Jun 11 00:13:46.944: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1469 Jun 11 00:13:46.948: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1469 Jun 11 00:13:46.952: INFO: deleting *v1.Role: csi-mock-volumes-1469-5288/external-provisioner-cfg-csi-mock-volumes-1469 Jun 11 00:13:46.958: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-1469-5288/csi-provisioner-role-cfg Jun 11 00:13:46.962: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1469-5288/csi-resizer Jun 11 00:13:46.968: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1469 Jun 11 00:13:46.974: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1469 Jun 11 00:13:46.978: INFO: deleting *v1.Role: csi-mock-volumes-1469-5288/external-resizer-cfg-csi-mock-volumes-1469 Jun 11 00:13:46.980: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1469-5288/csi-resizer-role-cfg Jun 11 00:13:46.984: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1469-5288/csi-snapshotter Jun 11 00:13:46.987: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1469 Jun 11 00:13:46.990: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1469 Jun 11 00:13:46.993: INFO: deleting *v1.Role: csi-mock-volumes-1469-5288/external-snapshotter-leaderelection-csi-mock-volumes-1469 Jun 11 00:13:46.997: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1469-5288/external-snapshotter-leaderelection Jun 11 00:13:46.999: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1469-5288/csi-mock Jun 11 00:13:47.002: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1469 Jun 11 00:13:47.006: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1469 Jun 11 00:13:47.009: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1469 Jun 11 00:13:47.012: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1469 Jun 11 00:13:47.016: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1469 Jun 11 00:13:47.019: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1469 Jun 11 00:13:47.022: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1469 Jun 11 00:13:47.025: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1469-5288/csi-mockplugin Jun 11 00:13:47.030: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1469 Jun 11 00:13:47.033: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1469-5288/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1469-5288 STEP: Waiting for namespaces [csi-mock-volumes-1469-5288] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:15.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:76.413 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":16,"skipped":511,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} 
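The podInfoOnMount behaviour verified above is driven by the CSIDriver object's spec.podInfoOnMount field: when it is true, the kubelet passes the csi.storage.k8s.io/pod.name / pod.namespace / pod.uid / serviceAccount.name attributes seen in the driver log as volume context on node calls. A minimal sketch of such a CSIDriver object (the driver name is a placeholder, not the mock driver deployed by this run):

    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: csi-example-driver        # placeholder name
    spec:
      attachRequired: true
      podInfoOnMount: true            # kubelet adds pod name/namespace/uid/serviceAccount to volume_context
    EOF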
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:09.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-26912947-d2a7-4045-8ff7-dbae73207054" Jun 11 00:14:11.302: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-26912947-d2a7-4045-8ff7-dbae73207054" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-26912947-d2a7-4045-8ff7-dbae73207054" "/tmp/local-volume-test-26912947-d2a7-4045-8ff7-dbae73207054"] Namespace:persistent-local-volumes-test-2660 PodName:hostexec-node2-sgd2b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:11.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:14:11.396: INFO: Creating a PV followed by a PVC Jun 11 00:14:11.405: INFO: Waiting for PV local-pvxzk6q to bind to PVC pvc-pchcn Jun 11 00:14:11.405: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-pchcn] to have phase Bound Jun 11 00:14:11.407: INFO: PersistentVolumeClaim pvc-pchcn found but phase is Pending instead of Bound. 
Jun 11 00:14:13.411: INFO: PersistentVolumeClaim pvc-pchcn found and phase=Bound (2.006250339s) Jun 11 00:14:13.412: INFO: Waiting up to 3m0s for PersistentVolume local-pvxzk6q to have phase Bound Jun 11 00:14:13.414: INFO: PersistentVolume local-pvxzk6q found and phase=Bound (2.427582ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:14:17.440: INFO: pod "pod-f5130e3f-7c22-4dbe-9385-d65285ccc343" created on Node "node2" STEP: Writing in pod1 Jun 11 00:14:17.440: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2660 PodName:pod-f5130e3f-7c22-4dbe-9385-d65285ccc343 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:17.440: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:17.521: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 11 00:14:17.521: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2660 PodName:pod-f5130e3f-7c22-4dbe-9385-d65285ccc343 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:17.521: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:17.602: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-f5130e3f-7c22-4dbe-9385-d65285ccc343 in namespace persistent-local-volumes-test-2660 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:14:17.607: INFO: Deleting PersistentVolumeClaim "pvc-pchcn" Jun 11 00:14:17.611: INFO: Deleting PersistentVolume "local-pvxzk6q" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-26912947-d2a7-4045-8ff7-dbae73207054" Jun 11 00:14:17.614: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-26912947-d2a7-4045-8ff7-dbae73207054"] Namespace:persistent-local-volumes-test-2660 PodName:hostexec-node2-sgd2b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:17.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 11 00:14:17.714: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-26912947-d2a7-4045-8ff7-dbae73207054] Namespace:persistent-local-volumes-test-2660 PodName:hostexec-node2-sgd2b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:17.714: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:17.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2660" for this suite. • [SLOW TEST:8.561 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":10,"skipped":211,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:17.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:17.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9183" for this suite. 
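The memory-backed emptyDir spec above relies on the same mechanism as the explicit `mount -t tmpfs -o size=10m` used for the local tmpfs volume earlier: medium: Memory backs the volume with tmpfs, and sizeLimit declares the requested cap (how strictly it is enforced depends on cluster configuration). A hedged sketch of such a pod (name, image and size are illustrative only):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: memory-emptydir-demo      # illustrative name
    spec:
      containers:
      - name: app
        image: busybox                # any small image works here
        command: ["sh", "-c", "df -h /cache && sleep 3600"]
        volumeMounts:
        - name: cache
          mountPath: /cache
      volumes:
      - name: cache
        emptyDir:
          medium: Memory              # tmpfs-backed instead of node disk
          sizeLimit: 10Mi             # requested size cap
    EOF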
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":11,"skipped":233,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:00.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-cfa25e7a-a2d3-4d9b-9b7a-e021b3a87d92" Jun 11 00:14:02.821: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cfa25e7a-a2d3-4d9b-9b7a-e021b3a87d92 && dd if=/dev/zero of=/tmp/local-volume-test-cfa25e7a-a2d3-4d9b-9b7a-e021b3a87d92/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-cfa25e7a-a2d3-4d9b-9b7a-e021b3a87d92/file] Namespace:persistent-local-volumes-test-6978 PodName:hostexec-node2-74d2k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:02.821: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:02.934: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cfa25e7a-a2d3-4d9b-9b7a-e021b3a87d92/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6978 PodName:hostexec-node2-74d2k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:02.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:14:03.026: INFO: Creating a PV followed by a PVC Jun 11 00:14:03.033: INFO: Waiting for PV local-pvn9mnw to bind to PVC pvc-dj6g9 Jun 11 00:14:03.033: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dj6g9] to have phase Bound Jun 11 00:14:03.036: INFO: PersistentVolumeClaim pvc-dj6g9 found but phase is Pending instead of Bound. Jun 11 00:14:05.044: INFO: PersistentVolumeClaim pvc-dj6g9 found but phase is Pending instead of Bound. Jun 11 00:14:07.049: INFO: PersistentVolumeClaim pvc-dj6g9 found but phase is Pending instead of Bound. Jun 11 00:14:09.054: INFO: PersistentVolumeClaim pvc-dj6g9 found but phase is Pending instead of Bound. Jun 11 00:14:11.057: INFO: PersistentVolumeClaim pvc-dj6g9 found but phase is Pending instead of Bound. 
Jun 11 00:14:13.060: INFO: PersistentVolumeClaim pvc-dj6g9 found and phase=Bound (10.026729259s) Jun 11 00:14:13.060: INFO: Waiting up to 3m0s for PersistentVolume local-pvn9mnw to have phase Bound Jun 11 00:14:13.063: INFO: PersistentVolume local-pvn9mnw found and phase=Bound (2.273078ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 11 00:14:17.087: INFO: pod "pod-12d68217-8122-4b7c-aa97-fc800d4156ad" created on Node "node2" STEP: Writing in pod1 Jun 11 00:14:17.087: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6978 PodName:pod-12d68217-8122-4b7c-aa97-fc800d4156ad ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:17.087: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:17.165: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:14:17.165: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6978 PodName:pod-12d68217-8122-4b7c-aa97-fc800d4156ad ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:17.165: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:17.264: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 11 00:14:25.286: INFO: pod "pod-0fdcf531-1079-4666-bc6c-0ba72aa126ba" created on Node "node2" Jun 11 00:14:25.286: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6978 PodName:pod-0fdcf531-1079-4666-bc6c-0ba72aa126ba ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:25.286: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:25.365: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 11 00:14:25.365: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6978 PodName:pod-0fdcf531-1079-4666-bc6c-0ba72aa126ba ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:25.365: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:25.450: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 11 00:14:25.450: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6978 PodName:pod-12d68217-8122-4b7c-aa97-fc800d4156ad ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:25.450: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:25.524: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/dev/loop0", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-12d68217-8122-4b7c-aa97-fc800d4156ad in namespace persistent-local-volumes-test-6978 STEP: Deleting pod2 STEP: Deleting pod pod-0fdcf531-1079-4666-bc6c-0ba72aa126ba in 
namespace persistent-local-volumes-test-6978 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:14:25.533: INFO: Deleting PersistentVolumeClaim "pvc-dj6g9" Jun 11 00:14:25.536: INFO: Deleting PersistentVolume "local-pvn9mnw" Jun 11 00:14:25.540: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cfa25e7a-a2d3-4d9b-9b7a-e021b3a87d92/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6978 PodName:hostexec-node2-74d2k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:25.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-cfa25e7a-a2d3-4d9b-9b7a-e021b3a87d92/file Jun 11 00:14:25.629: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6978 PodName:hostexec-node2-74d2k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:25.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-cfa25e7a-a2d3-4d9b-9b7a-e021b3a87d92 Jun 11 00:14:25.709: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cfa25e7a-a2d3-4d9b-9b7a-e021b3a87d92] Namespace:persistent-local-volumes-test-6978 PodName:hostexec-node2-74d2k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:25.709: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:25.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6978" for this suite. 
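The blockfswithoutformat volume above is backed by a loop device that the suite builds with dd and losetup through the hostexec pod, then detaches with losetup -d during cleanup. The same sequence, condensed for a node shell (the backing path is a placeholder for the generated test path):

    BACKING=/tmp/local-volume-demo                               # placeholder path
    mkdir -p "$BACKING"
    dd if=/dev/zero of="$BACKING/file" bs=4096 count=5120        # 20 MiB backing file
    losetup -f "$BACKING/file"                                   # attach to the first free loop device

    # Find which loop device was used, the same way the test does
    LOOP=$(losetup | grep "$BACKING/file" | awk '{ print $1 }')
    echo "$LOOP"                                                 # e.g. /dev/loop0

    # Teardown
    losetup -d "$LOOP"
    rm -r "$BACKING"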
• [SLOW TEST:25.044 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":14,"skipped":601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:17.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 11 00:14:17.941: INFO: The status of Pod test-hostpath-type-b55h5 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:19.945: INFO: The status of Pod test-hostpath-type-b55h5 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:21.946: INFO: The status of Pod test-hostpath-type-b55h5 is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:23.949: INFO: The status of Pod test-hostpath-type-b55h5 is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Jun 11 00:14:23.951: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-3663 PodName:test-hostpath-type-b55h5 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:23.951: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:26.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-3663" for this suite. 
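The HostPathType case above creates a block device with mknod and then expects the mount to be rejected, because the pod's hostPath volume declares type: File and the kubelet validates the path type before mounting. A sketch of the kind of volume that triggers that rejection (pod name and image are illustrative; the device path is the one created in the log):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostpath-type-demo        # illustrative name
    spec:
      containers:
      - name: app
        image: busybox                # any small image works here
        command: ["sleep", "3600"]
        volumeMounts:
        - name: dev
          mountPath: /data
      volumes:
      - name: dev
        hostPath:
          path: /mnt/test/ablkdev     # block device created with mknod in the test
          type: File                  # type mismatch: kubelet refuses the mount and records an event
    EOF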
• [SLOW TEST:8.159 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile","total":-1,"completed":12,"skipped":235,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:26.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:14:26.264: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:26.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9877" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:26.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:39 Jun 11 00:14:26.345: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:26.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-1923" for this suite. 
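Both skips above are provider gates rather than failures: the Volume metrics and Multi-AZ specs only run when the suite is pointed at one of the listed cloud providers, and this run was started with the local provider. A sketch of how such a provider-gated subset is typically invoked (binary path, provider value and focus pattern are assumptions, not taken from this log):

    # Illustrative invocation: run only the Volume metrics specs against a GCE-provider cluster
    ./e2e.test \
      -kubeconfig="$HOME/.kube/config" \
      -provider=gce \
      -ginkgo.focus='\[sig-storage\] \[Serial\] Volume metrics'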
S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should schedule pods in the same zones as statically provisioned PVs [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:50 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:40 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:36.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-2547 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:13:36.073: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2547-8913/csi-attacher Jun 11 00:13:36.076: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2547 Jun 11 00:13:36.076: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2547 Jun 11 00:13:36.079: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2547 Jun 11 00:13:36.082: INFO: creating *v1.Role: csi-mock-volumes-2547-8913/external-attacher-cfg-csi-mock-volumes-2547 Jun 11 00:13:36.084: INFO: creating *v1.RoleBinding: csi-mock-volumes-2547-8913/csi-attacher-role-cfg Jun 11 00:13:36.087: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2547-8913/csi-provisioner Jun 11 00:13:36.089: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2547 Jun 11 00:13:36.089: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2547 Jun 11 00:13:36.092: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2547 Jun 11 00:13:36.095: INFO: creating *v1.Role: csi-mock-volumes-2547-8913/external-provisioner-cfg-csi-mock-volumes-2547 Jun 11 00:13:36.098: INFO: creating *v1.RoleBinding: csi-mock-volumes-2547-8913/csi-provisioner-role-cfg Jun 11 00:13:36.101: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2547-8913/csi-resizer Jun 11 00:13:36.104: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2547 Jun 11 00:13:36.104: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2547 Jun 11 00:13:36.106: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2547 Jun 11 00:13:36.109: INFO: creating *v1.Role: csi-mock-volumes-2547-8913/external-resizer-cfg-csi-mock-volumes-2547 Jun 11 00:13:36.112: INFO: creating *v1.RoleBinding: csi-mock-volumes-2547-8913/csi-resizer-role-cfg Jun 11 00:13:36.115: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2547-8913/csi-snapshotter Jun 11 00:13:36.118: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2547 Jun 11 00:13:36.118: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2547 
Jun 11 00:13:36.121: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2547 Jun 11 00:13:36.124: INFO: creating *v1.Role: csi-mock-volumes-2547-8913/external-snapshotter-leaderelection-csi-mock-volumes-2547 Jun 11 00:13:36.127: INFO: creating *v1.RoleBinding: csi-mock-volumes-2547-8913/external-snapshotter-leaderelection Jun 11 00:13:36.129: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2547-8913/csi-mock Jun 11 00:13:36.132: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2547 Jun 11 00:13:36.135: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2547 Jun 11 00:13:36.138: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2547 Jun 11 00:13:36.140: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2547 Jun 11 00:13:36.143: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2547 Jun 11 00:13:36.146: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2547 Jun 11 00:13:36.148: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2547 Jun 11 00:13:36.151: INFO: creating *v1.StatefulSet: csi-mock-volumes-2547-8913/csi-mockplugin Jun 11 00:13:36.155: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2547 Jun 11 00:13:36.160: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2547" Jun 11 00:13:36.162: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2547 to register on node node1 STEP: Creating pod Jun 11 00:13:45.679: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:13:45.683: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-48w29] to have phase Bound Jun 11 00:13:45.686: INFO: PersistentVolumeClaim pvc-48w29 found but phase is Pending instead of Bound. 
Jun 11 00:13:47.691: INFO: PersistentVolumeClaim pvc-48w29 found and phase=Bound (2.007291073s) Jun 11 00:13:47.705: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-48w29] to have phase Bound Jun 11 00:13:47.708: INFO: PersistentVolumeClaim pvc-48w29 found and phase=Bound (2.36889ms) STEP: Waiting for expected CSI calls STEP: Waiting for pod to be running STEP: Deleting the previously created pod Jun 11 00:14:05.728: INFO: Deleting pod "pvc-volume-tester-bdhqr" in namespace "csi-mock-volumes-2547" Jun 11 00:14:05.733: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bdhqr" to be fully deleted STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-bdhqr Jun 11 00:14:08.750: INFO: Deleting pod "pvc-volume-tester-bdhqr" in namespace "csi-mock-volumes-2547" STEP: Deleting claim pvc-48w29 Jun 11 00:14:08.760: INFO: Waiting up to 2m0s for PersistentVolume pvc-1d4dd876-2095-4238-b027-c0a034a50f79 to get deleted Jun 11 00:14:08.762: INFO: PersistentVolume pvc-1d4dd876-2095-4238-b027-c0a034a50f79 found and phase=Bound (2.25553ms) Jun 11 00:14:10.765: INFO: PersistentVolume pvc-1d4dd876-2095-4238-b027-c0a034a50f79 was removed STEP: Deleting storageclass csi-mock-volumes-2547-sc4cfdz STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2547 STEP: Waiting for namespaces [csi-mock-volumes-2547] to vanish STEP: uninstalling csi mock driver Jun 11 00:14:16.782: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2547-8913/csi-attacher Jun 11 00:14:16.785: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2547 Jun 11 00:14:16.789: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2547 Jun 11 00:14:16.792: INFO: deleting *v1.Role: csi-mock-volumes-2547-8913/external-attacher-cfg-csi-mock-volumes-2547 Jun 11 00:14:16.796: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2547-8913/csi-attacher-role-cfg Jun 11 00:14:16.799: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2547-8913/csi-provisioner Jun 11 00:14:16.802: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2547 Jun 11 00:14:16.806: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2547 Jun 11 00:14:16.809: INFO: deleting *v1.Role: csi-mock-volumes-2547-8913/external-provisioner-cfg-csi-mock-volumes-2547 Jun 11 00:14:16.813: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2547-8913/csi-provisioner-role-cfg Jun 11 00:14:16.816: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2547-8913/csi-resizer Jun 11 00:14:16.820: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2547 Jun 11 00:14:16.823: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2547 Jun 11 00:14:16.827: INFO: deleting *v1.Role: csi-mock-volumes-2547-8913/external-resizer-cfg-csi-mock-volumes-2547 Jun 11 00:14:16.830: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2547-8913/csi-resizer-role-cfg Jun 11 00:14:16.834: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2547-8913/csi-snapshotter Jun 11 00:14:16.837: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2547 Jun 11 00:14:16.840: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2547 Jun 11 00:14:16.843: INFO: deleting *v1.Role: csi-mock-volumes-2547-8913/external-snapshotter-leaderelection-csi-mock-volumes-2547 Jun 11 00:14:16.847: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2547-8913/external-snapshotter-leaderelection Jun 11 00:14:16.850: 
INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2547-8913/csi-mock Jun 11 00:14:16.853: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2547 Jun 11 00:14:16.857: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2547 Jun 11 00:14:16.860: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2547 Jun 11 00:14:16.864: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2547 Jun 11 00:14:16.867: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2547 Jun 11 00:14:16.871: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2547 Jun 11 00:14:16.874: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2547 Jun 11 00:14:16.878: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2547-8913/csi-mockplugin Jun 11 00:14:16.882: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2547 STEP: deleting the driver namespace: csi-mock-volumes-2547-8913 STEP: Waiting for namespaces [csi-mock-volumes-2547-8913] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:28.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:52.889 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success","total":-1,"completed":8,"skipped":347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:44.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:14:02.670: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9433452b-3e38-4ba6-82e3-9c621a687284] Namespace:persistent-local-volumes-test-6675 PodName:hostexec-node1-7v2z9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:02.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:14:02.766: INFO: Creating a PV followed by a PVC Jun 11 00:14:02.772: INFO: Waiting for PV local-pv4rhcz to bind to PVC pvc-8p24l Jun 
11 00:14:02.772: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8p24l] to have phase Bound Jun 11 00:14:02.774: INFO: PersistentVolumeClaim pvc-8p24l found but phase is Pending instead of Bound. Jun 11 00:14:04.779: INFO: PersistentVolumeClaim pvc-8p24l found but phase is Pending instead of Bound. Jun 11 00:14:06.782: INFO: PersistentVolumeClaim pvc-8p24l found but phase is Pending instead of Bound. Jun 11 00:14:08.788: INFO: PersistentVolumeClaim pvc-8p24l found but phase is Pending instead of Bound. Jun 11 00:14:10.792: INFO: PersistentVolumeClaim pvc-8p24l found but phase is Pending instead of Bound. Jun 11 00:14:12.796: INFO: PersistentVolumeClaim pvc-8p24l found and phase=Bound (10.023478972s) Jun 11 00:14:12.796: INFO: Waiting up to 3m0s for PersistentVolume local-pv4rhcz to have phase Bound Jun 11 00:14:12.798: INFO: PersistentVolume local-pv4rhcz found and phase=Bound (1.892328ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 11 00:14:22.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6675 exec pod-738776de-c402-4492-9d3a-b71fd7bdf485 --namespace=persistent-local-volumes-test-6675 -- stat -c %g /mnt/volume1' Jun 11 00:14:23.178: INFO: stderr: "" Jun 11 00:14:23.178: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 11 00:14:29.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6675 exec pod-6207eb3b-6308-452e-bc3f-2b0ff0e5d3ea --namespace=persistent-local-volumes-test-6675 -- stat -c %g /mnt/volume1' Jun 11 00:14:29.567: INFO: stderr: "" Jun 11 00:14:29.567: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-738776de-c402-4492-9d3a-b71fd7bdf485 in namespace persistent-local-volumes-test-6675 STEP: Deleting second pod STEP: Deleting pod pod-6207eb3b-6308-452e-bc3f-2b0ff0e5d3ea in namespace persistent-local-volumes-test-6675 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:14:29.576: INFO: Deleting PersistentVolumeClaim "pvc-8p24l" Jun 11 00:14:29.580: INFO: Deleting PersistentVolume "local-pv4rhcz" STEP: Removing the test directory Jun 11 00:14:29.584: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9433452b-3e38-4ba6-82e3-9c621a687284] Namespace:persistent-local-volumes-test-6675 PodName:hostexec-node1-7v2z9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:29.584: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:30.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6675" for this suite. 
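The fsGroup check above reduces to two steps: give both pods the same securityContext.fsGroup and confirm the group id on the mounted volume with stat -c %g, exactly as the kubectl exec lines show (1234 in this run). A condensed sketch of that check with an emptyDir standing in for the pre-bound local PVC (pod name and image are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: fsgroup-demo              # illustrative name
    spec:
      securityContext:
        fsGroup: 1234                 # group applied to the volume on mount
      containers:
      - name: write-pod
        image: busybox                # any small image works here
        command: ["sleep", "3600"]
        volumeMounts:
        - name: vol
          mountPath: /mnt/volume1
      volumes:
      - name: vol
        emptyDir: {}                  # stand-in for the local PVC used by the test
    EOF

    # Same assertion the test performs; expected output: 1234
    kubectl exec fsgroup-demo -- stat -c %g /mnt/volume1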
• [SLOW TEST:45.524 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":18,"skipped":741,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:11.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not call NodeUnstage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-3110 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 11 00:13:11.626: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3110-2931/csi-attacher Jun 11 00:13:11.629: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3110 Jun 11 00:13:11.629: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3110 Jun 11 00:13:11.632: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3110 Jun 11 00:13:11.635: INFO: creating *v1.Role: csi-mock-volumes-3110-2931/external-attacher-cfg-csi-mock-volumes-3110 Jun 11 00:13:11.638: INFO: creating *v1.RoleBinding: csi-mock-volumes-3110-2931/csi-attacher-role-cfg Jun 11 00:13:11.641: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3110-2931/csi-provisioner Jun 11 00:13:11.643: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3110 Jun 11 00:13:11.643: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3110 Jun 11 00:13:11.646: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3110 Jun 11 00:13:11.648: INFO: creating *v1.Role: csi-mock-volumes-3110-2931/external-provisioner-cfg-csi-mock-volumes-3110 Jun 11 00:13:11.651: INFO: creating *v1.RoleBinding: csi-mock-volumes-3110-2931/csi-provisioner-role-cfg Jun 11 00:13:11.653: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3110-2931/csi-resizer Jun 11 00:13:11.656: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3110 Jun 11 00:13:11.656: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3110 Jun 11 00:13:11.658: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3110 Jun 11 00:13:11.661: INFO: creating *v1.Role: csi-mock-volumes-3110-2931/external-resizer-cfg-csi-mock-volumes-3110 Jun 11 00:13:11.663: INFO: creating 
*v1.RoleBinding: csi-mock-volumes-3110-2931/csi-resizer-role-cfg Jun 11 00:13:11.666: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3110-2931/csi-snapshotter Jun 11 00:13:11.669: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3110 Jun 11 00:13:11.669: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3110 Jun 11 00:13:11.672: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3110 Jun 11 00:13:11.674: INFO: creating *v1.Role: csi-mock-volumes-3110-2931/external-snapshotter-leaderelection-csi-mock-volumes-3110 Jun 11 00:13:11.677: INFO: creating *v1.RoleBinding: csi-mock-volumes-3110-2931/external-snapshotter-leaderelection Jun 11 00:13:11.680: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3110-2931/csi-mock Jun 11 00:13:11.682: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3110 Jun 11 00:13:11.685: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3110 Jun 11 00:13:11.690: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3110 Jun 11 00:13:11.692: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3110 Jun 11 00:13:11.695: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3110 Jun 11 00:13:11.697: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3110 Jun 11 00:13:11.699: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3110 Jun 11 00:13:11.702: INFO: creating *v1.StatefulSet: csi-mock-volumes-3110-2931/csi-mockplugin Jun 11 00:13:11.706: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3110 Jun 11 00:13:11.709: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3110" Jun 11 00:13:11.711: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3110 to register on node node2 I0611 00:13:21.804860 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3110","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:13:21.899545 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0611 00:13:21.901322 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3110","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:13:21.942346 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0611 00:13:21.983652 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0611 00:13:22.462443 26 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3110"},"Error":"","FullError":null} STEP: Creating pod Jun 11 00:13:27.981: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:13:27.986: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-7k9dv] to have phase Bound Jun 11 00:13:27.988: INFO: PersistentVolumeClaim pvc-7k9dv found but phase is Pending instead of Bound. I0611 00:13:27.994501 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-89010384-070a-4d35-944b-bd605ac2e102","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-89010384-070a-4d35-944b-bd605ac2e102"}}},"Error":"","FullError":null} Jun 11 00:13:29.991: INFO: PersistentVolumeClaim pvc-7k9dv found and phase=Bound (2.004726938s) Jun 11 00:13:30.005: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-7k9dv] to have phase Bound Jun 11 00:13:30.007: INFO: PersistentVolumeClaim pvc-7k9dv found and phase=Bound (2.129448ms) STEP: Waiting for expected CSI calls I0611 00:13:30.867603 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:13:30.894239 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-89010384-070a-4d35-944b-bd605ac2e102/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-89010384-070a-4d35-944b-bd605ac2e102","storage.kubernetes.io/csiProvisionerIdentity":"1654906401983-8081-csi-mock-csi-mock-volumes-3110"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} STEP: Deleting the previously created pod Jun 11 00:13:31.008: INFO: Deleting pod "pvc-volume-tester-45h5x" in namespace "csi-mock-volumes-3110" Jun 11 00:13:31.012: INFO: Wait up to 5m0s for pod "pvc-volume-tester-45h5x" to be fully deleted I0611 00:13:31.478564 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:13:31.480844 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-89010384-070a-4d35-944b-bd605ac2e102/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-89010384-070a-4d35-944b-bd605ac2e102","storage.kubernetes.io/csiProvisionerIdentity":"1654906401983-8081-csi-mock-csi-mock-volumes-3110"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I0611 00:13:32.486665 26 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:13:32.488619 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-89010384-070a-4d35-944b-bd605ac2e102/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-89010384-070a-4d35-944b-bd605ac2e102","storage.kubernetes.io/csiProvisionerIdentity":"1654906401983-8081-csi-mock-csi-mock-volumes-3110"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I0611 00:13:34.552478 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:13:34.554843 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-89010384-070a-4d35-944b-bd605ac2e102/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-89010384-070a-4d35-944b-bd605ac2e102","storage.kubernetes.io/csiProvisionerIdentity":"1654906401983-8081-csi-mock-csi-mock-volumes-3110"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-45h5x Jun 11 00:13:40.019: INFO: Deleting pod "pvc-volume-tester-45h5x" in namespace "csi-mock-volumes-3110" STEP: Deleting claim pvc-7k9dv Jun 11 00:13:40.030: INFO: Waiting up to 2m0s for PersistentVolume pvc-89010384-070a-4d35-944b-bd605ac2e102 to get deleted Jun 11 00:13:40.033: INFO: PersistentVolume pvc-89010384-070a-4d35-944b-bd605ac2e102 found and phase=Bound (2.868444ms) I0611 00:13:40.042667 26 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 11 00:13:42.037: INFO: PersistentVolume pvc-89010384-070a-4d35-944b-bd605ac2e102 was removed STEP: Deleting storageclass csi-mock-volumes-3110-scxkt5f STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3110 STEP: Waiting for namespaces [csi-mock-volumes-3110] to vanish STEP: uninstalling csi mock driver Jun 11 00:13:48.064: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3110-2931/csi-attacher Jun 11 00:13:48.068: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3110 Jun 11 00:13:48.072: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3110 Jun 11 00:13:48.076: INFO: deleting *v1.Role: csi-mock-volumes-3110-2931/external-attacher-cfg-csi-mock-volumes-3110 Jun 11 00:13:48.079: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3110-2931/csi-attacher-role-cfg Jun 11 00:13:48.082: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3110-2931/csi-provisioner Jun 11 00:13:48.086: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3110 Jun 11 00:13:48.088: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3110 Jun 11 00:13:48.092: INFO: 
deleting *v1.Role: csi-mock-volumes-3110-2931/external-provisioner-cfg-csi-mock-volumes-3110 Jun 11 00:13:48.100: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3110-2931/csi-provisioner-role-cfg Jun 11 00:13:48.108: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3110-2931/csi-resizer Jun 11 00:13:48.116: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3110 Jun 11 00:13:48.120: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3110 Jun 11 00:13:48.128: INFO: deleting *v1.Role: csi-mock-volumes-3110-2931/external-resizer-cfg-csi-mock-volumes-3110 Jun 11 00:13:48.140: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3110-2931/csi-resizer-role-cfg Jun 11 00:13:48.144: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3110-2931/csi-snapshotter Jun 11 00:13:48.147: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3110 Jun 11 00:13:48.151: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3110 Jun 11 00:13:48.154: INFO: deleting *v1.Role: csi-mock-volumes-3110-2931/external-snapshotter-leaderelection-csi-mock-volumes-3110 Jun 11 00:13:48.157: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3110-2931/external-snapshotter-leaderelection Jun 11 00:13:48.162: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3110-2931/csi-mock Jun 11 00:13:48.166: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3110 Jun 11 00:13:48.170: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3110 Jun 11 00:13:48.173: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3110 Jun 11 00:13:48.176: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3110 Jun 11 00:13:48.180: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3110 Jun 11 00:13:48.184: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3110 Jun 11 00:13:48.187: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3110 Jun 11 00:13:48.190: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3110-2931/csi-mockplugin Jun 11 00:13:48.194: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3110 STEP: deleting the driver namespace: csi-mock-volumes-3110-2931 STEP: Waiting for namespaces [csi-mock-volumes-3110-2931] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:32.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:80.653 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should not call NodeUnstage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error","total":-1,"completed":14,"skipped":720,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:32.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:14:32.342: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:32.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2624" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:26.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 11 00:14:26.442: INFO: The status of Pod test-hostpath-type-4tcqc is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:28.448: INFO: The status of Pod test-hostpath-type-4tcqc is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:30.448: INFO: The status of Pod test-hostpath-type-4tcqc is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 11 00:14:30.451: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-6406 PodName:test-hostpath-type-4tcqc ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:30.451: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285 STEP: Creating pod STEP: Checking for HostPathType error event 
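[Editor's sketch, not part of the captured log] The spec above creates a character device with mknod and then tries to mount it while declaring HostPathDirectory, expecting kubelet's hostPath type check to refuse the mount. A minimal sketch of such a pod using the core/v1 Go types the e2e framework builds on; the image, names and mount path are illustrative assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathTypeMismatchPod mounts a path that is really a character device
// (created beforehand with: mknod /mnt/test/achardev c 89 1) while declaring
// HostPathDirectory, so kubelet's hostPath type check should fail the mount.
func hostPathTypeMismatchPod() *corev1.Pod {
	wrongType := corev1.HostPathDirectory // the actual object is a char device
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "hostpath-type-mismatch-"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "tester",
				Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29", // placeholder image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "dev", MountPath: "/mnt/dev"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "dev",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/mnt/test/achardev",
						Type: &wrongType,
					},
				},
			}},
		},
	}
}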
[AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:32.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-6406" for this suite. • [SLOW TEST:6.176 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory","total":-1,"completed":13,"skipped":365,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:13.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-5299 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:13:13.707: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5299-2535/csi-attacher Jun 11 00:13:13.710: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5299 Jun 11 00:13:13.710: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5299 Jun 11 00:13:13.712: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5299 Jun 11 00:13:13.716: INFO: creating *v1.Role: csi-mock-volumes-5299-2535/external-attacher-cfg-csi-mock-volumes-5299 Jun 11 00:13:13.719: INFO: creating *v1.RoleBinding: csi-mock-volumes-5299-2535/csi-attacher-role-cfg Jun 11 00:13:13.721: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5299-2535/csi-provisioner Jun 11 00:13:13.723: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5299 Jun 11 00:13:13.723: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5299 Jun 11 00:13:13.727: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5299 Jun 11 00:13:13.729: INFO: creating *v1.Role: csi-mock-volumes-5299-2535/external-provisioner-cfg-csi-mock-volumes-5299 Jun 11 00:13:13.732: INFO: creating *v1.RoleBinding: csi-mock-volumes-5299-2535/csi-provisioner-role-cfg Jun 11 00:13:13.735: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5299-2535/csi-resizer Jun 11 00:13:13.738: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5299 Jun 11 00:13:13.738: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5299 Jun 11 00:13:13.740: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5299 Jun 11 00:13:13.743: INFO: creating *v1.Role: csi-mock-volumes-5299-2535/external-resizer-cfg-csi-mock-volumes-5299 Jun 11 00:13:13.746: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-5299-2535/csi-resizer-role-cfg Jun 11 00:13:13.748: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5299-2535/csi-snapshotter Jun 11 00:13:13.751: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5299 Jun 11 00:13:13.751: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5299 Jun 11 00:13:13.753: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5299 Jun 11 00:13:13.756: INFO: creating *v1.Role: csi-mock-volumes-5299-2535/external-snapshotter-leaderelection-csi-mock-volumes-5299 Jun 11 00:13:13.758: INFO: creating *v1.RoleBinding: csi-mock-volumes-5299-2535/external-snapshotter-leaderelection Jun 11 00:13:13.761: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5299-2535/csi-mock Jun 11 00:13:13.764: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5299 Jun 11 00:13:13.766: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5299 Jun 11 00:13:13.769: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5299 Jun 11 00:13:13.773: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5299 Jun 11 00:13:13.775: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5299 Jun 11 00:13:13.778: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5299 Jun 11 00:13:13.781: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5299 Jun 11 00:13:13.784: INFO: creating *v1.StatefulSet: csi-mock-volumes-5299-2535/csi-mockplugin Jun 11 00:13:13.789: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5299 Jun 11 00:13:13.791: INFO: creating *v1.StatefulSet: csi-mock-volumes-5299-2535/csi-mockplugin-attacher Jun 11 00:13:13.794: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5299" Jun 11 00:13:13.796: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5299 to register on node node2 STEP: Creating pod Jun 11 00:13:23.312: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:13:23.316: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jf6cz] to have phase Bound Jun 11 00:13:23.318: INFO: PersistentVolumeClaim pvc-jf6cz found but phase is Pending instead of Bound. 
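[Editor's sketch, not part of the captured log] The repeated "Waiting up to timeout=5m0s for PersistentVolumeClaims [...] to have phase Bound" entries here and throughout this run come from polling the claim until its status phase flips to Bound. A rough sketch of that kind of wait loop with client-go; the poll interval and helper name are assumptions, not the framework's own code.

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls the claim until it reports phase Bound or the timeout expires.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // give up on unexpected API errors
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}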
Jun 11 00:13:25.321: INFO: PersistentVolumeClaim pvc-jf6cz found and phase=Bound (2.004322709s) STEP: Deleting the previously created pod Jun 11 00:13:47.345: INFO: Deleting pod "pvc-volume-tester-lsk8x" in namespace "csi-mock-volumes-5299" Jun 11 00:13:47.348: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lsk8x" to be fully deleted STEP: Checking CSI driver logs Jun 11 00:13:51.366: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/a4eaad6d-dabf-4138-a7bc-e5a387cf57a2/volumes/kubernetes.io~csi/pvc-907b0fbe-2d05-4816-b394-e9797e1d2d9c/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-lsk8x Jun 11 00:13:51.367: INFO: Deleting pod "pvc-volume-tester-lsk8x" in namespace "csi-mock-volumes-5299" STEP: Deleting claim pvc-jf6cz Jun 11 00:13:51.376: INFO: Waiting up to 2m0s for PersistentVolume pvc-907b0fbe-2d05-4816-b394-e9797e1d2d9c to get deleted Jun 11 00:13:51.379: INFO: PersistentVolume pvc-907b0fbe-2d05-4816-b394-e9797e1d2d9c found and phase=Bound (2.392176ms) Jun 11 00:13:53.381: INFO: PersistentVolume pvc-907b0fbe-2d05-4816-b394-e9797e1d2d9c found and phase=Released (2.004968441s) Jun 11 00:13:55.387: INFO: PersistentVolume pvc-907b0fbe-2d05-4816-b394-e9797e1d2d9c found and phase=Released (4.011009553s) Jun 11 00:13:57.391: INFO: PersistentVolume pvc-907b0fbe-2d05-4816-b394-e9797e1d2d9c found and phase=Released (6.014139164s) Jun 11 00:13:59.398: INFO: PersistentVolume pvc-907b0fbe-2d05-4816-b394-e9797e1d2d9c was removed STEP: Deleting storageclass csi-mock-volumes-5299-sc2dbls STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5299 STEP: Waiting for namespaces [csi-mock-volumes-5299] to vanish STEP: uninstalling csi mock driver Jun 11 00:14:05.413: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5299-2535/csi-attacher Jun 11 00:14:05.417: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5299 Jun 11 00:14:05.421: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5299 Jun 11 00:14:05.425: INFO: deleting *v1.Role: csi-mock-volumes-5299-2535/external-attacher-cfg-csi-mock-volumes-5299 Jun 11 00:14:05.428: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5299-2535/csi-attacher-role-cfg Jun 11 00:14:05.431: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5299-2535/csi-provisioner Jun 11 00:14:05.434: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5299 Jun 11 00:14:05.439: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5299 Jun 11 00:14:05.443: INFO: deleting *v1.Role: csi-mock-volumes-5299-2535/external-provisioner-cfg-csi-mock-volumes-5299 Jun 11 00:14:05.446: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5299-2535/csi-provisioner-role-cfg Jun 11 00:14:05.449: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5299-2535/csi-resizer Jun 11 00:14:05.453: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5299 Jun 11 00:14:05.456: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5299 Jun 11 00:14:05.459: INFO: deleting *v1.Role: csi-mock-volumes-5299-2535/external-resizer-cfg-csi-mock-volumes-5299 Jun 11 00:14:05.463: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5299-2535/csi-resizer-role-cfg Jun 11 00:14:05.467: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-5299-2535/csi-snapshotter Jun 11 00:14:05.470: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5299 Jun 11 00:14:05.474: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5299 Jun 11 00:14:05.477: INFO: deleting *v1.Role: csi-mock-volumes-5299-2535/external-snapshotter-leaderelection-csi-mock-volumes-5299 Jun 11 00:14:05.480: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5299-2535/external-snapshotter-leaderelection Jun 11 00:14:05.483: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5299-2535/csi-mock Jun 11 00:14:05.486: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5299 Jun 11 00:14:05.490: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5299 Jun 11 00:14:05.493: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5299 Jun 11 00:14:05.496: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5299 Jun 11 00:14:05.500: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5299 Jun 11 00:14:05.503: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5299 Jun 11 00:14:05.506: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5299 Jun 11 00:14:05.510: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5299-2535/csi-mockplugin Jun 11 00:14:05.514: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5299 Jun 11 00:14:05.517: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5299-2535/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5299-2535 STEP: Waiting for namespaces [csi-mock-volumes-5299-2535] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:33.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:79.893 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":13,"skipped":258,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:15.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test 
volumes Jun 11 00:14:19.194: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b8f74a4e-b46a-405b-8934-74a909e6623b-backend && ln -s /tmp/local-volume-test-b8f74a4e-b46a-405b-8934-74a909e6623b-backend /tmp/local-volume-test-b8f74a4e-b46a-405b-8934-74a909e6623b] Namespace:persistent-local-volumes-test-9996 PodName:hostexec-node1-8gw4j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:19.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:14:19.283: INFO: Creating a PV followed by a PVC Jun 11 00:14:19.290: INFO: Waiting for PV local-pv77s8c to bind to PVC pvc-x885n Jun 11 00:14:19.290: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-x885n] to have phase Bound Jun 11 00:14:19.292: INFO: PersistentVolumeClaim pvc-x885n found but phase is Pending instead of Bound. Jun 11 00:14:21.295: INFO: PersistentVolumeClaim pvc-x885n found but phase is Pending instead of Bound. Jun 11 00:14:23.298: INFO: PersistentVolumeClaim pvc-x885n found but phase is Pending instead of Bound. Jun 11 00:14:25.301: INFO: PersistentVolumeClaim pvc-x885n found but phase is Pending instead of Bound. Jun 11 00:14:27.304: INFO: PersistentVolumeClaim pvc-x885n found and phase=Bound (8.014750801s) Jun 11 00:14:27.305: INFO: Waiting up to 3m0s for PersistentVolume local-pv77s8c to have phase Bound Jun 11 00:14:27.307: INFO: PersistentVolume local-pv77s8c found and phase=Bound (2.67714ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:14:33.335: INFO: pod "pod-da8391f2-07be-4834-be2c-78b6f55a6096" created on Node "node1" STEP: Writing in pod1 Jun 11 00:14:33.335: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9996 PodName:pod-da8391f2-07be-4834-be2c-78b6f55a6096 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:33.335: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:33.414: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 11 00:14:33.415: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9996 PodName:pod-da8391f2-07be-4834-be2c-78b6f55a6096 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:33.415: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:33.493: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 11 00:14:33.493: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-b8f74a4e-b46a-405b-8934-74a909e6623b > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9996 PodName:pod-da8391f2-07be-4834-be2c-78b6f55a6096 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:33.493: INFO: >>> kubeConfig: 
/root/.kube/config Jun 11 00:14:33.572: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-b8f74a4e-b46a-405b-8934-74a909e6623b > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-da8391f2-07be-4834-be2c-78b6f55a6096 in namespace persistent-local-volumes-test-9996 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:14:33.577: INFO: Deleting PersistentVolumeClaim "pvc-x885n" Jun 11 00:14:33.581: INFO: Deleting PersistentVolume "local-pv77s8c" STEP: Removing the test directory Jun 11 00:14:33.588: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b8f74a4e-b46a-405b-8934-74a909e6623b && rm -r /tmp/local-volume-test-b8f74a4e-b46a-405b-8934-74a909e6623b-backend] Namespace:persistent-local-volumes-test-9996 PodName:hostexec-node1-8gw4j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:33.588: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:33.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9996" for this suite. • [SLOW TEST:18.557 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":17,"skipped":554,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:33.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 11 00:14:33.772: INFO: The status of Pod test-hostpath-type-bx7xr is Pending, waiting for it to be Running (with 
Ready = true) Jun 11 00:14:35.777: INFO: The status of Pod test-hostpath-type-bx7xr is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:37.777: INFO: The status of Pod test-hostpath-type-bx7xr is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:39.777: INFO: The status of Pod test-hostpath-type-bx7xr is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:45.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-4200" for this suite. • [SLOW TEST:12.145 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev","total":-1,"completed":18,"skipped":557,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:30.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:14:34.350: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b7377059-b600-4667-b628-9b2e7569a784-backend && ln -s /tmp/local-volume-test-b7377059-b600-4667-b628-9b2e7569a784-backend /tmp/local-volume-test-b7377059-b600-4667-b628-9b2e7569a784] Namespace:persistent-local-volumes-test-7187 PodName:hostexec-node2-9sz5c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:34.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:14:34.433: INFO: Creating a PV followed by a PVC Jun 11 00:14:34.439: INFO: Waiting for PV local-pvrqf52 to bind to PVC pvc-lgstd Jun 11 00:14:34.439: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-lgstd] to have phase Bound Jun 11 
00:14:34.442: INFO: PersistentVolumeClaim pvc-lgstd found but phase is Pending instead of Bound. Jun 11 00:14:36.444: INFO: PersistentVolumeClaim pvc-lgstd found but phase is Pending instead of Bound. Jun 11 00:14:38.448: INFO: PersistentVolumeClaim pvc-lgstd found but phase is Pending instead of Bound. Jun 11 00:14:40.453: INFO: PersistentVolumeClaim pvc-lgstd found but phase is Pending instead of Bound. Jun 11 00:14:42.456: INFO: PersistentVolumeClaim pvc-lgstd found and phase=Bound (8.016907064s) Jun 11 00:14:42.456: INFO: Waiting up to 3m0s for PersistentVolume local-pvrqf52 to have phase Bound Jun 11 00:14:42.459: INFO: PersistentVolume local-pvrqf52 found and phase=Bound (2.781271ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 11 00:14:46.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-7187 exec pod-b7953b4d-3342-4ba2-aa46-8886be936834 --namespace=persistent-local-volumes-test-7187 -- stat -c %g /mnt/volume1' Jun 11 00:14:46.736: INFO: stderr: "" Jun 11 00:14:46.736: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-b7953b4d-3342-4ba2-aa46-8886be936834 in namespace persistent-local-volumes-test-7187 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:14:46.741: INFO: Deleting PersistentVolumeClaim "pvc-lgstd" Jun 11 00:14:46.745: INFO: Deleting PersistentVolume "local-pvrqf52" STEP: Removing the test directory Jun 11 00:14:46.749: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b7377059-b600-4667-b628-9b2e7569a784 && rm -r /tmp/local-volume-test-b7377059-b600-4667-b628-9b2e7569a784-backend] Namespace:persistent-local-volumes-test-7187 PodName:hostexec-node2-9sz5c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:46.749: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:47.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7187" for this suite. 
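[Editor's sketch, not part of the captured log] The "1234" read back by stat -c %g above is the fsGroup requested in the pod's security context; the kubelet applies that group to the mounted local volume. A minimal sketch of a pod asking for that ownership on its PVC; image, names and command are illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fsGroupPod mounts the given claim with fsGroup 1234, so the volume's group
// ownership can be read back inside the container with: stat -c %g /mnt/volume1
func fsGroupPod(pvcName string) *corev1.Pod {
	fsGroup := int64(1234)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "fsgroup-"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:         "write-pod",
				Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29", // placeholder image
				Command:      []string{"sh", "-c", "stat -c %g /mnt/volume1 && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "vol1", MountPath: "/mnt/volume1"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "vol1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: pvcName},
				},
			}},
		},
	}
}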
• [SLOW TEST:16.763 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":19,"skipped":815,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:32.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:14:36.430: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4e1c659b-9883-4386-a000-76af437cc605] Namespace:persistent-local-volumes-test-8738 PodName:hostexec-node2-kqfb4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:36.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:14:36.553: INFO: Creating a PV followed by a PVC Jun 11 00:14:36.559: INFO: Waiting for PV local-pvhx4p7 to bind to PVC pvc-vlcpf Jun 11 00:14:36.559: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vlcpf] to have phase Bound Jun 11 00:14:36.561: INFO: PersistentVolumeClaim pvc-vlcpf found but phase is Pending instead of Bound. Jun 11 00:14:38.565: INFO: PersistentVolumeClaim pvc-vlcpf found but phase is Pending instead of Bound. Jun 11 00:14:40.569: INFO: PersistentVolumeClaim pvc-vlcpf found but phase is Pending instead of Bound. 
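[Editor's sketch, not part of the captured log] The "Creating a PV followed by a PVC" step above builds a local PersistentVolume pinned to a single node, which is why the claim sits in Pending until the binder matches the pair. A rough sketch of the shape of such a PV; path, node name, capacity and reclaim policy are illustrative assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV describes a local volume at path on one node; nodeAffinity is
// mandatory for local PVs so only pods scheduled to that node can use it.
func localPV(path, node string) *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: path},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{node},
						}},
					}},
				},
			},
		},
	}
}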
Jun 11 00:14:42.573: INFO: PersistentVolumeClaim pvc-vlcpf found and phase=Bound (6.013651462s) Jun 11 00:14:42.573: INFO: Waiting up to 3m0s for PersistentVolume local-pvhx4p7 to have phase Bound Jun 11 00:14:42.576: INFO: PersistentVolume local-pvhx4p7 found and phase=Bound (2.476019ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:14:46.605: INFO: pod "pod-a45f75e0-e57c-49e4-8194-31dcfe66c400" created on Node "node2" STEP: Writing in pod1 Jun 11 00:14:46.605: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8738 PodName:pod-a45f75e0-e57c-49e4-8194-31dcfe66c400 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:46.605: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:46.709: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:14:46.709: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8738 PodName:pod-a45f75e0-e57c-49e4-8194-31dcfe66c400 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:46.709: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:47.039: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-a45f75e0-e57c-49e4-8194-31dcfe66c400 in namespace persistent-local-volumes-test-8738 STEP: Creating pod2 STEP: Creating a pod Jun 11 00:14:53.067: INFO: pod "pod-fc1fb215-a649-4375-8eb2-e119e8076d52" created on Node "node2" STEP: Reading in pod2 Jun 11 00:14:53.068: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8738 PodName:pod-fc1fb215-a649-4375-8eb2-e119e8076d52 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:14:53.068: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:53.392: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-fc1fb215-a649-4375-8eb2-e119e8076d52 in namespace persistent-local-volumes-test-8738 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:14:53.397: INFO: Deleting PersistentVolumeClaim "pvc-vlcpf" Jun 11 00:14:53.401: INFO: Deleting PersistentVolume "local-pvhx4p7" STEP: Removing the test directory Jun 11 00:14:53.406: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4e1c659b-9883-4386-a000-76af437cc605] Namespace:persistent-local-volumes-test-8738 PodName:hostexec-node2-kqfb4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:53.406: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:53.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "persistent-local-volumes-test-8738" for this suite. • [SLOW TEST:21.132 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":15,"skipped":782,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:45.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 11 00:14:45.993: INFO: The status of Pod test-hostpath-type-kxl9h is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:47.996: INFO: The status of Pod test-hostpath-type-kxl9h is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:49.996: INFO: The status of Pod test-hostpath-type-kxl9h is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:51.999: INFO: The status of Pod test-hostpath-type-kxl9h is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:14:53.999: INFO: The status of Pod test-hostpath-type-kxl9h is Running (Ready = true) STEP: running on node node2 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:56.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-1196" for this suite. 
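[Editor's sketch, not part of the captured log] Like the other HostPathType specs in this run, the socket test above passes when a FailedMount event appears instead of a running pod. One possible way to implement the "Checking for HostPathType error event" step with client-go; the reason string and message substring are assumptions about kubelet's wording, not quoted from the framework.

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
)

// hasHostPathTypeError reports whether the pod already has a FailedMount event
// complaining about an unexpected hostPath type.
func hasHostPathTypeError(cs kubernetes.Interface, ns, pod string) (bool, error) {
	selector := fields.Set{
		"involvedObject.name": pod,
		"reason":              "FailedMount",
	}.AsSelector().String()
	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{FieldSelector: selector})
	if err != nil {
		return false, err
	}
	for _, ev := range events.Items {
		if strings.Contains(ev.Message, "hostPath type check failed") { // assumed message fragment
			return true, nil
		}
	}
	return false, nil
}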
• [SLOW TEST:10.078 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev","total":-1,"completed":19,"skipped":606,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:53.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 STEP: Creating a pod to test emptydir volume type on node default medium Jun 11 00:14:53.602: INFO: Waiting up to 5m0s for pod "pod-af545e8c-d6f9-4f9c-8d6b-fd92d784bb0e" in namespace "emptydir-7399" to be "Succeeded or Failed" Jun 11 00:14:53.605: INFO: Pod "pod-af545e8c-d6f9-4f9c-8d6b-fd92d784bb0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.740782ms Jun 11 00:14:55.609: INFO: Pod "pod-af545e8c-d6f9-4f9c-8d6b-fd92d784bb0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006351735s Jun 11 00:14:57.613: INFO: Pod "pod-af545e8c-d6f9-4f9c-8d6b-fd92d784bb0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010480301s Jun 11 00:14:59.616: INFO: Pod "pod-af545e8c-d6f9-4f9c-8d6b-fd92d784bb0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014024935s STEP: Saw pod success Jun 11 00:14:59.616: INFO: Pod "pod-af545e8c-d6f9-4f9c-8d6b-fd92d784bb0e" satisfied condition "Succeeded or Failed" Jun 11 00:14:59.618: INFO: Trying to get logs from node node2 pod pod-af545e8c-d6f9-4f9c-8d6b-fd92d784bb0e container test-container: STEP: delete the pod Jun 11 00:14:59.636: INFO: Waiting for pod pod-af545e8c-d6f9-4f9c-8d6b-fd92d784bb0e to disappear Jun 11 00:14:59.638: INFO: Pod pod-af545e8c-d6f9-4f9c-8d6b-fd92d784bb0e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:14:59.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7399" for this suite. 
• [SLOW TEST:6.082 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":16,"skipped":803,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:13:29.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] two pods: should call NodeStage after previous NodeUnstage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 STEP: Building a driver namespace object, basename csi-mock-volumes-3020 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 11 00:13:29.550: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3020-7596/csi-attacher Jun 11 00:13:29.554: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3020 Jun 11 00:13:29.554: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3020 Jun 11 00:13:29.557: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3020 Jun 11 00:13:29.560: INFO: creating *v1.Role: csi-mock-volumes-3020-7596/external-attacher-cfg-csi-mock-volumes-3020 Jun 11 00:13:29.563: INFO: creating *v1.RoleBinding: csi-mock-volumes-3020-7596/csi-attacher-role-cfg Jun 11 00:13:29.566: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3020-7596/csi-provisioner Jun 11 00:13:29.569: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3020 Jun 11 00:13:29.569: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3020 Jun 11 00:13:29.571: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3020 Jun 11 00:13:29.574: INFO: creating *v1.Role: csi-mock-volumes-3020-7596/external-provisioner-cfg-csi-mock-volumes-3020 Jun 11 00:13:29.577: INFO: creating *v1.RoleBinding: csi-mock-volumes-3020-7596/csi-provisioner-role-cfg Jun 11 00:13:29.579: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3020-7596/csi-resizer Jun 11 00:13:29.582: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3020 Jun 11 00:13:29.582: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3020 Jun 11 00:13:29.584: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3020 Jun 11 00:13:29.587: INFO: creating *v1.Role: csi-mock-volumes-3020-7596/external-resizer-cfg-csi-mock-volumes-3020 Jun 11 00:13:29.590: INFO: creating *v1.RoleBinding: csi-mock-volumes-3020-7596/csi-resizer-role-cfg Jun 11 00:13:29.592: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3020-7596/csi-snapshotter Jun 11 
00:13:29.595: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3020 Jun 11 00:13:29.595: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3020 Jun 11 00:13:29.597: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3020 Jun 11 00:13:29.600: INFO: creating *v1.Role: csi-mock-volumes-3020-7596/external-snapshotter-leaderelection-csi-mock-volumes-3020 Jun 11 00:13:29.602: INFO: creating *v1.RoleBinding: csi-mock-volumes-3020-7596/external-snapshotter-leaderelection Jun 11 00:13:29.605: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3020-7596/csi-mock Jun 11 00:13:29.608: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3020 Jun 11 00:13:29.610: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3020 Jun 11 00:13:29.615: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3020 Jun 11 00:13:29.617: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3020 Jun 11 00:13:29.620: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3020 Jun 11 00:13:29.622: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3020 Jun 11 00:13:29.625: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3020 Jun 11 00:13:29.628: INFO: creating *v1.StatefulSet: csi-mock-volumes-3020-7596/csi-mockplugin Jun 11 00:13:29.632: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3020 Jun 11 00:13:29.635: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3020" Jun 11 00:13:29.638: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3020 to register on node node2 I0611 00:13:36.718176 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3020","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:13:36.811099 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0611 00:13:36.812980 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3020","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:13:36.814198 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0611 00:13:36.815694 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0611 00:13:37.501623 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3020"},"Error":"","FullError":null} STEP: Creating pod Jun 11 00:13:39.154: INFO: Warning: Making PVC: VolumeMode specified as 
invalid empty string, treating as nil Jun 11 00:13:39.158: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-8mlds] to have phase Bound Jun 11 00:13:39.161: INFO: PersistentVolumeClaim pvc-8mlds found but phase is Pending instead of Bound. I0611 00:13:39.169032 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12"}}},"Error":"","FullError":null} Jun 11 00:13:41.164: INFO: PersistentVolumeClaim pvc-8mlds found and phase=Bound (2.005961698s) Jun 11 00:13:41.179: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-8mlds] to have phase Bound Jun 11 00:13:41.182: INFO: PersistentVolumeClaim pvc-8mlds found and phase=Bound (2.160836ms) I0611 00:13:41.433173 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:13:41.435: INFO: >>> kubeConfig: /root/.kube/config I0611 00:13:41.525928 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12","storage.kubernetes.io/csiProvisionerIdentity":"1654906416815-8081-csi-mock-csi-mock-volumes-3020"}},"Response":{},"Error":"","FullError":null} I0611 00:13:41.530863 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:13:41.532: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:41.633: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:41.725: INFO: >>> kubeConfig: /root/.kube/config I0611 00:13:41.810973 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/globalmount","target_path":"/var/lib/kubelet/pods/e0c79252-2f63-4fd3-8b64-e163a1ebf34e/volumes/kubernetes.io~csi/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12","storage.kubernetes.io/csiProvisionerIdentity":"1654906416815-8081-csi-mock-csi-mock-volumes-3020"}},"Response":{},"Error":"","FullError":null} Jun 11 00:13:47.187: INFO: Deleting pod "pvc-volume-tester-n5lhn" in namespace "csi-mock-volumes-3020" Jun 11 00:13:47.191: INFO: Wait up to 5m0s for pod "pvc-volume-tester-n5lhn" to be fully deleted I0611 00:13:47.704326 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:13:47.706714 39 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/e0c79252-2f63-4fd3-8b64-e163a1ebf34e/volumes/kubernetes.io~csi/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} Jun 11 00:13:48.720: INFO: >>> kubeConfig: /root/.kube/config I0611 00:13:48.825589 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/e0c79252-2f63-4fd3-8b64-e163a1ebf34e/volumes/kubernetes.io~csi/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/mount"},"Response":{},"Error":"","FullError":null} I0611 00:13:48.923099 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:13:48.924883 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I0611 00:13:49.527832 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:13:49.529688 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I0611 00:13:50.533909 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:13:50.536135 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I0611 00:13:52.145032 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:13:52.146: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:52.239: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:13:52.320: INFO: >>> kubeConfig: /root/.kube/config I0611 00:13:52.403883 39 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/globalmount","target_path":"/var/lib/kubelet/pods/3da9bffd-a9fc-4163-8c86-d12fd4f0e81b/volumes/kubernetes.io~csi/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12","storage.kubernetes.io/csiProvisionerIdentity":"1654906416815-8081-csi-mock-csi-mock-volumes-3020"}},"Response":{},"Error":"","FullError":null} Jun 11 00:13:57.210: INFO: Deleting pod "pvc-volume-tester-wwqrt" in namespace "csi-mock-volumes-3020" Jun 11 00:13:57.214: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wwqrt" to be fully deleted Jun 11 00:14:02.372: INFO: >>> kubeConfig: /root/.kube/config I0611 00:14:02.470722 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/3da9bffd-a9fc-4163-8c86-d12fd4f0e81b/volumes/kubernetes.io~csi/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/mount"},"Response":{},"Error":"","FullError":null} I0611 00:14:02.575229 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:14:02.577316 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls Jun 11 00:14:08.223: FAIL: while waiting for all CSI calls Unexpected error: <*errors.errorString | 0xc000bfbf60>: { s: "Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0)", } Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0) occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.13.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017 +0x79e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0007d3980) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0007d3980) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0007d3980, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 STEP: Deleting pod pvc-volume-tester-n5lhn Jun 11 00:14:08.224: INFO: Deleting pod "pvc-volume-tester-n5lhn" in namespace "csi-mock-volumes-3020" STEP: Deleting pod pvc-volume-tester-wwqrt Jun 11 00:14:08.227: INFO: Deleting pod "pvc-volume-tester-wwqrt" in namespace "csi-mock-volumes-3020" STEP: Deleting claim pvc-8mlds Jun 11 00:14:08.238: INFO: Waiting up to 2m0s for PersistentVolume pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12 to get deleted Jun 11 00:14:08.240: INFO: PersistentVolume pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12 found and phase=Bound (2.056408ms) I0611 00:14:08.252292 39 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 11 00:14:10.243: INFO: PersistentVolume pvc-5c85501d-f70f-4d82-aa9c-c0580ab28f12 was removed STEP: 
Deleting storageclass csi-mock-volumes-3020-scdkdfc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3020 STEP: Waiting for namespaces [csi-mock-volumes-3020] to vanish STEP: uninstalling csi mock driver Jun 11 00:14:16.275: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3020-7596/csi-attacher Jun 11 00:14:16.278: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3020 Jun 11 00:14:16.282: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3020 Jun 11 00:14:16.286: INFO: deleting *v1.Role: csi-mock-volumes-3020-7596/external-attacher-cfg-csi-mock-volumes-3020 Jun 11 00:14:16.289: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3020-7596/csi-attacher-role-cfg Jun 11 00:14:16.293: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3020-7596/csi-provisioner Jun 11 00:14:16.296: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3020 Jun 11 00:14:16.299: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3020 Jun 11 00:14:16.302: INFO: deleting *v1.Role: csi-mock-volumes-3020-7596/external-provisioner-cfg-csi-mock-volumes-3020 Jun 11 00:14:16.306: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3020-7596/csi-provisioner-role-cfg Jun 11 00:14:16.309: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3020-7596/csi-resizer Jun 11 00:14:16.311: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3020 Jun 11 00:14:16.315: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3020 Jun 11 00:14:16.318: INFO: deleting *v1.Role: csi-mock-volumes-3020-7596/external-resizer-cfg-csi-mock-volumes-3020 Jun 11 00:14:16.321: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3020-7596/csi-resizer-role-cfg Jun 11 00:14:16.325: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3020-7596/csi-snapshotter Jun 11 00:14:16.328: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3020 Jun 11 00:14:16.332: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3020 Jun 11 00:14:16.335: INFO: deleting *v1.Role: csi-mock-volumes-3020-7596/external-snapshotter-leaderelection-csi-mock-volumes-3020 Jun 11 00:14:16.339: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3020-7596/external-snapshotter-leaderelection Jun 11 00:14:16.342: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3020-7596/csi-mock Jun 11 00:14:16.346: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3020 Jun 11 00:14:16.349: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3020 Jun 11 00:14:16.352: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3020 Jun 11 00:14:16.356: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3020 Jun 11 00:14:16.359: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3020 Jun 11 00:14:16.362: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3020 Jun 11 00:14:16.366: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3020 Jun 11 00:14:16.369: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3020-7596/csi-mockplugin Jun 11 00:14:16.373: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3020 STEP: deleting the driver namespace: csi-mock-volumes-3020-7596 STEP: Waiting for namespaces [csi-mock-volumes-3020-7596] to vanish [AfterEach] 
[sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:00.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [90.906 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeUnstage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 two pods: should call NodeStage after previous NodeUnstage final error [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 Jun 11 00:14:08.223: while waiting for all CSI calls Unexpected error: <*errors.errorString | 0xc000bfbf60>: { s: "Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0)", } Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0) occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error","total":-1,"completed":20,"skipped":571,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:47.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:14:49.148: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-61620f52-ef09-4530-bcf5-fda8a0783ae8-backend && ln -s /tmp/local-volume-test-61620f52-ef09-4530-bcf5-fda8a0783ae8-backend /tmp/local-volume-test-61620f52-ef09-4530-bcf5-fda8a0783ae8] Namespace:persistent-local-volumes-test-5640 PodName:hostexec-node1-2kbrj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:14:49.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:14:49.240: INFO: Creating a PV followed by a PVC Jun 11 00:14:49.246: INFO: Waiting for PV local-pvpfxv7 to bind to PVC pvc-f7jkn Jun 11 00:14:49.246: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-f7jkn] to have phase Bound Jun 11 00:14:49.248: INFO: PersistentVolumeClaim pvc-f7jkn found but phase is Pending instead of Bound. Jun 11 00:14:51.251: INFO: PersistentVolumeClaim pvc-f7jkn found but phase is Pending instead of Bound. 
Jun 11 00:14:53.254: INFO: PersistentVolumeClaim pvc-f7jkn found but phase is Pending instead of Bound. Jun 11 00:14:55.258: INFO: PersistentVolumeClaim pvc-f7jkn found but phase is Pending instead of Bound. Jun 11 00:14:57.262: INFO: PersistentVolumeClaim pvc-f7jkn found and phase=Bound (8.015994364s) Jun 11 00:14:57.262: INFO: Waiting up to 3m0s for PersistentVolume local-pvpfxv7 to have phase Bound Jun 11 00:14:57.266: INFO: PersistentVolume local-pvpfxv7 found and phase=Bound (3.328597ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 11 00:15:03.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5640 exec pod-8a3cf0f7-180e-437f-8298-c8838b5769ca --namespace=persistent-local-volumes-test-5640 -- stat -c %g /mnt/volume1' Jun 11 00:15:03.572: INFO: stderr: "" Jun 11 00:15:03.572: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 11 00:15:09.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5640 exec pod-abf84523-91bc-44ea-8411-cffb52d74b70 --namespace=persistent-local-volumes-test-5640 -- stat -c %g /mnt/volume1' Jun 11 00:15:09.857: INFO: stderr: "" Jun 11 00:15:09.857: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-8a3cf0f7-180e-437f-8298-c8838b5769ca in namespace persistent-local-volumes-test-5640 STEP: Deleting second pod STEP: Deleting pod pod-abf84523-91bc-44ea-8411-cffb52d74b70 in namespace persistent-local-volumes-test-5640 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:15:09.867: INFO: Deleting PersistentVolumeClaim "pvc-f7jkn" Jun 11 00:15:09.871: INFO: Deleting PersistentVolume "local-pvpfxv7" STEP: Removing the test directory Jun 11 00:15:09.874: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-61620f52-ef09-4530-bcf5-fda8a0783ae8 && rm -r /tmp/local-volume-test-61620f52-ef09-4530-bcf5-fda8a0783ae8-backend] Namespace:persistent-local-volumes-test-5640 PodName:hostexec-node1-2kbrj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:15:09.874: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:09.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5640" for this suite. 
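The fsGroup checks above reduce to a simple contract: when pod.spec.securityContext.fsGroup is set, the kubelet applies that GID as group ownership on the volume's mount point, so `stat -c %g` on the mount returns the fsGroup value for every pod that mounts it. A minimal stand-alone reproduction follows; the pod name, namespace, and the use of an emptyDir instead of the test's local dir-link PV are illustrative assumptions, not the e2e fixture itself.

# Create a pod whose volume should be group-owned by GID 1234 (hypothetical names).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 1234
  containers:
  - name: busybox
    image: busybox:1.35
    command: ["sleep", "3600"]
    volumeMounts:
    - name: volume1
      mountPath: /mnt/volume1
  volumes:
  - name: volume1
    emptyDir: {}    # the e2e test uses a local PV; emptyDir keeps the sketch self-contained
EOF

kubectl wait --for=condition=Ready pod/fsgroup-demo --timeout=120s
# Should print "1234", matching the stdout captured in the log above.
kubectl exec fsgroup-demo -- stat -c %g /mnt/volume1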
• [SLOW TEST:22.883 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":20,"skipped":831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:12.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-2477 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:14:12.637: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2477-5826/csi-attacher Jun 11 00:14:12.640: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2477 Jun 11 00:14:12.640: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2477 Jun 11 00:14:12.643: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2477 Jun 11 00:14:12.646: INFO: creating *v1.Role: csi-mock-volumes-2477-5826/external-attacher-cfg-csi-mock-volumes-2477 Jun 11 00:14:12.649: INFO: creating *v1.RoleBinding: csi-mock-volumes-2477-5826/csi-attacher-role-cfg Jun 11 00:14:12.652: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2477-5826/csi-provisioner Jun 11 00:14:12.654: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2477 Jun 11 00:14:12.654: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2477 Jun 11 00:14:12.657: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2477 Jun 11 00:14:12.661: INFO: creating *v1.Role: csi-mock-volumes-2477-5826/external-provisioner-cfg-csi-mock-volumes-2477 Jun 11 00:14:12.664: INFO: creating *v1.RoleBinding: csi-mock-volumes-2477-5826/csi-provisioner-role-cfg Jun 11 00:14:12.667: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2477-5826/csi-resizer Jun 11 00:14:12.670: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2477 Jun 11 00:14:12.670: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2477 Jun 11 00:14:12.672: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2477 Jun 11 00:14:12.675: INFO: creating *v1.Role: csi-mock-volumes-2477-5826/external-resizer-cfg-csi-mock-volumes-2477 Jun 11 00:14:12.677: INFO: creating 
*v1.RoleBinding: csi-mock-volumes-2477-5826/csi-resizer-role-cfg Jun 11 00:14:12.680: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2477-5826/csi-snapshotter Jun 11 00:14:12.682: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2477 Jun 11 00:14:12.682: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2477 Jun 11 00:14:12.685: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2477 Jun 11 00:14:12.687: INFO: creating *v1.Role: csi-mock-volumes-2477-5826/external-snapshotter-leaderelection-csi-mock-volumes-2477 Jun 11 00:14:12.691: INFO: creating *v1.RoleBinding: csi-mock-volumes-2477-5826/external-snapshotter-leaderelection Jun 11 00:14:12.693: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2477-5826/csi-mock Jun 11 00:14:12.695: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2477 Jun 11 00:14:12.698: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2477 Jun 11 00:14:12.701: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2477 Jun 11 00:14:12.714: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2477 Jun 11 00:14:12.717: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2477 Jun 11 00:14:12.720: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2477 Jun 11 00:14:12.723: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2477 Jun 11 00:14:12.725: INFO: creating *v1.StatefulSet: csi-mock-volumes-2477-5826/csi-mockplugin Jun 11 00:14:12.729: INFO: creating *v1.StatefulSet: csi-mock-volumes-2477-5826/csi-mockplugin-attacher Jun 11 00:14:12.733: INFO: creating *v1.StatefulSet: csi-mock-volumes-2477-5826/csi-mockplugin-resizer Jun 11 00:14:12.736: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2477 to register on node node1 STEP: Creating pod Jun 11 00:14:22.252: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:14:22.257: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-hfzmw] to have phase Bound Jun 11 00:14:22.259: INFO: PersistentVolumeClaim pvc-hfzmw found but phase is Pending instead of Bound. 
Jun 11 00:14:24.266: INFO: PersistentVolumeClaim pvc-hfzmw found and phase=Bound (2.00838633s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Jun 11 00:14:36.304: INFO: Deleting pod "pvc-volume-tester-rn2lp" in namespace "csi-mock-volumes-2477" Jun 11 00:14:36.308: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rn2lp" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-rn2lp Jun 11 00:14:48.331: INFO: Deleting pod "pvc-volume-tester-rn2lp" in namespace "csi-mock-volumes-2477" STEP: Deleting pod pvc-volume-tester-s4hsv Jun 11 00:14:48.333: INFO: Deleting pod "pvc-volume-tester-s4hsv" in namespace "csi-mock-volumes-2477" Jun 11 00:14:48.337: INFO: Wait up to 5m0s for pod "pvc-volume-tester-s4hsv" to be fully deleted STEP: Deleting claim pvc-hfzmw Jun 11 00:14:50.353: INFO: Waiting up to 2m0s for PersistentVolume pvc-23669001-42a1-4aa0-b266-590c00ee0b2f to get deleted Jun 11 00:14:50.356: INFO: PersistentVolume pvc-23669001-42a1-4aa0-b266-590c00ee0b2f found and phase=Bound (2.207572ms) Jun 11 00:14:52.361: INFO: PersistentVolume pvc-23669001-42a1-4aa0-b266-590c00ee0b2f was removed STEP: Deleting storageclass csi-mock-volumes-2477-scjkclj STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2477 STEP: Waiting for namespaces [csi-mock-volumes-2477] to vanish STEP: uninstalling csi mock driver Jun 11 00:14:58.376: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2477-5826/csi-attacher Jun 11 00:14:58.380: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2477 Jun 11 00:14:58.383: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2477 Jun 11 00:14:58.386: INFO: deleting *v1.Role: csi-mock-volumes-2477-5826/external-attacher-cfg-csi-mock-volumes-2477 Jun 11 00:14:58.390: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2477-5826/csi-attacher-role-cfg Jun 11 00:14:58.393: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2477-5826/csi-provisioner Jun 11 00:14:58.396: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2477 Jun 11 00:14:58.400: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2477 Jun 11 00:14:58.403: INFO: deleting *v1.Role: csi-mock-volumes-2477-5826/external-provisioner-cfg-csi-mock-volumes-2477 Jun 11 00:14:58.406: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2477-5826/csi-provisioner-role-cfg Jun 11 00:14:58.410: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2477-5826/csi-resizer Jun 11 00:14:58.413: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2477 Jun 11 00:14:58.417: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2477 Jun 11 00:14:58.421: INFO: deleting *v1.Role: csi-mock-volumes-2477-5826/external-resizer-cfg-csi-mock-volumes-2477 Jun 11 00:14:58.424: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2477-5826/csi-resizer-role-cfg Jun 11 00:14:58.427: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2477-5826/csi-snapshotter Jun 11 00:14:58.431: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2477 Jun 11 00:14:58.434: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2477 Jun 11 00:14:58.437: INFO: deleting *v1.Role: csi-mock-volumes-2477-5826/external-snapshotter-leaderelection-csi-mock-volumes-2477 Jun 11 00:14:58.441: INFO: 
deleting *v1.RoleBinding: csi-mock-volumes-2477-5826/external-snapshotter-leaderelection Jun 11 00:14:58.445: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2477-5826/csi-mock Jun 11 00:14:58.448: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2477 Jun 11 00:14:58.451: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2477 Jun 11 00:14:58.455: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2477 Jun 11 00:14:58.459: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2477 Jun 11 00:14:58.462: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2477 Jun 11 00:14:58.465: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2477 Jun 11 00:14:58.469: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2477 Jun 11 00:14:58.473: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2477-5826/csi-mockplugin Jun 11 00:14:58.477: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2477-5826/csi-mockplugin-attacher Jun 11 00:14:58.480: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2477-5826/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-2477-5826 STEP: Waiting for namespaces [csi-mock-volumes-2477-5826] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:10.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:57.930 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":10,"skipped":201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:10.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Jun 11 00:15:10.693: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:10.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-1794" for this suite. 
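The CSI volume-expansion case above (attach=on, nodeExpansion=on) follows the pattern of raising the PVC's storage request, letting the external resizer expand the backing volume, and then deleting the pod so the kubelet can finish the node-side filesystem expansion on the next mount. A hand-driven sketch of the same flow is below; the PVC and pod names, the target size, and pod-using-pvc.yaml are assumptions, and the StorageClass must have allowVolumeExpansion: true.

# Ask for a larger size on an existing, bound PVC (hypothetical name pvc-demo).
kubectl patch pvc pvc-demo --type=merge \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

# Inspect the resize conditions; FileSystemResizePending means node expansion
# is still outstanding and needs the consuming pod to be restarted.
kubectl get pvc pvc-demo -o jsonpath='{.status.conditions}{"\n"}'

# Delete and recreate the consuming pod so the kubelet performs NodeExpandVolume
# on the next mount (pod-using-pvc.yaml is assumed to reference pvc-demo).
kubectl delete pod pvc-volume-tester
kubectl apply -f pod-using-pvc.yaml

# The PVC's reported capacity should eventually reflect the new size.
kubectl get pvc pvc-demo -o jsonpath='{.status.capacity.storage}{"\n"}'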
S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:90 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:10.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 STEP: Creating configMap with name projected-configmap-test-volume-9a39bec9-6519-42ee-80be-898318572896 STEP: Creating a pod to test consume configMaps Jun 11 00:15:10.145: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2c165b3a-0d82-4ce3-a6c8-ed4731e29f2a" in namespace "projected-934" to be "Succeeded or Failed" Jun 11 00:15:10.147: INFO: Pod "pod-projected-configmaps-2c165b3a-0d82-4ce3-a6c8-ed4731e29f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056617ms Jun 11 00:15:12.151: INFO: Pod "pod-projected-configmaps-2c165b3a-0d82-4ce3-a6c8-ed4731e29f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006241714s Jun 11 00:15:14.156: INFO: Pod "pod-projected-configmaps-2c165b3a-0d82-4ce3-a6c8-ed4731e29f2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010866005s STEP: Saw pod success Jun 11 00:15:14.156: INFO: Pod "pod-projected-configmaps-2c165b3a-0d82-4ce3-a6c8-ed4731e29f2a" satisfied condition "Succeeded or Failed" Jun 11 00:15:14.159: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-2c165b3a-0d82-4ce3-a6c8-ed4731e29f2a container agnhost-container: STEP: delete the pod Jun 11 00:15:14.176: INFO: Waiting for pod pod-projected-configmaps-2c165b3a-0d82-4ce3-a6c8-ed4731e29f2a to disappear Jun 11 00:15:14.178: INFO: Pod pod-projected-configmaps-2c165b3a-0d82-4ce3-a6c8-ed4731e29f2a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:14.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-934" for this suite. 
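The projected-configMap test above checks that a configMap delivered through a projected volume honours defaultMode together with a non-root securityContext and fsGroup. A minimal sketch of the kind of pod it builds is shown here; the configMap name, mode, and UID/GID values are assumptions rather than the exact fixture.

kubectl create configmap demo-config --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  securityContext:
    runAsUser: 1000    # non-root
    fsGroup: 1234
  containers:
  - name: reader
    image: busybox:1.35
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  restartPolicy: Never
  volumes:
  - name: config
    projected:
      defaultMode: 0440    # file mode the e2e assertion inspects
      sources:
      - configMap:
          name: demo-config
EOF

With mode 0440 the non-root process can still read the file because fsGroup 1234 is applied as the file's group, which is exactly the combination the test exercises.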
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":21,"skipped":889,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:14.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 STEP: Creating configMap with name projected-configmap-test-volume-map-0414c090-8ea4-41de-bb5f-5324163979ab STEP: Creating a pod to test consume configMaps Jun 11 00:15:14.240: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ecb2681c-7e21-486e-9699-9eb6540192ad" in namespace "projected-9567" to be "Succeeded or Failed" Jun 11 00:15:14.247: INFO: Pod "pod-projected-configmaps-ecb2681c-7e21-486e-9699-9eb6540192ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446288ms Jun 11 00:15:16.250: INFO: Pod "pod-projected-configmaps-ecb2681c-7e21-486e-9699-9eb6540192ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010094309s Jun 11 00:15:18.254: INFO: Pod "pod-projected-configmaps-ecb2681c-7e21-486e-9699-9eb6540192ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013868981s STEP: Saw pod success Jun 11 00:15:18.254: INFO: Pod "pod-projected-configmaps-ecb2681c-7e21-486e-9699-9eb6540192ad" satisfied condition "Succeeded or Failed" Jun 11 00:15:18.257: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-ecb2681c-7e21-486e-9699-9eb6540192ad container agnhost-container: STEP: delete the pod Jun 11 00:15:18.275: INFO: Waiting for pod pod-projected-configmaps-ecb2681c-7e21-486e-9699-9eb6540192ad to disappear Jun 11 00:15:18.278: INFO: Pod pod-projected-configmaps-ecb2681c-7e21-486e-9699-9eb6540192ad no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:18.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9567" for this suite. 
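The "with mappings" variant above differs from the previous case only in that individual configMap keys are remapped to custom paths inside the projected volume via items. A hedged sketch of that mapping, with illustrative key and path names:

kubectl create configmap demo-map-config --from-literal=data-2=value-2

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-mappings-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1234
  containers:
  - name: reader
    image: busybox:1.35
    command: ["cat", "/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  restartPolicy: Never
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-map-config
          items:
          - key: data-2
            path: path/to/data-2    # key is exposed at this relative path instead of its own name
EOF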
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":22,"skipped":896,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:00.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 STEP: creating a Gluster DP server Pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass STEP: creating a claim object with a suffix for gluster dynamic provisioner Jun 11 00:15:12.578: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-6146 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-6146-glusterdptestzc4cq,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Jun 11 00:15:12.583: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-hdw92] to have phase Bound Jun 11 00:15:12.585: INFO: PersistentVolumeClaim pvc-hdw92 found but phase is Pending instead of Bound. Jun 11 00:15:14.590: INFO: PersistentVolumeClaim pvc-hdw92 found and phase=Bound (2.00706402s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-6146"/"pvc-hdw92" STEP: deleting the claim's PV "pvc-6aeaa061-bb2c-4338-96ab-4a7a6ab86409" Jun 11 00:15:14.598: INFO: Waiting up to 20m0s for PersistentVolume pvc-6aeaa061-bb2c-4338-96ab-4a7a6ab86409 to get deleted Jun 11 00:15:14.602: INFO: PersistentVolume pvc-6aeaa061-bb2c-4338-96ab-4a7a6ab86409 found and phase=Bound (3.205753ms) Jun 11 00:15:19.610: INFO: PersistentVolume pvc-6aeaa061-bb2c-4338-96ab-4a7a6ab86409 was removed Jun 11 00:15:19.610: INFO: deleting claim "volume-provisioning-6146"/"pvc-hdw92" Jun 11 00:15:19.612: INFO: deleting storage class volume-provisioning-6146-glusterdptestzc4cq [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:19.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-6146" for this suite. 
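The GlusterDynamicProvisioner test above follows the generic dynamic-provisioning pattern: create a StorageClass, create a PVC referencing it, wait for Bound, then delete the claim and confirm the dynamically provisioned PV is reclaimed. A provisioner-agnostic sketch is below; the provisioner name is a placeholder (the e2e test deploys its own Gluster server pod and class), and the size and object names are assumptions.

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: demo-sc
provisioner: example.com/demo-provisioner    # placeholder; use a provisioner that exists in your cluster
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: demo-sc
  resources:
    requests:
      storage: 2Gi
EOF

# Poll for phase Bound, mirroring the "have phase Bound" loop in the log.
until [ "$(kubectl get pvc demo-claim -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 2; done

# Deleting the claim should remove the dynamically provisioned PV (reclaimPolicy: Delete).
kubectl delete pvc demo-claim
kubectl get pv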
• [SLOW TEST:19.106 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 GlusterDynamicProvisioner /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:793 should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":21,"skipped":628,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:18.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:15:20.402: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-8e55264d-1211-4440-a225-7328213dda39-backend && ln -s /tmp/local-volume-test-8e55264d-1211-4440-a225-7328213dda39-backend /tmp/local-volume-test-8e55264d-1211-4440-a225-7328213dda39] Namespace:persistent-local-volumes-test-4257 PodName:hostexec-node1-mtfr7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:15:20.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:15:20.947: INFO: Creating a PV followed by a PVC Jun 11 00:15:20.952: INFO: Waiting for PV local-pvb79mb to bind to PVC pvc-mctpl Jun 11 00:15:20.952: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-mctpl] to have phase Bound Jun 11 00:15:20.955: INFO: PersistentVolumeClaim pvc-mctpl found but phase is Pending instead of Bound. Jun 11 00:15:22.959: INFO: PersistentVolumeClaim pvc-mctpl found but phase is Pending instead of Bound. Jun 11 00:15:24.966: INFO: PersistentVolumeClaim pvc-mctpl found but phase is Pending instead of Bound. 
Jun 11 00:15:26.971: INFO: PersistentVolumeClaim pvc-mctpl found and phase=Bound (6.018999177s) Jun 11 00:15:26.971: INFO: Waiting up to 3m0s for PersistentVolume local-pvb79mb to have phase Bound Jun 11 00:15:26.973: INFO: PersistentVolume local-pvb79mb found and phase=Bound (1.93513ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:15:33.001: INFO: pod "pod-8817d3ff-fb45-4986-99f1-aa3d1b97775c" created on Node "node1" STEP: Writing in pod1 Jun 11 00:15:33.001: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4257 PodName:pod-8817d3ff-fb45-4986-99f1-aa3d1b97775c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:33.001: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:33.089: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 11 00:15:33.089: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4257 PodName:pod-8817d3ff-fb45-4986-99f1-aa3d1b97775c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:33.089: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:33.166: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-8817d3ff-fb45-4986-99f1-aa3d1b97775c in namespace persistent-local-volumes-test-4257 STEP: Creating pod2 STEP: Creating a pod Jun 11 00:15:37.197: INFO: pod "pod-c7a39d47-ba64-4caa-986e-251acab0ac2e" created on Node "node1" STEP: Reading in pod2 Jun 11 00:15:37.197: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4257 PodName:pod-c7a39d47-ba64-4caa-986e-251acab0ac2e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:37.197: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:37.282: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-c7a39d47-ba64-4caa-986e-251acab0ac2e in namespace persistent-local-volumes-test-4257 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:15:37.288: INFO: Deleting PersistentVolumeClaim "pvc-mctpl" Jun 11 00:15:37.293: INFO: Deleting PersistentVolume "local-pvb79mb" STEP: Removing the test directory Jun 11 00:15:37.297: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8e55264d-1211-4440-a225-7328213dda39 && rm -r /tmp/local-volume-test-8e55264d-1211-4440-a225-7328213dda39-backend] Namespace:persistent-local-volumes-test-4257 PodName:hostexec-node1-mtfr7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:15:37.297: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:37.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4257" for this suite. • [SLOW TEST:19.047 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":23,"skipped":926,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:37.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 11 00:15:37.443: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:37.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-782" for this suite. 
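The "write from pod1 and read from pod2" case summarized above is two pods mounting the same claim one after the other and agreeing on file content. With any RWO claim already bound, the same check can be run directly; pod-writer and pod-reader are assumed, hypothetical pods that both mount that claim at /mnt/volume1.

# pod1 writes a marker file onto the shared volume, then is deleted.
kubectl exec pod-writer -- sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
kubectl delete pod pod-writer

# pod2, created afterwards against the same PVC, must observe the same content.
kubectl exec pod-reader -- cat /mnt/volume1/test-file    # expected output: test-file-content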
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:37.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] using 4 containers and 1 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Jun 11 00:15:37.566: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:37.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-125" for this suite. 
S [SKIPPING] [0.038 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 4 containers and 1 PDs [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:255 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:19.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:15:23.728: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-75ec1490-c90b-4a32-a342-5f61791e1e08 && mount --bind /tmp/local-volume-test-75ec1490-c90b-4a32-a342-5f61791e1e08 /tmp/local-volume-test-75ec1490-c90b-4a32-a342-5f61791e1e08] Namespace:persistent-local-volumes-test-4916 PodName:hostexec-node1-d4tk8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:15:23.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:15:23.819: INFO: Creating a PV followed by a PVC Jun 11 00:15:23.826: INFO: Waiting for PV local-pv2jhpc to bind to PVC pvc-26xrn Jun 11 00:15:23.826: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-26xrn] to have phase Bound Jun 11 00:15:23.828: INFO: PersistentVolumeClaim pvc-26xrn found but phase is Pending instead of Bound. Jun 11 00:15:25.834: INFO: PersistentVolumeClaim pvc-26xrn found but phase is Pending instead of Bound. 
Jun 11 00:15:27.838: INFO: PersistentVolumeClaim pvc-26xrn found and phase=Bound (4.011924328s) Jun 11 00:15:27.838: INFO: Waiting up to 3m0s for PersistentVolume local-pv2jhpc to have phase Bound Jun 11 00:15:27.841: INFO: PersistentVolume local-pv2jhpc found and phase=Bound (2.733013ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 11 00:15:33.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4916 exec pod-0f678720-9934-402d-9473-3f8400175a95 --namespace=persistent-local-volumes-test-4916 -- stat -c %g /mnt/volume1' Jun 11 00:15:34.171: INFO: stderr: "" Jun 11 00:15:34.171: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 11 00:15:38.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4916 exec pod-d35e8f06-17f7-47ae-8d61-fbf2aecf695d --namespace=persistent-local-volumes-test-4916 -- stat -c %g /mnt/volume1' Jun 11 00:15:38.456: INFO: stderr: "" Jun 11 00:15:38.456: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-0f678720-9934-402d-9473-3f8400175a95 in namespace persistent-local-volumes-test-4916 STEP: Deleting second pod STEP: Deleting pod pod-d35e8f06-17f7-47ae-8d61-fbf2aecf695d in namespace persistent-local-volumes-test-4916 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:15:38.465: INFO: Deleting PersistentVolumeClaim "pvc-26xrn" Jun 11 00:15:38.469: INFO: Deleting PersistentVolume "local-pv2jhpc" STEP: Removing the test directory Jun 11 00:15:38.473: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-75ec1490-c90b-4a32-a342-5f61791e1e08 && rm -r /tmp/local-volume-test-75ec1490-c90b-4a32-a342-5f61791e1e08] Namespace:persistent-local-volumes-test-4916 PodName:hostexec-node1-d4tk8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:15:38.473: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:38.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4916" for this suite. 
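For the dir-bindmounted volume type above, the backing directory is prepared on the node itself with a bind mount (the e2e test runs those commands through a hostexec pod with nsenter), and a local PersistentVolume is then pointed at it. A sketch of the node-side prep plus a matching PV/PVC pair follows; the paths, capacity, class name, and node name are assumptions.

# Run as root on the chosen node:
mkdir /tmp/local-volume-demo
mount --bind /tmp/local-volume-demo /tmp/local-volume-demo

# A local PV must pin itself to that node via nodeAffinity.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/local-volume-demo
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]    # assumed node name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
EOF

# Cleanup mirrors the AfterEach in the log:
#   umount /tmp/local-volume-demo && rm -r /tmp/local-volume-demo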
• [SLOW TEST:18.977 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":22,"skipped":651,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:38.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 STEP: Creating projection with secret that has name projected-secret-test-3cf37b1d-2a6f-4a5a-bf62-b2aa88a3246f STEP: Creating a pod to test consume secrets Jun 11 00:15:38.717: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd6ad024-4cc7-45ed-95cf-aa93b14e6e1f" in namespace "projected-6878" to be "Succeeded or Failed" Jun 11 00:15:38.722: INFO: Pod "pod-projected-secrets-cd6ad024-4cc7-45ed-95cf-aa93b14e6e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.295743ms Jun 11 00:15:40.726: INFO: Pod "pod-projected-secrets-cd6ad024-4cc7-45ed-95cf-aa93b14e6e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009242152s Jun 11 00:15:42.731: INFO: Pod "pod-projected-secrets-cd6ad024-4cc7-45ed-95cf-aa93b14e6e1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013990923s STEP: Saw pod success Jun 11 00:15:42.731: INFO: Pod "pod-projected-secrets-cd6ad024-4cc7-45ed-95cf-aa93b14e6e1f" satisfied condition "Succeeded or Failed" Jun 11 00:15:42.735: INFO: Trying to get logs from node node2 pod pod-projected-secrets-cd6ad024-4cc7-45ed-95cf-aa93b14e6e1f container projected-secret-volume-test: STEP: delete the pod Jun 11 00:15:42.755: INFO: Waiting for pod pod-projected-secrets-cd6ad024-4cc7-45ed-95cf-aa93b14e6e1f to disappear Jun 11 00:15:42.757: INFO: Pod pod-projected-secrets-cd6ad024-4cc7-45ed-95cf-aa93b14e6e1f no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:42.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6878" for this suite. STEP: Destroying namespace "secret-namespace-8065" for this suite. 
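The projected-secret steps above (a secret with the same name also existing in a second namespace, secret-namespace-8065) can be reproduced with plain kubectl. The sketch below is an approximation, not the suite's own manifests, and uses placeholder names (projected-demo, projected-demo-other, demo-secret, busybox:1.34) instead of the generated ones in the log; the point it demonstrates is that a projected secret volume only ever resolves the secret name within the pod's own namespace.

# Same secret name in two namespaces, different payloads.
kubectl create namespace projected-demo
kubectl create namespace projected-demo-other
kubectl -n projected-demo create secret generic demo-secret \
  --from-literal=data-1=value-from-own-namespace
kubectl -n projected-demo-other create secret generic demo-secret \
  --from-literal=data-1=value-from-other-namespace

# Pod in the first namespace consuming the secret through a projected volume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
  namespace: projected-demo
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.34            # assumption; the suite uses its own mount-test image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: demo-secret
EOF

kubectl -n projected-demo wait --for=condition=Ready pod/projected-secret-demo --timeout=2m
# The mounted key comes from the pod's own namespace, regardless of the same-named secret elsewhere:
kubectl -n projected-demo exec projected-secret-demo -- cat /etc/projected/data-1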
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":23,"skipped":653,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:42.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Jun 11 00:15:42.834: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-3316" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:86 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:42.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 11 00:15:42.910: INFO: The status of Pod test-hostpath-type-smjbf is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:15:44.915: INFO: The status of Pod test-hostpath-type-smjbf is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:15:46.913: INFO: The status of Pod test-hostpath-type-smjbf is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76 [AfterEach] [sig-storage] HostPathType Directory [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:54.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-6494" for this suite. • [SLOW TEST:12.104 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory","total":-1,"completed":24,"skipped":679,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:56.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 STEP: Building a driver namespace object, basename csi-mock-volumes-4324 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:14:56.132: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4324-2127/csi-attacher Jun 11 00:14:56.135: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4324 Jun 11 00:14:56.135: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4324 Jun 11 00:14:56.138: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4324 Jun 11 00:14:56.142: INFO: creating *v1.Role: csi-mock-volumes-4324-2127/external-attacher-cfg-csi-mock-volumes-4324 Jun 11 00:14:56.146: INFO: creating *v1.RoleBinding: csi-mock-volumes-4324-2127/csi-attacher-role-cfg Jun 11 00:14:56.148: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4324-2127/csi-provisioner Jun 11 00:14:56.151: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4324 Jun 11 00:14:56.151: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4324 Jun 11 00:14:56.153: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4324 Jun 11 00:14:56.156: INFO: creating *v1.Role: csi-mock-volumes-4324-2127/external-provisioner-cfg-csi-mock-volumes-4324 Jun 11 00:14:56.159: INFO: creating *v1.RoleBinding: csi-mock-volumes-4324-2127/csi-provisioner-role-cfg Jun 11 00:14:56.161: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4324-2127/csi-resizer Jun 11 00:14:56.164: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4324 Jun 11 00:14:56.164: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4324 Jun 11 00:14:56.167: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4324 Jun 11 00:14:56.170: INFO: creating *v1.Role: 
csi-mock-volumes-4324-2127/external-resizer-cfg-csi-mock-volumes-4324 Jun 11 00:14:56.172: INFO: creating *v1.RoleBinding: csi-mock-volumes-4324-2127/csi-resizer-role-cfg Jun 11 00:14:56.175: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4324-2127/csi-snapshotter Jun 11 00:14:56.178: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4324 Jun 11 00:14:56.178: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4324 Jun 11 00:14:56.181: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4324 Jun 11 00:14:56.183: INFO: creating *v1.Role: csi-mock-volumes-4324-2127/external-snapshotter-leaderelection-csi-mock-volumes-4324 Jun 11 00:14:56.186: INFO: creating *v1.RoleBinding: csi-mock-volumes-4324-2127/external-snapshotter-leaderelection Jun 11 00:14:56.189: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4324-2127/csi-mock Jun 11 00:14:56.191: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4324 Jun 11 00:14:56.194: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4324 Jun 11 00:14:56.197: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4324 Jun 11 00:14:56.199: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4324 Jun 11 00:14:56.202: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4324 Jun 11 00:14:56.204: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4324 Jun 11 00:14:56.207: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4324 Jun 11 00:14:56.209: INFO: creating *v1.StatefulSet: csi-mock-volumes-4324-2127/csi-mockplugin Jun 11 00:14:56.213: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4324 Jun 11 00:14:56.216: INFO: creating *v1.StatefulSet: csi-mock-volumes-4324-2127/csi-mockplugin-attacher Jun 11 00:14:56.219: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4324" Jun 11 00:14:56.221: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4324 to register on node node1 STEP: Creating pod Jun 11 00:15:10.744: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Jun 11 00:15:22.772: INFO: Deleting pod "pvc-volume-tester-6mxd9" in namespace "csi-mock-volumes-4324" Jun 11 00:15:22.779: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6mxd9" to be fully deleted STEP: Deleting pod pvc-volume-tester-6mxd9 Jun 11 00:15:38.784: INFO: Deleting pod "pvc-volume-tester-6mxd9" in namespace "csi-mock-volumes-4324" STEP: Deleting claim pvc-k8cbp Jun 11 00:15:38.791: INFO: Waiting up to 2m0s for PersistentVolume pvc-7a217782-0bd9-4e93-a0da-1467d3995d75 to get deleted Jun 11 00:15:38.793: INFO: PersistentVolume pvc-7a217782-0bd9-4e93-a0da-1467d3995d75 found and phase=Bound (1.669646ms) Jun 11 00:15:40.797: INFO: PersistentVolume pvc-7a217782-0bd9-4e93-a0da-1467d3995d75 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-4324 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4324 STEP: Waiting for namespaces [csi-mock-volumes-4324] to vanish STEP: uninstalling csi mock driver Jun 11 00:15:46.810: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4324-2127/csi-attacher Jun 11 00:15:46.814: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4324 Jun 11 00:15:46.817: INFO: 
deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4324 Jun 11 00:15:46.821: INFO: deleting *v1.Role: csi-mock-volumes-4324-2127/external-attacher-cfg-csi-mock-volumes-4324 Jun 11 00:15:46.824: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4324-2127/csi-attacher-role-cfg Jun 11 00:15:46.828: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4324-2127/csi-provisioner Jun 11 00:15:46.831: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4324 Jun 11 00:15:46.834: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4324 Jun 11 00:15:46.838: INFO: deleting *v1.Role: csi-mock-volumes-4324-2127/external-provisioner-cfg-csi-mock-volumes-4324 Jun 11 00:15:46.841: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4324-2127/csi-provisioner-role-cfg Jun 11 00:15:46.844: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4324-2127/csi-resizer Jun 11 00:15:46.847: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4324 Jun 11 00:15:46.850: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4324 Jun 11 00:15:46.859: INFO: deleting *v1.Role: csi-mock-volumes-4324-2127/external-resizer-cfg-csi-mock-volumes-4324 Jun 11 00:15:46.868: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4324-2127/csi-resizer-role-cfg Jun 11 00:15:46.876: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4324-2127/csi-snapshotter Jun 11 00:15:46.879: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4324 Jun 11 00:15:46.883: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4324 Jun 11 00:15:46.886: INFO: deleting *v1.Role: csi-mock-volumes-4324-2127/external-snapshotter-leaderelection-csi-mock-volumes-4324 Jun 11 00:15:46.890: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4324-2127/external-snapshotter-leaderelection Jun 11 00:15:46.893: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4324-2127/csi-mock Jun 11 00:15:46.896: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4324 Jun 11 00:15:46.900: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4324 Jun 11 00:15:46.903: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4324 Jun 11 00:15:46.906: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4324 Jun 11 00:15:46.911: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4324 Jun 11 00:15:46.914: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4324 Jun 11 00:15:46.917: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4324 Jun 11 00:15:46.920: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4324-2127/csi-mockplugin Jun 11 00:15:46.923: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4324 Jun 11 00:15:46.926: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4324-2127/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4324-2127 STEP: Waiting for namespaces [csi-mock-volumes-4324-2127] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:15:58.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:62.886 seconds] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256 CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":20,"skipped":614,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:55.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:42 [It] should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 STEP: starting configmap-client STEP: Checking that text file contents are perfect. Jun 11 00:16:01.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=volume-4594 exec configmap-client --namespace=volume-4594 -- cat /opt/0/firstfile' Jun 11 00:16:01.411: INFO: stderr: "" Jun 11 00:16:01.411: INFO: stdout: "this is the first file" Jun 11 00:16:01.411: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-4594 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:16:01.411: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:16:01.497: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-4594 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:16:01.497: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:16:01.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=volume-4594 exec configmap-client --namespace=volume-4594 -- cat /opt/1/secondfile' Jun 11 00:16:01.818: INFO: stderr: "" Jun 11 00:16:01.818: INFO: stdout: "this is the second file" Jun 11 00:16:01.818: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/1] Namespace:volume-4594 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:16:01.819: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:16:01.895: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/1] Namespace:volume-4594 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:16:01.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod configmap-client in namespace volume-4594 Jun 11 00:16:01.974: INFO: Waiting for pod configmap-client to disappear Jun 11 00:16:01.977: INFO: Pod configmap-client still exists Jun 11 00:16:03.978: INFO: Waiting for pod 
configmap-client to disappear Jun 11 00:16:03.981: INFO: Pod configmap-client still exists Jun 11 00:16:05.978: INFO: Waiting for pod configmap-client to disappear Jun 11 00:16:05.981: INFO: Pod configmap-client no longer exists [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:05.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-4594" for this suite. • [SLOW TEST:10.848 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 ------------------------------ {"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":25,"skipped":755,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:58.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 11 00:15:59.000: INFO: The status of Pod test-hostpath-type-v47dj is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:16:01.004: INFO: The status of Pod test-hostpath-type-v47dj is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:16:03.002: INFO: The status of Pod test-hostpath-type-v47dj is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:09.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-9994" for this suite. 
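Both HostPathType Directory specs in this stretch of the log (mounting 'adir' succeeds with HostPathDirectory, fails with HostPathBlockDev) come down to the hostPath volume's type field, which kubelet validates before setting up the mount. Below is a minimal sketch with placeholder names (hostpath-type-demo, /var/tmp/adir, busybox:1.34); the quoted event text is a paraphrase of kubelet's usual message, not necessarily the exact string the test matches.

# DirectoryOrCreate creates the path if missing; Directory requires an existing directory;
# BlockDevice requires a block device node at that path and is expected to fail for 'adir'.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-demo
spec:
  containers:
  - name: c
    image: busybox:1.34            # assumption; the type check happens in kubelet before start
    command: ["sleep", "3600"]
    volumeMounts:
    - name: adir
      mountPath: /mnt/test
  volumes:
  - name: adir
    hostPath:
      path: /var/tmp/adir
      type: DirectoryOrCreate      # flip to Directory (works once created) or BlockDevice (fails)
EOF

# With type: BlockDevice the pod stays in ContainerCreating and kubelet emits a FailedMount
# event along the lines of "hostPath type check failed: /var/tmp/adir is not a block device",
# which is the kind of event the failing spec above waits for.
kubectl describe pod hostpath-type-demo
kubectl get events --field-selector involvedObject.name=hostpath-type-demo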
• [SLOW TEST:10.102 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev","total":-1,"completed":21,"skipped":619,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:16:06.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:16:08.078: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9dd31edd-d926-4a54-b14a-74c4f7c177d8] Namespace:persistent-local-volumes-test-1062 PodName:hostexec-node1-qppmd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:16:08.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:16:08.168: INFO: Creating a PV followed by a PVC Jun 11 00:16:08.176: INFO: Waiting for PV local-pvt4xgt to bind to PVC pvc-98lb5 Jun 11 00:16:08.176: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-98lb5] to have phase Bound Jun 11 00:16:08.178: INFO: PersistentVolumeClaim pvc-98lb5 found but phase is Pending instead of Bound. 
Jun 11 00:16:10.181: INFO: PersistentVolumeClaim pvc-98lb5 found and phase=Bound (2.005644279s) Jun 11 00:16:10.181: INFO: Waiting up to 3m0s for PersistentVolume local-pvt4xgt to have phase Bound Jun 11 00:16:10.184: INFO: PersistentVolume local-pvt4xgt found and phase=Bound (2.469138ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:16:14.211: INFO: pod "pod-f71519a4-f77d-48d5-b849-56b437f4cdef" created on Node "node1" STEP: Writing in pod1 Jun 11 00:16:14.211: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1062 PodName:pod-f71519a4-f77d-48d5-b849-56b437f4cdef ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:16:14.211: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:16:14.299: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 11 00:16:14.299: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1062 PodName:pod-f71519a4-f77d-48d5-b849-56b437f4cdef ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:16:14.299: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:16:14.404: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 11 00:16:14.404: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-9dd31edd-d926-4a54-b14a-74c4f7c177d8 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1062 PodName:pod-f71519a4-f77d-48d5-b849-56b437f4cdef ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:16:14.404: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:16:14.485: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-9dd31edd-d926-4a54-b14a-74c4f7c177d8 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-f71519a4-f77d-48d5-b849-56b437f4cdef in namespace persistent-local-volumes-test-1062 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:16:14.491: INFO: Deleting PersistentVolumeClaim "pvc-98lb5" Jun 11 00:16:14.494: INFO: Deleting PersistentVolume "local-pvt4xgt" STEP: Removing the test directory Jun 11 00:16:14.497: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9dd31edd-d926-4a54-b14a-74c4f7c177d8] Namespace:persistent-local-volumes-test-1062 PodName:hostexec-node1-qppmd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:16:14.497: INFO: >>> kubeConfig: /root/.kube/config 
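The write/read round-trip above is driven by the framework's ExecWithOptions helper, which is roughly what a kubectl exec of the same shell command does. Using the pod and namespace names from this run (they are removed again by the cleanup around this point), the equivalent by hand would look like:

kubectl -n persistent-local-volumes-test-1062 exec pod-f71519a4-f77d-48d5-b849-56b437f4cdef -- \
  /bin/sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
kubectl -n persistent-local-volumes-test-1062 exec pod-f71519a4-f77d-48d5-b849-56b437f4cdef -- \
  /bin/sh -c 'cat /mnt/volume1/test-file'    # prints test-file-content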
[AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:14.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1062" for this suite. • [SLOW TEST:8.605 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":26,"skipped":768,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:16:14.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-limits-on-node STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:35 Jun 11 00:16:14.701: INFO: Only supported for providers [aws gce gke] (not local) [AfterEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:14.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-limits-on-node-4196" for this suite. 
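The volume-limits spec above is skipped because the local provider does not enforce an attach limit; on a provider that does, the numbers it would verify are readable straight from the API. A short sketch, assuming only kubectl access (the CSINode column stays empty unless an installed CSI driver actually reports a limit):

# In-tree attach limits appear as allocatable resources on the Node object
# (keys such as attachable-volumes-aws-ebs on cloud providers):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable}{"\n"}{end}'

# CSI drivers report their per-node limit on the CSINode object instead:
kubectl get csinode -o custom-columns='NODE:.metadata.name,DRIVER:.spec.drivers[*].name,LIMIT:.spec.drivers[*].allocatable.count'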
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should verify that all nodes have volume limits [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41 Only supported for providers [aws gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:36 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:33.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] two pods: should call NodeStage after previous NodeUnstage transient error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 STEP: Building a driver namespace object, basename csi-mock-volumes-275 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 11 00:14:33.608: INFO: creating *v1.ServiceAccount: csi-mock-volumes-275-496/csi-attacher Jun 11 00:14:33.611: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-275 Jun 11 00:14:33.611: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-275 Jun 11 00:14:33.614: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-275 Jun 11 00:14:33.616: INFO: creating *v1.Role: csi-mock-volumes-275-496/external-attacher-cfg-csi-mock-volumes-275 Jun 11 00:14:33.619: INFO: creating *v1.RoleBinding: csi-mock-volumes-275-496/csi-attacher-role-cfg Jun 11 00:14:33.622: INFO: creating *v1.ServiceAccount: csi-mock-volumes-275-496/csi-provisioner Jun 11 00:14:33.624: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-275 Jun 11 00:14:33.624: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-275 Jun 11 00:14:33.627: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-275 Jun 11 00:14:33.630: INFO: creating *v1.Role: csi-mock-volumes-275-496/external-provisioner-cfg-csi-mock-volumes-275 Jun 11 00:14:33.633: INFO: creating *v1.RoleBinding: csi-mock-volumes-275-496/csi-provisioner-role-cfg Jun 11 00:14:33.636: INFO: creating *v1.ServiceAccount: csi-mock-volumes-275-496/csi-resizer Jun 11 00:14:33.639: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-275 Jun 11 00:14:33.639: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-275 Jun 11 00:14:33.641: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-275 Jun 11 00:14:33.644: INFO: creating *v1.Role: csi-mock-volumes-275-496/external-resizer-cfg-csi-mock-volumes-275 Jun 11 00:14:33.647: INFO: creating *v1.RoleBinding: csi-mock-volumes-275-496/csi-resizer-role-cfg Jun 11 00:14:33.650: INFO: creating *v1.ServiceAccount: csi-mock-volumes-275-496/csi-snapshotter Jun 11 00:14:33.652: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-275 Jun 11 00:14:33.652: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-275 Jun 11 00:14:33.656: INFO: creating *v1.ClusterRoleBinding: 
csi-snapshotter-role-csi-mock-volumes-275 Jun 11 00:14:33.659: INFO: creating *v1.Role: csi-mock-volumes-275-496/external-snapshotter-leaderelection-csi-mock-volumes-275 Jun 11 00:14:33.662: INFO: creating *v1.RoleBinding: csi-mock-volumes-275-496/external-snapshotter-leaderelection Jun 11 00:14:33.665: INFO: creating *v1.ServiceAccount: csi-mock-volumes-275-496/csi-mock Jun 11 00:14:33.667: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-275 Jun 11 00:14:33.670: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-275 Jun 11 00:14:33.672: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-275 Jun 11 00:14:33.675: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-275 Jun 11 00:14:33.677: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-275 Jun 11 00:14:33.680: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-275 Jun 11 00:14:33.682: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-275 Jun 11 00:14:33.685: INFO: creating *v1.StatefulSet: csi-mock-volumes-275-496/csi-mockplugin Jun 11 00:14:33.689: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-275 Jun 11 00:14:33.692: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-275" Jun 11 00:14:33.694: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-275 to register on node node2 I0611 00:14:39.769322 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-275","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:14:39.865621 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0611 00:14:39.867043 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-275","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:14:39.868452 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0611 00:14:39.870130 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0611 00:14:40.004124 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-275"},"Error":"","FullError":null} STEP: Creating pod Jun 11 00:14:43.208: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:14:43.212: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-lhjg5] to have phase Bound Jun 11 00:14:43.215: INFO: PersistentVolumeClaim pvc-lhjg5 found but phase is Pending instead of Bound. 
I0611 00:14:43.218634 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d01bd8b8-411b-4807-9424-ab6d474519d3","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d01bd8b8-411b-4807-9424-ab6d474519d3"}}},"Error":"","FullError":null} Jun 11 00:14:45.218: INFO: PersistentVolumeClaim pvc-lhjg5 found and phase=Bound (2.005948845s) Jun 11 00:14:45.234: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-lhjg5] to have phase Bound Jun 11 00:14:45.237: INFO: PersistentVolumeClaim pvc-lhjg5 found and phase=Bound (3.027973ms) I0611 00:14:45.519597 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:14:45.522: INFO: >>> kubeConfig: /root/.kube/config I0611 00:14:45.605809 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d01bd8b8-411b-4807-9424-ab6d474519d3","storage.kubernetes.io/csiProvisionerIdentity":"1654906479869-8081-csi-mock-csi-mock-volumes-275"}},"Response":{},"Error":"","FullError":null} I0611 00:14:45.616885 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:14:45.630: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:45.713: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:14:45.797: INFO: >>> kubeConfig: /root/.kube/config I0611 00:14:45.887771 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/globalmount","target_path":"/var/lib/kubelet/pods/392185b2-2796-4790-8938-c97378540e36/volumes/kubernetes.io~csi/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d01bd8b8-411b-4807-9424-ab6d474519d3","storage.kubernetes.io/csiProvisionerIdentity":"1654906479869-8081-csi-mock-csi-mock-volumes-275"}},"Response":{},"Error":"","FullError":null} I0611 00:14:47.719324 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:14:47.721326 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/392185b2-2796-4790-8938-c97378540e36/volumes/kubernetes.io~csi/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} Jun 11 00:14:51.243: INFO: Deleting pod "pvc-volume-tester-nkvw6" in namespace "csi-mock-volumes-275" Jun 11 00:14:51.247: INFO: Wait up to 
5m0s for pod "pvc-volume-tester-nkvw6" to be fully deleted Jun 11 00:14:55.553: INFO: >>> kubeConfig: /root/.kube/config I0611 00:14:56.077555 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/392185b2-2796-4790-8938-c97378540e36/volumes/kubernetes.io~csi/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/mount"},"Response":{},"Error":"","FullError":null} I0611 00:14:56.099174 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:14:56.100780 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I0611 00:14:56.703362 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:14:56.705168 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I0611 00:14:58.199186 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:14:58.202912 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I0611 00:15:00.224836 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:15:00.226503 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I0611 00:15:04.287088 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:15:04.288998 40 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I0611 00:15:07.312818 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 11 00:15:07.314: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:07.403: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:07.558: INFO: >>> kubeConfig: /root/.kube/config I0611 00:15:07.651476 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/globalmount","target_path":"/var/lib/kubelet/pods/fcce5ea9-1509-4ea3-a43a-1ddeb74a6c03/volumes/kubernetes.io~csi/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d01bd8b8-411b-4807-9424-ab6d474519d3","storage.kubernetes.io/csiProvisionerIdentity":"1654906479869-8081-csi-mock-csi-mock-volumes-275"}},"Response":{},"Error":"","FullError":null} Jun 11 00:15:15.270: INFO: Deleting pod "pvc-volume-tester-bhmzl" in namespace "csi-mock-volumes-275" Jun 11 00:15:15.274: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bhmzl" to be fully deleted Jun 11 00:15:17.634: INFO: >>> kubeConfig: /root/.kube/config I0611 00:15:17.720882 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/fcce5ea9-1509-4ea3-a43a-1ddeb74a6c03/volumes/kubernetes.io~csi/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/mount"},"Response":{},"Error":"","FullError":null} I0611 00:15:17.738627 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0611 00:15:17.750355 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d01bd8b8-411b-4807-9424-ab6d474519d3/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls Jun 11 00:15:28.282: FAIL: while waiting for all CSI calls Unexpected error: <*errors.errorString | 0xc003b999a0>: { s: "Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0)", } Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0) occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.13.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017 +0x79e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00044d200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00044d200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00044d200, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 
+0x2b3 STEP: Deleting pod pvc-volume-tester-nkvw6 Jun 11 00:15:28.283: INFO: Deleting pod "pvc-volume-tester-nkvw6" in namespace "csi-mock-volumes-275" STEP: Deleting pod pvc-volume-tester-bhmzl Jun 11 00:15:28.287: INFO: Deleting pod "pvc-volume-tester-bhmzl" in namespace "csi-mock-volumes-275" STEP: Deleting claim pvc-lhjg5 Jun 11 00:15:28.296: INFO: Waiting up to 2m0s for PersistentVolume pvc-d01bd8b8-411b-4807-9424-ab6d474519d3 to get deleted Jun 11 00:15:28.299: INFO: PersistentVolume pvc-d01bd8b8-411b-4807-9424-ab6d474519d3 found and phase=Bound (2.954238ms) I0611 00:15:28.310087 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 11 00:15:30.304: INFO: PersistentVolume pvc-d01bd8b8-411b-4807-9424-ab6d474519d3 was removed STEP: Deleting storageclass csi-mock-volumes-275-sccvqp9 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-275 STEP: Waiting for namespaces [csi-mock-volumes-275] to vanish STEP: uninstalling csi mock driver Jun 11 00:15:36.331: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-275-496/csi-attacher Jun 11 00:15:36.336: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-275 Jun 11 00:15:36.339: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-275 Jun 11 00:15:36.343: INFO: deleting *v1.Role: csi-mock-volumes-275-496/external-attacher-cfg-csi-mock-volumes-275 Jun 11 00:15:36.346: INFO: deleting *v1.RoleBinding: csi-mock-volumes-275-496/csi-attacher-role-cfg Jun 11 00:15:36.349: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-275-496/csi-provisioner Jun 11 00:15:36.353: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-275 Jun 11 00:15:36.357: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-275 Jun 11 00:15:36.360: INFO: deleting *v1.Role: csi-mock-volumes-275-496/external-provisioner-cfg-csi-mock-volumes-275 Jun 11 00:15:36.364: INFO: deleting *v1.RoleBinding: csi-mock-volumes-275-496/csi-provisioner-role-cfg Jun 11 00:15:36.368: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-275-496/csi-resizer Jun 11 00:15:36.371: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-275 Jun 11 00:15:36.374: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-275 Jun 11 00:15:36.377: INFO: deleting *v1.Role: csi-mock-volumes-275-496/external-resizer-cfg-csi-mock-volumes-275 Jun 11 00:15:36.380: INFO: deleting *v1.RoleBinding: csi-mock-volumes-275-496/csi-resizer-role-cfg Jun 11 00:15:36.385: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-275-496/csi-snapshotter Jun 11 00:15:36.388: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-275 Jun 11 00:15:36.392: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-275 Jun 11 00:15:36.395: INFO: deleting *v1.Role: csi-mock-volumes-275-496/external-snapshotter-leaderelection-csi-mock-volumes-275 Jun 11 00:15:36.398: INFO: deleting *v1.RoleBinding: csi-mock-volumes-275-496/external-snapshotter-leaderelection Jun 11 00:15:36.402: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-275-496/csi-mock Jun 11 00:15:36.405: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-275 Jun 11 00:15:36.409: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-275 Jun 11 00:15:36.412: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-275 Jun 11 00:15:36.415: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-275 Jun 11 00:15:36.418: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-275 Jun 11 00:15:36.421: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-275 Jun 11 00:15:36.424: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-275 Jun 11 00:15:36.427: INFO: deleting *v1.StatefulSet: csi-mock-volumes-275-496/csi-mockplugin Jun 11 00:15:36.430: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-275 STEP: deleting the driver namespace: csi-mock-volumes-275-496 STEP: Waiting for namespaces [csi-mock-volumes-275-496] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:20.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [106.912 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeUnstage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 two pods: should call NodeStage after previous NodeUnstage transient error [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 Jun 11 00:15:28.282: while waiting for all CSI calls Unexpected error: <*errors.errorString | 0xc003b999a0>: { s: "Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0)", } Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0) occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error","total":-1,"completed":13,"skipped":260,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:16:14.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 11 00:16:14.762: INFO: The status of Pod test-hostpath-type-slnjz is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:16:16.766: INFO: The status of Pod test-hostpath-type-slnjz is Pending, waiting for it to be Running (with Ready = true) Jun 11 00:16:18.768: INFO: The status of Pod test-hostpath-type-slnjz is Running (Ready = true) STEP: running on node node1 [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212 [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:22.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-6497" for this suite. • [SLOW TEST:8.075 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset","total":-1,"completed":27,"skipped":798,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:16:22.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should test that deleting a claim before the volume is provisioned deletes the volume. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:511 Jun 11 00:16:22.850: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:22.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-3632" for this suite. S [SKIPPING] [0.032 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should test that deleting a claim before the volume is provisioned deletes the volume. 
[It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:511 Only supported for providers [openstack gce aws gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:517 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:16:22.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 11 00:16:25.016: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-5143d32b-95f5-462e-a661-f3e618111b88 && mount --bind /tmp/local-volume-test-5143d32b-95f5-462e-a661-f3e618111b88 /tmp/local-volume-test-5143d32b-95f5-462e-a661-f3e618111b88] Namespace:persistent-local-volumes-test-4097 PodName:hostexec-node2-htzjl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:16:25.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 11 00:16:25.107: INFO: Creating a PV followed by a PVC Jun 11 00:16:25.114: INFO: Waiting for PV local-pvjbr4v to bind to PVC pvc-tdq4v Jun 11 00:16:25.114: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-tdq4v] to have phase Bound Jun 11 00:16:25.118: INFO: PersistentVolumeClaim pvc-tdq4v found but phase is Pending instead of Bound. 
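The ExecWithOptions call just above (nsenter ... mkdir && mount --bind ...) prepares the host directory, and "Creating a PV followed by a PVC" then wraps that directory in a local PersistentVolume pinned to node2. A minimal sketch of such a PV in Go, assuming the core/v1 API types; the helper name makeLocalPV, the 2Gi size and the local-storage class name are illustrative, not values taken from this run:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makeLocalPV builds a PV backed by a host directory (here the bind-mounted
// /tmp/local-volume-test-... path from the log), restricted to the node that
// owns that directory via required node affinity.
func makeLocalPV(name, path, node string) *corev1.PersistentVolume {
	fs := corev1.PersistentVolumeFilesystem
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"), // illustrative size
			},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:       &fs,
			StorageClassName: "local-storage", // illustrative class name
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: path},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{node},
						}},
					}},
				},
			},
		},
	}
}

Once a PVC requesting the same class exists, the controller binds the pair, which is the pvc-tdq4v Pending-to-Bound transition recorded just below.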
Jun 11 00:16:27.121: INFO: PersistentVolumeClaim pvc-tdq4v found and phase=Bound (2.007652404s) Jun 11 00:16:27.121: INFO: Waiting up to 3m0s for PersistentVolume local-pvjbr4v to have phase Bound Jun 11 00:16:27.124: INFO: PersistentVolume local-pvjbr4v found and phase=Bound (2.315381ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 11 00:16:31.149: INFO: pod "pod-04afcc29-f1c1-43a4-af47-42b227efa9e9" created on Node "node2" STEP: Writing in pod1 Jun 11 00:16:31.150: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4097 PodName:pod-04afcc29-f1c1-43a4-af47-42b227efa9e9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:16:31.150: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:16:31.234: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 11 00:16:31.234: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4097 PodName:pod-04afcc29-f1c1-43a4-af47-42b227efa9e9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:16:31.234: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:16:31.310: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-04afcc29-f1c1-43a4-af47-42b227efa9e9 in namespace persistent-local-volumes-test-4097 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 11 00:16:31.316: INFO: Deleting PersistentVolumeClaim "pvc-tdq4v" Jun 11 00:16:31.319: INFO: Deleting PersistentVolume "local-pvjbr4v" STEP: Removing the test directory Jun 11 00:16:31.324: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-5143d32b-95f5-462e-a661-f3e618111b88 && rm -r /tmp/local-volume-test-5143d32b-95f5-462e-a661-f3e618111b88] Namespace:persistent-local-volumes-test-4097 PodName:hostexec-node2-htzjl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 11 00:16:31.324: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:31.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4097" for this suite. 
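The write/read pair above (mkdir -p /mnt/volume1; echo test-file-content > ... followed by cat) is issued through exec-into-pod calls. Roughly the same plumbing can be written directly against client-go's remotecommand package; this is a sketch only, and the helper name execInPod plus its argument layout are made up here rather than taken from the framework:

package sketch

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs cmd in the named container and returns stdout/stderr,
// roughly what the ExecWithOptions entries in the log amount to.
func execInPod(cs kubernetes.Interface, cfg *restclient.Config, ns, pod, container string, cmd []string) (string, string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   cmd,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}

The stdout of the cat call is what gets compared against test-file-content to confirm the bind-mounted directory is actually visible inside the pod.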
• [SLOW TEST:8.473 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":28,"skipped":856,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:16:09.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 STEP: Building a driver namespace object, basename csi-mock-volumes-999 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:16:09.161: INFO: creating *v1.ServiceAccount: csi-mock-volumes-999-7675/csi-attacher Jun 11 00:16:09.165: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-999 Jun 11 00:16:09.165: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-999 Jun 11 00:16:09.169: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-999 Jun 11 00:16:09.171: INFO: creating *v1.Role: csi-mock-volumes-999-7675/external-attacher-cfg-csi-mock-volumes-999 Jun 11 00:16:09.174: INFO: creating *v1.RoleBinding: csi-mock-volumes-999-7675/csi-attacher-role-cfg Jun 11 00:16:09.177: INFO: creating *v1.ServiceAccount: csi-mock-volumes-999-7675/csi-provisioner Jun 11 00:16:09.179: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-999 Jun 11 00:16:09.179: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-999 Jun 11 00:16:09.182: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-999 Jun 11 00:16:09.185: INFO: creating *v1.Role: csi-mock-volumes-999-7675/external-provisioner-cfg-csi-mock-volumes-999 Jun 11 00:16:09.187: INFO: creating *v1.RoleBinding: csi-mock-volumes-999-7675/csi-provisioner-role-cfg Jun 11 00:16:09.190: INFO: creating *v1.ServiceAccount: csi-mock-volumes-999-7675/csi-resizer Jun 11 00:16:09.193: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-999 Jun 11 00:16:09.193: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-999 Jun 11 00:16:09.195: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-999 Jun 11 00:16:09.198: INFO: creating *v1.Role: 
csi-mock-volumes-999-7675/external-resizer-cfg-csi-mock-volumes-999 Jun 11 00:16:09.201: INFO: creating *v1.RoleBinding: csi-mock-volumes-999-7675/csi-resizer-role-cfg Jun 11 00:16:09.203: INFO: creating *v1.ServiceAccount: csi-mock-volumes-999-7675/csi-snapshotter Jun 11 00:16:09.205: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-999 Jun 11 00:16:09.205: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-999 Jun 11 00:16:09.208: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-999 Jun 11 00:16:09.210: INFO: creating *v1.Role: csi-mock-volumes-999-7675/external-snapshotter-leaderelection-csi-mock-volumes-999 Jun 11 00:16:09.213: INFO: creating *v1.RoleBinding: csi-mock-volumes-999-7675/external-snapshotter-leaderelection Jun 11 00:16:09.215: INFO: creating *v1.ServiceAccount: csi-mock-volumes-999-7675/csi-mock Jun 11 00:16:09.217: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-999 Jun 11 00:16:09.219: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-999 Jun 11 00:16:09.222: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-999 Jun 11 00:16:09.224: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-999 Jun 11 00:16:09.227: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-999 Jun 11 00:16:09.229: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-999 Jun 11 00:16:09.232: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-999 Jun 11 00:16:09.234: INFO: creating *v1.StatefulSet: csi-mock-volumes-999-7675/csi-mockplugin Jun 11 00:16:09.238: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-999 Jun 11 00:16:09.242: INFO: creating *v1.StatefulSet: csi-mock-volumes-999-7675/csi-mockplugin-attacher Jun 11 00:16:09.246: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-999" Jun 11 00:16:09.253: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-999 to register on node node2 STEP: Creating pod Jun 11 00:16:19.276: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Jun 11 00:16:19.294: INFO: Deleting pod "pvc-volume-tester-xccp5" in namespace "csi-mock-volumes-999" Jun 11 00:16:19.299: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xccp5" to be fully deleted STEP: Deleting pod pvc-volume-tester-xccp5 Jun 11 00:16:19.301: INFO: Deleting pod "pvc-volume-tester-xccp5" in namespace "csi-mock-volumes-999" STEP: Deleting claim pvc-4lr44 STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-999 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-999 STEP: Waiting for namespaces [csi-mock-volumes-999] to vanish STEP: uninstalling csi mock driver Jun 11 00:16:25.321: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-999-7675/csi-attacher Jun 11 00:16:25.325: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-999 Jun 11 00:16:25.329: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-999 Jun 11 00:16:25.333: INFO: deleting *v1.Role: csi-mock-volumes-999-7675/external-attacher-cfg-csi-mock-volumes-999 Jun 11 00:16:25.337: INFO: deleting *v1.RoleBinding: csi-mock-volumes-999-7675/csi-attacher-role-cfg Jun 11 00:16:25.344: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-999-7675/csi-provisioner Jun 11 00:16:25.348: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-999 Jun 11 00:16:25.354: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-999 Jun 11 00:16:25.359: INFO: deleting *v1.Role: csi-mock-volumes-999-7675/external-provisioner-cfg-csi-mock-volumes-999 Jun 11 00:16:25.362: INFO: deleting *v1.RoleBinding: csi-mock-volumes-999-7675/csi-provisioner-role-cfg Jun 11 00:16:25.368: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-999-7675/csi-resizer Jun 11 00:16:25.374: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-999 Jun 11 00:16:25.377: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-999 Jun 11 00:16:25.382: INFO: deleting *v1.Role: csi-mock-volumes-999-7675/external-resizer-cfg-csi-mock-volumes-999 Jun 11 00:16:25.386: INFO: deleting *v1.RoleBinding: csi-mock-volumes-999-7675/csi-resizer-role-cfg Jun 11 00:16:25.390: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-999-7675/csi-snapshotter Jun 11 00:16:25.393: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-999 Jun 11 00:16:25.396: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-999 Jun 11 00:16:25.399: INFO: deleting *v1.Role: csi-mock-volumes-999-7675/external-snapshotter-leaderelection-csi-mock-volumes-999 Jun 11 00:16:25.402: INFO: deleting *v1.RoleBinding: csi-mock-volumes-999-7675/external-snapshotter-leaderelection Jun 11 00:16:25.406: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-999-7675/csi-mock Jun 11 00:16:25.409: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-999 Jun 11 00:16:25.413: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-999 Jun 11 00:16:25.416: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-999 Jun 11 00:16:25.419: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-999 Jun 11 00:16:25.422: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-999 Jun 11 00:16:25.426: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-999 Jun 11 00:16:25.429: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-999 Jun 11 00:16:25.433: INFO: deleting *v1.StatefulSet: csi-mock-volumes-999-7675/csi-mockplugin Jun 11 00:16:25.436: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-999 Jun 11 00:16:25.439: INFO: deleting *v1.StatefulSet: csi-mock-volumes-999-7675/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-999-7675 STEP: Waiting for namespaces [csi-mock-volumes-999-7675] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:31.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:22.361 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256 CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 ------------------------------ S 
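The "CSIStorageCapacity used, no capacity" case that finishes here turns on a single CSIDriver field, spec.storageCapacity: with it set to true and no CSIStorageCapacity objects published, the scheduler is left without usable capacity information for late-binding volumes of that driver, so a pod using such a volume is expected to stay unschedulable before the test cleans it up again. A hedged sketch of such a driver object, assuming the storage/v1 types; the helper name and the attachRequired value are illustrative:

package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// capacityTrackedDriver returns a CSIDriver that opts in to storage capacity
// tracking; without matching CSIStorageCapacity objects, pods that need
// late-binding volumes of this driver cannot be scheduled.
func capacityTrackedDriver(name string) *storagev1.CSIDriver {
	storageCapacity := true
	attachRequired := false // illustrative; the mock driver varies this per test
	return &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: storagev1.CSIDriverSpec{
			AttachRequired:  &attachRequired,
			StorageCapacity: &storageCapacity,
		},
	}
}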
------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":22,"skipped":633,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Jun 11 00:16:31.526: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:10.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 STEP: Building a driver namespace object, basename csi-mock-volumes-6804 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:15:10.838: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6804-3521/csi-attacher Jun 11 00:15:10.840: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6804 Jun 11 00:15:10.840: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6804 Jun 11 00:15:10.843: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6804 Jun 11 00:15:10.846: INFO: creating *v1.Role: csi-mock-volumes-6804-3521/external-attacher-cfg-csi-mock-volumes-6804 Jun 11 00:15:10.849: INFO: creating *v1.RoleBinding: csi-mock-volumes-6804-3521/csi-attacher-role-cfg Jun 11 00:15:10.851: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6804-3521/csi-provisioner Jun 11 00:15:10.854: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6804 Jun 11 00:15:10.854: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6804 Jun 11 00:15:10.856: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6804 Jun 11 00:15:10.859: INFO: creating *v1.Role: csi-mock-volumes-6804-3521/external-provisioner-cfg-csi-mock-volumes-6804 Jun 11 00:15:10.861: INFO: creating *v1.RoleBinding: csi-mock-volumes-6804-3521/csi-provisioner-role-cfg Jun 11 00:15:10.864: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6804-3521/csi-resizer Jun 11 00:15:10.867: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6804 Jun 11 00:15:10.867: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6804 Jun 11 00:15:10.871: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6804 Jun 11 00:15:10.874: INFO: creating *v1.Role: csi-mock-volumes-6804-3521/external-resizer-cfg-csi-mock-volumes-6804 Jun 11 00:15:10.876: INFO: creating *v1.RoleBinding: csi-mock-volumes-6804-3521/csi-resizer-role-cfg Jun 11 00:15:10.878: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6804-3521/csi-snapshotter Jun 11 00:15:10.881: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6804 Jun 11 00:15:10.881: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6804 Jun 11 00:15:10.883: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6804 Jun 11 00:15:10.886: INFO: creating *v1.Role: 
csi-mock-volumes-6804-3521/external-snapshotter-leaderelection-csi-mock-volumes-6804 Jun 11 00:15:10.890: INFO: creating *v1.RoleBinding: csi-mock-volumes-6804-3521/external-snapshotter-leaderelection Jun 11 00:15:10.892: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6804-3521/csi-mock Jun 11 00:15:10.894: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6804 Jun 11 00:15:10.897: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6804 Jun 11 00:15:10.900: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6804 Jun 11 00:15:10.903: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6804 Jun 11 00:15:10.906: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6804 Jun 11 00:15:10.909: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6804 Jun 11 00:15:10.911: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6804 Jun 11 00:15:10.914: INFO: creating *v1.StatefulSet: csi-mock-volumes-6804-3521/csi-mockplugin Jun 11 00:15:10.919: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6804 Jun 11 00:15:10.921: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6804" Jun 11 00:15:10.924: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6804 to register on node node1 STEP: Creating pod with fsGroup Jun 11 00:15:20.943: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:15:20.948: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-rr995] to have phase Bound Jun 11 00:15:20.950: INFO: PersistentVolumeClaim pvc-rr995 found but phase is Pending instead of Bound. 
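"Creating pod with fsGroup" above is the half of the fsGroupPolicy=File case that matters: the mock CSIDriver is registered with fsGroupPolicy: File and the test pod sets securityContext.fsGroup, so the kubelet is expected to change ownership of the mounted volume (the group 11485 that shows up in the ls -l output further down). A rough sketch of the two objects, assuming the storage/v1 and core/v1 types; helper names, the busybox image and the claim wiring are illustrative, while 11485 is simply the group id visible in this run's output:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fileFSGroupDriver declares that the driver's volumes support
// kubelet-driven ownership changes (fsGroupPolicy: File).
func fileFSGroupDriver(name string) *storagev1.CSIDriver {
	policy := storagev1.FileFSGroupPolicy
	return &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec:       storagev1.CSIDriverSpec{FSGroupPolicy: &policy},
	}
}

// podWithFSGroup mounts the given PVC and asks for fsGroup 11485,
// the group id visible in the ls -l output of this run.
func podWithFSGroup(name, pvcName string) *corev1.Pod {
	fsGroup := int64(11485)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:  "volume-tester",
				Image: "busybox", // illustrative image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "my-volume",
					MountPath: "/mnt/test",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "my-volume",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: pvcName,
					},
				},
			}},
		},
	}
}

With fsGroupPolicy: None (the sibling case later in this section) the same kind of pod spec is used but the kubelet leaves ownership alone, which is why that run's ls -l shows root:root instead.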
Jun 11 00:15:22.953: INFO: PersistentVolumeClaim pvc-rr995 found and phase=Bound (2.004750381s) Jun 11 00:15:26.976: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-6804] Namespace:csi-mock-volumes-6804 PodName:pvc-volume-tester-m6vql ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:26.976: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:27.053: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-6804/csi-mock-volumes-6804'; sync] Namespace:csi-mock-volumes-6804 PodName:pvc-volume-tester-m6vql ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:27.053: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:29.542: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-6804/csi-mock-volumes-6804] Namespace:csi-mock-volumes-6804 PodName:pvc-volume-tester-m6vql ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:29.542: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:29.978: INFO: pod csi-mock-volumes-6804/pvc-volume-tester-m6vql exec for cmd ls -l /mnt/test/csi-mock-volumes-6804/csi-mock-volumes-6804, stdout: -rw-r--r-- 1 root 11485 13 Jun 11 00:15 /mnt/test/csi-mock-volumes-6804/csi-mock-volumes-6804, stderr: Jun 11 00:15:29.978: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-6804] Namespace:csi-mock-volumes-6804 PodName:pvc-volume-tester-m6vql ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:29.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-m6vql Jun 11 00:15:30.078: INFO: Deleting pod "pvc-volume-tester-m6vql" in namespace "csi-mock-volumes-6804" Jun 11 00:15:30.084: INFO: Wait up to 5m0s for pod "pvc-volume-tester-m6vql" to be fully deleted STEP: Deleting claim pvc-rr995 Jun 11 00:16:08.098: INFO: Waiting up to 2m0s for PersistentVolume pvc-3722ebee-ac64-4164-a73f-03f20b3eeca6 to get deleted Jun 11 00:16:08.100: INFO: PersistentVolume pvc-3722ebee-ac64-4164-a73f-03f20b3eeca6 found and phase=Bound (1.951852ms) Jun 11 00:16:10.104: INFO: PersistentVolume pvc-3722ebee-ac64-4164-a73f-03f20b3eeca6 was removed STEP: Deleting storageclass csi-mock-volumes-6804-scg5l7s STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6804 STEP: Waiting for namespaces [csi-mock-volumes-6804] to vanish STEP: uninstalling csi mock driver Jun 11 00:16:16.117: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6804-3521/csi-attacher Jun 11 00:16:16.122: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6804 Jun 11 00:16:16.126: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6804 Jun 11 00:16:16.129: INFO: deleting *v1.Role: csi-mock-volumes-6804-3521/external-attacher-cfg-csi-mock-volumes-6804 Jun 11 00:16:16.133: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6804-3521/csi-attacher-role-cfg Jun 11 00:16:16.137: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6804-3521/csi-provisioner Jun 11 00:16:16.141: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6804 Jun 11 00:16:16.144: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6804 Jun 11 00:16:16.148: INFO: deleting *v1.Role: 
csi-mock-volumes-6804-3521/external-provisioner-cfg-csi-mock-volumes-6804 Jun 11 00:16:16.151: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6804-3521/csi-provisioner-role-cfg Jun 11 00:16:16.155: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6804-3521/csi-resizer Jun 11 00:16:16.158: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6804 Jun 11 00:16:16.162: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6804 Jun 11 00:16:16.165: INFO: deleting *v1.Role: csi-mock-volumes-6804-3521/external-resizer-cfg-csi-mock-volumes-6804 Jun 11 00:16:16.168: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6804-3521/csi-resizer-role-cfg Jun 11 00:16:16.172: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6804-3521/csi-snapshotter Jun 11 00:16:16.175: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6804 Jun 11 00:16:16.180: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6804 Jun 11 00:16:16.183: INFO: deleting *v1.Role: csi-mock-volumes-6804-3521/external-snapshotter-leaderelection-csi-mock-volumes-6804 Jun 11 00:16:16.186: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6804-3521/external-snapshotter-leaderelection Jun 11 00:16:16.189: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6804-3521/csi-mock Jun 11 00:16:16.194: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6804 Jun 11 00:16:16.197: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6804 Jun 11 00:16:16.200: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6804 Jun 11 00:16:16.203: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6804 Jun 11 00:16:16.206: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6804 Jun 11 00:16:16.210: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6804 Jun 11 00:16:16.213: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6804 Jun 11 00:16:16.217: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6804-3521/csi-mockplugin Jun 11 00:16:16.220: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6804 STEP: deleting the driver namespace: csi-mock-volumes-6804-3521 STEP: Waiting for namespaces [csi-mock-volumes-6804-3521] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:44.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:93.470 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1558 should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":11,"skipped":306,"failed":0} Jun 11 00:16:44.254: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:15:37.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 STEP: Building a driver namespace object, basename csi-mock-volumes-2584 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 11 00:15:37.836: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2584-6560/csi-attacher Jun 11 00:15:37.839: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2584 Jun 11 00:15:37.839: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2584 Jun 11 00:15:37.841: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2584 Jun 11 00:15:37.844: INFO: creating *v1.Role: csi-mock-volumes-2584-6560/external-attacher-cfg-csi-mock-volumes-2584 Jun 11 00:15:37.847: INFO: creating *v1.RoleBinding: csi-mock-volumes-2584-6560/csi-attacher-role-cfg Jun 11 00:15:37.850: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2584-6560/csi-provisioner Jun 11 00:15:37.852: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2584 Jun 11 00:15:37.852: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2584 Jun 11 00:15:37.855: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2584 Jun 11 00:15:37.858: INFO: creating *v1.Role: csi-mock-volumes-2584-6560/external-provisioner-cfg-csi-mock-volumes-2584 Jun 11 00:15:37.862: INFO: creating *v1.RoleBinding: csi-mock-volumes-2584-6560/csi-provisioner-role-cfg Jun 11 00:15:37.865: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2584-6560/csi-resizer Jun 11 00:15:37.868: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2584 Jun 11 00:15:37.868: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2584 Jun 11 00:15:37.871: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2584 Jun 11 00:15:37.874: INFO: creating *v1.Role: csi-mock-volumes-2584-6560/external-resizer-cfg-csi-mock-volumes-2584 Jun 11 00:15:37.877: INFO: creating *v1.RoleBinding: csi-mock-volumes-2584-6560/csi-resizer-role-cfg Jun 11 00:15:37.879: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2584-6560/csi-snapshotter Jun 11 00:15:37.882: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2584 Jun 11 00:15:37.882: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2584 Jun 11 00:15:37.884: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2584 Jun 11 00:15:37.887: INFO: creating *v1.Role: csi-mock-volumes-2584-6560/external-snapshotter-leaderelection-csi-mock-volumes-2584 Jun 11 00:15:37.889: INFO: creating *v1.RoleBinding: csi-mock-volumes-2584-6560/external-snapshotter-leaderelection Jun 11 00:15:37.892: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2584-6560/csi-mock Jun 11 00:15:37.894: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2584 Jun 11 00:15:37.896: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2584 Jun 11 00:15:37.899: INFO: creating 
*v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2584 Jun 11 00:15:37.901: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2584 Jun 11 00:15:37.903: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2584 Jun 11 00:15:37.905: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2584 Jun 11 00:15:37.908: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2584 Jun 11 00:15:37.911: INFO: creating *v1.StatefulSet: csi-mock-volumes-2584-6560/csi-mockplugin Jun 11 00:15:37.915: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2584 Jun 11 00:15:37.918: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2584" Jun 11 00:15:37.920: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2584 to register on node node2 STEP: Creating pod with fsGroup Jun 11 00:15:47.933: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 00:15:47.937: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-vt9b9] to have phase Bound Jun 11 00:15:47.939: INFO: PersistentVolumeClaim pvc-vt9b9 found but phase is Pending instead of Bound. Jun 11 00:15:49.945: INFO: PersistentVolumeClaim pvc-vt9b9 found and phase=Bound (2.00803918s) Jun 11 00:15:53.969: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-2584] Namespace:csi-mock-volumes-2584 PodName:pvc-volume-tester-qfzgl ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:53.969: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:54.053: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-2584/csi-mock-volumes-2584'; sync] Namespace:csi-mock-volumes-2584 PodName:pvc-volume-tester-qfzgl ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:54.053: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:56.317: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-2584/csi-mock-volumes-2584] Namespace:csi-mock-volumes-2584 PodName:pvc-volume-tester-qfzgl ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:56.317: INFO: >>> kubeConfig: /root/.kube/config Jun 11 00:15:56.511: INFO: pod csi-mock-volumes-2584/pvc-volume-tester-qfzgl exec for cmd ls -l /mnt/test/csi-mock-volumes-2584/csi-mock-volumes-2584, stdout: -rw-r--r-- 1 root root 13 Jun 11 00:15 /mnt/test/csi-mock-volumes-2584/csi-mock-volumes-2584, stderr: Jun 11 00:15:56.511: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-2584] Namespace:csi-mock-volumes-2584 PodName:pvc-volume-tester-qfzgl ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 11 00:15:56.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-qfzgl Jun 11 00:15:56.616: INFO: Deleting pod "pvc-volume-tester-qfzgl" in namespace "csi-mock-volumes-2584" Jun 11 00:15:56.621: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qfzgl" to be fully deleted STEP: Deleting claim pvc-vt9b9 Jun 11 00:16:30.633: INFO: Waiting up to 2m0s for PersistentVolume pvc-eee89b33-519e-4898-b3cf-1bdc4f0bf217 to get deleted Jun 11 00:16:30.635: INFO: PersistentVolume pvc-eee89b33-519e-4898-b3cf-1bdc4f0bf217 found and phase=Bound 
(2.098781ms) Jun 11 00:16:32.638: INFO: PersistentVolume pvc-eee89b33-519e-4898-b3cf-1bdc4f0bf217 was removed STEP: Deleting storageclass csi-mock-volumes-2584-scvmp8x STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2584 STEP: Waiting for namespaces [csi-mock-volumes-2584] to vanish STEP: uninstalling csi mock driver Jun 11 00:16:38.653: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2584-6560/csi-attacher Jun 11 00:16:38.658: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2584 Jun 11 00:16:38.661: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2584 Jun 11 00:16:38.665: INFO: deleting *v1.Role: csi-mock-volumes-2584-6560/external-attacher-cfg-csi-mock-volumes-2584 Jun 11 00:16:38.668: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2584-6560/csi-attacher-role-cfg Jun 11 00:16:38.672: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2584-6560/csi-provisioner Jun 11 00:16:38.675: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2584 Jun 11 00:16:38.680: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2584 Jun 11 00:16:38.686: INFO: deleting *v1.Role: csi-mock-volumes-2584-6560/external-provisioner-cfg-csi-mock-volumes-2584 Jun 11 00:16:38.695: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2584-6560/csi-provisioner-role-cfg Jun 11 00:16:38.704: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2584-6560/csi-resizer Jun 11 00:16:38.710: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2584 Jun 11 00:16:38.714: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2584 Jun 11 00:16:38.717: INFO: deleting *v1.Role: csi-mock-volumes-2584-6560/external-resizer-cfg-csi-mock-volumes-2584 Jun 11 00:16:38.720: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2584-6560/csi-resizer-role-cfg Jun 11 00:16:38.723: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2584-6560/csi-snapshotter Jun 11 00:16:38.727: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2584 Jun 11 00:16:38.731: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2584 Jun 11 00:16:38.734: INFO: deleting *v1.Role: csi-mock-volumes-2584-6560/external-snapshotter-leaderelection-csi-mock-volumes-2584 Jun 11 00:16:38.737: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2584-6560/external-snapshotter-leaderelection Jun 11 00:16:38.741: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2584-6560/csi-mock Jun 11 00:16:38.744: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2584 Jun 11 00:16:38.748: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2584 Jun 11 00:16:38.751: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2584 Jun 11 00:16:38.754: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2584 Jun 11 00:16:38.757: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2584 Jun 11 00:16:38.760: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2584 Jun 11 00:16:38.764: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2584 Jun 11 00:16:38.767: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2584-6560/csi-mockplugin Jun 11 00:16:38.771: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2584 STEP: deleting the driver namespace: 
csi-mock-volumes-2584-6560 STEP: Waiting for namespaces [csi-mock-volumes-2584-6560] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:44.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:67.012 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1558 should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:14:59.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 STEP: Create configmap STEP: Creating pod pod-subpath-test-configmap-ck9z STEP: Failing liveness probe Jun 11 00:15:13.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=subpath-7068 exec pod-subpath-test-configmap-ck9z --container test-container-volume-configmap-ck9z -- /bin/sh -c rm /probe-volume/probe-file' Jun 11 00:15:14.192: INFO: stderr: "" Jun 11 00:15:14.192: INFO: stdout: "" Jun 11 00:15:14.192: INFO: Pod exec output: STEP: Waiting for container to restart Jun 11 00:15:14.195: INFO: Container test-container-subpath-configmap-ck9z, restarts: 0 Jun 11 00:15:24.202: INFO: Container test-container-subpath-configmap-ck9z, restarts: 2 Jun 11 00:15:24.202: INFO: Container has restart count: 2 STEP: Fix liveness probe STEP: Waiting for container to stop restarting Jun 11 00:15:44.215: INFO: Container has restart count: 3 Jun 11 00:16:46.214: INFO: Container restart has stabilized Jun 11 00:16:46.214: INFO: Deleting pod "pod-subpath-test-configmap-ck9z" in namespace "subpath-7068" Jun 11 00:16:46.221: INFO: Wait up to 5m0s for pod "pod-subpath-test-configmap-ck9z" to be fully deleted [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:16:58.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7068" for this suite. 
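The Subpath "container can restart" flow above works by sabotaging a liveness probe: the probe execs cat /probe-volume/probe-file, the test removes that file (the kubectl exec ... rm at 00:15:13), the kubelet restarts the container a few times, and once the probe target is restored the restart count settles. A sketch of that kind of probe, assuming the v1.21-era core/v1 types (where the handler field is still named Handler); the exact thresholds here are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// livenessOnProbeFile fails as soon as /probe-volume/probe-file disappears,
// which is what the test triggers with `rm /probe-volume/probe-file`.
func livenessOnProbeFile() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{
				Command: []string{"cat", "/probe-volume/probe-file"},
			},
		},
		InitialDelaySeconds: 1,
		PeriodSeconds:       2,
		FailureThreshold:    1,
	}
}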
• [SLOW TEST:118.546 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Container restart /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130 should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":17,"skipped":819,"failed":0} Jun 11 00:16:58.241: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 11 00:16:20.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeUnstage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-8925 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 11 00:16:20.543: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8925-7130/csi-attacher Jun 11 00:16:20.546: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8925 Jun 11 00:16:20.546: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8925 Jun 11 00:16:20.548: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8925 Jun 11 00:16:20.551: INFO: creating *v1.Role: csi-mock-volumes-8925-7130/external-attacher-cfg-csi-mock-volumes-8925 Jun 11 00:16:20.553: INFO: creating *v1.RoleBinding: csi-mock-volumes-8925-7130/csi-attacher-role-cfg Jun 11 00:16:20.556: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8925-7130/csi-provisioner Jun 11 00:16:20.558: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8925 Jun 11 00:16:20.558: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8925 Jun 11 00:16:20.561: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8925 Jun 11 00:16:20.564: INFO: creating *v1.Role: csi-mock-volumes-8925-7130/external-provisioner-cfg-csi-mock-volumes-8925 Jun 11 00:16:20.567: INFO: creating *v1.RoleBinding: csi-mock-volumes-8925-7130/csi-provisioner-role-cfg Jun 11 00:16:20.569: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8925-7130/csi-resizer Jun 11 00:16:20.572: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8925 Jun 11 00:16:20.572: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8925 Jun 11 00:16:20.575: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8925 Jun 11 00:16:20.580: INFO: creating *v1.Role: csi-mock-volumes-8925-7130/external-resizer-cfg-csi-mock-volumes-8925 Jun 11 00:16:20.584: INFO: creating *v1.RoleBinding: csi-mock-volumes-8925-7130/csi-resizer-role-cfg Jun 11 00:16:20.589: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8925-7130/csi-snapshotter Jun 11 00:16:20.596: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-8925 Jun 11 00:16:20.596: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8925 Jun 11 00:16:20.599: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8925 Jun 11 00:16:20.602: INFO: creating *v1.Role: csi-mock-volumes-8925-7130/external-snapshotter-leaderelection-csi-mock-volumes-8925 Jun 11 00:16:20.605: INFO: creating *v1.RoleBinding: csi-mock-volumes-8925-7130/external-snapshotter-leaderelection Jun 11 00:16:20.608: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8925-7130/csi-mock Jun 11 00:16:20.611: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8925 Jun 11 00:16:20.614: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8925 Jun 11 00:16:20.616: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8925 Jun 11 00:16:20.619: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8925 Jun 11 00:16:20.622: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8925 Jun 11 00:16:20.624: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8925 Jun 11 00:16:20.627: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8925 Jun 11 00:16:20.629: INFO: creating *v1.StatefulSet: csi-mock-volumes-8925-7130/csi-mockplugin Jun 11 00:16:20.634: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8925 Jun 11 00:16:20.636: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8925" Jun 11 00:16:20.638: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8925 to register on node node2 I0611 00:16:25.722163 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8925","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:16:25.818881 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0611 00:16:25.820117 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8925","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0611 00:16:25.821254 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0611 00:16:25.822753 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0611 00:16:26.685148 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8925"},"Error":"","FullError":null} STEP: Creating pod Jun 11 00:16:30.156: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 11 
00:16:30.162: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-l2gs2] to have phase Bound
Jun 11 00:16:30.165: INFO: PersistentVolumeClaim pvc-l2gs2 found but phase is Pending instead of Bound.
I0611 00:16:30.172032 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c"}}},"Error":"","FullError":null}
Jun 11 00:16:32.168: INFO: PersistentVolumeClaim pvc-l2gs2 found and phase=Bound (2.006245435s)
Jun 11 00:16:32.183: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-l2gs2] to have phase Bound
Jun 11 00:16:32.186: INFO: PersistentVolumeClaim pvc-l2gs2 found and phase=Bound (2.273367ms)
STEP: Waiting for expected CSI calls
I0611 00:16:32.463908 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0611 00:16:32.466070 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c","storage.kubernetes.io/csiProvisionerIdentity":"1654906585822-8081-csi-mock-csi-mock-volumes-8925"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}}
I0611 00:16:32.977807 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0611 00:16:32.979795 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c","storage.kubernetes.io/csiProvisionerIdentity":"1654906585822-8081-csi-mock-csi-mock-volumes-8925"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}}
STEP: Deleting the previously created pod
Jun 11 00:16:33.186: INFO: Deleting pod "pvc-volume-tester-fpzfk" in namespace "csi-mock-volumes-8925"
Jun 11 00:16:33.191: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fpzfk" to be fully deleted
I0611 00:16:33.992369 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0611 00:16:33.994837 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c","storage.kubernetes.io/csiProvisionerIdentity":"1654906585822-8081-csi-mock-csi-mock-volumes-8925"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}}
I0611 00:16:36.009653 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0611 00:16:36.012271 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c","storage.kubernetes.io/csiProvisionerIdentity":"1654906585822-8081-csi-mock-csi-mock-volumes-8925"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}}
STEP: Waiting for all remaining expected CSI calls
I0611 00:16:37.327307 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0611 00:16:37.330489 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c/globalmount"},"Response":{},"Error":"","FullError":null}
STEP: Deleting pod pvc-volume-tester-fpzfk
Jun 11 00:16:38.198: INFO: Deleting pod "pvc-volume-tester-fpzfk" in namespace "csi-mock-volumes-8925"
STEP: Deleting claim pvc-l2gs2
Jun 11 00:16:38.209: INFO: Waiting up to 2m0s for PersistentVolume pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c to get deleted
Jun 11 00:16:38.211: INFO: PersistentVolume pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c found and phase=Bound (2.505711ms)
I0611 00:16:38.222144 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
Jun 11 00:16:40.214: INFO: PersistentVolume pvc-c71e4529-59c2-4dbb-91b5-f16092f9594c was removed
STEP: Deleting storageclass csi-mock-volumes-8925-scdr59n
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-8925
STEP: Waiting for namespaces [csi-mock-volumes-8925] to vanish
STEP: uninstalling csi mock driver
Jun 11 00:16:46.245: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8925-7130/csi-attacher
Jun 11 00:16:46.249: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8925
Jun 11 00:16:46.252: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8925
Jun 11 00:16:46.256: INFO: deleting *v1.Role: csi-mock-volumes-8925-7130/external-attacher-cfg-csi-mock-volumes-8925
Jun 11 00:16:46.260: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8925-7130/csi-attacher-role-cfg
Jun 11 00:16:46.263: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8925-7130/csi-provisioner
Jun 11 00:16:46.267: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8925
Jun 11 00:16:46.270: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8925
Jun 11 00:16:46.274: INFO: deleting *v1.Role: csi-mock-volumes-8925-7130/external-provisioner-cfg-csi-mock-volumes-8925
Jun 11 00:16:46.280: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8925-7130/csi-provisioner-role-cfg
Jun 11 00:16:46.287: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8925-7130/csi-resizer
Jun 11 00:16:46.291: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8925
Jun 11 00:16:46.297: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8925
Jun 11 00:16:46.301: INFO: deleting *v1.Role: csi-mock-volumes-8925-7130/external-resizer-cfg-csi-mock-volumes-8925
Jun 11 00:16:46.304: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8925-7130/csi-resizer-role-cfg
Jun 11 00:16:46.307: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8925-7130/csi-snapshotter
Jun 11 00:16:46.311: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8925
Jun 11 00:16:46.314: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8925
Jun 11 00:16:46.317: INFO: deleting *v1.Role: csi-mock-volumes-8925-7130/external-snapshotter-leaderelection-csi-mock-volumes-8925
Jun 11 00:16:46.321: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8925-7130/external-snapshotter-leaderelection
Jun 11 00:16:46.324: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8925-7130/csi-mock
Jun 11 00:16:46.327: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8925
Jun 11 00:16:46.330: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8925
Jun 11 00:16:46.334: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8925
Jun 11 00:16:46.337: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8925
Jun 11 00:16:46.340: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8925
Jun 11 00:16:46.344: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8925
Jun 11 00:16:46.347: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8925
Jun 11 00:16:46.351: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8925-7130/csi-mockplugin
Jun 11 00:16:46.354: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8925
STEP: deleting the driver namespace: csi-mock-volumes-8925-7130
STEP: Waiting for namespaces [csi-mock-volumes-8925-7130] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 11 00:17:30.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:69.901 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI NodeStage error cases [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734
should call NodeUnstage after NodeStage ephemeral error
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error","total":-1,"completed":14,"skipped":267,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error"]}
Jun 11 00:17:30.379: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 11 00:07:31.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[It] should fail due to non-existent path
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307
STEP: Creating local PVC and PV
Jun 11 00:07:31.910: INFO: Creating a PV followed by a PVC
Jun 11 00:07:31.918: INFO: Waiting for PV local-pvwdlhw to bind to PVC pvc-86jwf
Jun 11 00:07:31.918: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-86jwf] to have phase Bound
Jun 11 00:07:31.920: INFO: PersistentVolumeClaim pvc-86jwf found but phase is Pending instead of Bound.
Jun 11 00:07:33.923: INFO: PersistentVolumeClaim pvc-86jwf found and phase=Bound (2.004619797s)
Jun 11 00:07:33.923: INFO: Waiting up to 3m0s for PersistentVolume local-pvwdlhw to have phase Bound
Jun 11 00:07:33.925: INFO: PersistentVolume local-pvwdlhw found and phase=Bound (2.304713ms)
STEP: Creating a pod
STEP: Cleaning up PVC and PV
Jun 11 00:17:33.958: INFO: Deleting PersistentVolumeClaim "pvc-86jwf"
Jun 11 00:17:33.962: INFO: Deleting PersistentVolume "local-pvwdlhw"
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 11 00:17:33.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-5271" for this suite.
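Note: the "Waiting up to ... for PersistentVolumeClaims [...] to have phase Bound" and "... to get deleted" entries above (and throughout this run) come from simple poll loops in the test framework. A minimal client-go sketch of that pattern is shown below; the helper names and intervals are illustrative, not the framework's actual code.

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls until the named PVC reports phase Bound, mirroring the
// "Waiting up to timeout=... for PersistentVolumeClaims [...] to have phase Bound" lines.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != corev1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
			return false, nil // keep polling
		}
		return true, nil
	})
}

// waitForPVDeleted polls until the named PV is gone, mirroring the
// "Waiting up to ... for PersistentVolume ... to get deleted" lines.
func waitForPVDeleted(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // PV was removed
		}
		return false, err // still present (err == nil) or a real failure
	})
}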
• [SLOW TEST:602.098 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Local volume that cannot be mounted [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304
should fail due to non-existent path
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to non-existent path","total":-1,"completed":8,"skipped":173,"failed":0}
Jun 11 00:17:33.978: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 11 00:14:25.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469
STEP: Creating configMap with name cm-test-opt-create-05a643e8-3290-40b8-9f3a-212c8c5fa455
STEP: Creating the pod
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 11 00:19:25.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-359" for this suite.
• [SLOW TEST:300.066 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":15,"skipped":634,"failed":0}
Jun 11 00:19:25.961: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 11 00:14:32.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to secret object does not exist [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430
STEP: Creating the pod
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 11 00:19:32.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1591" for this suite.
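Note: the "Should fail non-optional pod creation" specs above create a pod whose projected volume references data that does not exist (a missing configMap key, or a missing Secret object) with optional=false, so the kubelet never completes the volume mount and the pod never starts; the spec then waits out the timeout, which is why these runs each take ~300 seconds. A rough sketch of such a pod using the Go client types follows; the object names, image, and paths are made up for illustration and are not the ones generated by the test.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithMissingConfigMapKey builds a pod whose projected configMap volume
// refers to a key that is not present and marks it non-optional, so the
// kubelet refuses to set up the volume until the key appears.
func podWithMissingConfigMapKey(cmName string) *corev1.Pod {
	optional := false // non-optional: a missing key blocks volume setup
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmap-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // example image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
								// Key does not exist in the configMap, so with
								// Optional=false the mount can never complete.
								Items:    []corev1.KeyToPath{{Key: "does-not-exist", Path: "path/to/data"}},
								Optional: &optional,
							},
						}},
					},
				},
			}},
		},
	}
}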
• [SLOW TEST:300.058 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
Should fail non-optional pod creation due to secret object does not exist [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430
------------------------------
{"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":14,"skipped":382,"failed":0}
Jun 11 00:19:32.684: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 11 00:14:28.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[It] should fail due to wrong node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324
STEP: Initializing test volumes
Jun 11 00:14:31.045: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5ed33eb2-c91b-41d8-951b-83b6c17040e9] Namespace:persistent-local-volumes-test-7620 PodName:hostexec-node2-bvf9x ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 11 00:14:31.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Jun 11 00:14:31.141: INFO: Creating a PV followed by a PVC
Jun 11 00:14:31.147: INFO: Waiting for PV local-pvc7rn9 to bind to PVC pvc-q9j9b
Jun 11 00:14:31.147: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-q9j9b] to have phase Bound
Jun 11 00:14:31.149: INFO: PersistentVolumeClaim pvc-q9j9b found but phase is Pending instead of Bound.
Jun 11 00:14:33.154: INFO: PersistentVolumeClaim pvc-q9j9b found and phase=Bound (2.006987412s)
Jun 11 00:14:33.154: INFO: Waiting up to 3m0s for PersistentVolume local-pvc7rn9 to have phase Bound
Jun 11 00:14:33.156: INFO: PersistentVolume local-pvc7rn9 found and phase=Bound (2.092322ms)
STEP: Cleaning up PVC and PV
Jun 11 00:19:33.180: INFO: Deleting PersistentVolumeClaim "pvc-q9j9b"
Jun 11 00:19:33.184: INFO: Deleting PersistentVolume "local-pvc7rn9"
STEP: Removing the test directory
Jun 11 00:19:33.188: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5ed33eb2-c91b-41d8-951b-83b6c17040e9] Namespace:persistent-local-volumes-test-7620 PodName:hostexec-node2-bvf9x ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 11 00:19:33.188: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 11 00:19:33.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-7620" for this suite.
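Note: the "should fail due to wrong node" spec above provisions a directory on node2 (the nsenter mkdir), publishes it as a local PersistentVolume whose node affinity pins it to that node, and then tries to run the consuming pod elsewhere, which must fail. A rough Go sketch of such a local PV follows; the sizes, class name, and function name are placeholders rather than the values generated by the framework.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPVPinnedTo returns a local PersistentVolume usable only by pods that
// land on the given node, since local volumes require node affinity on
// kubernetes.io/hostname.
func localPVPinnedTo(nodeName, path string) *corev1.PersistentVolume {
	fsMode := corev1.PersistentVolumeFilesystem
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"), // placeholder size
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage", // placeholder class
			VolumeMode:                    &fsMode,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: path},
			},
			// Only pods scheduled onto nodeName can use this PV; forcing the
			// consuming pod onto a different node makes the claim unusable.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
}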
• [SLOW TEST:304.310 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Local volume that cannot be mounted [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304
should fail due to wrong node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to wrong node","total":-1,"completed":9,"skipped":388,"failed":0}
Jun 11 00:19:33.306: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 11 00:16:31.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to secret object does not exist [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411
STEP: Creating the pod
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 11 00:21:31.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9133" for this suite.
• [SLOW TEST:300.065 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
Should fail non-optional pod creation due to secret object does not exist [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411
------------------------------
{"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":23,"skipped":664,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
Jun 11 00:21:31.590: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":24,"skipped":1077,"failed":0}
Jun 11 00:16:44.792: INFO: Running AfterSuite actions on all nodes
Jun 11 00:21:31.624: INFO: Running AfterSuite actions on node 1
Jun 11 00:21:31.624: INFO: Skipping dumping logs from cluster

Summarizing 3 Failures:

[Fail] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume [It] should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:808

[Fail] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] [It] two pods: should call NodeStage after previous NodeUnstage final error
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017

[Fail] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] [It] two pods: should call NodeStage after previous NodeUnstage transient error
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017

Ran 166 of 5773 Specs in 1134.089 seconds
FAIL! -- 163 Passed | 3 Failed | 0 Pending | 5607 Skipped

Ginkgo ran 1 suite in 18m55.719967182s
Test Suite Failed
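Note: the two failed "CSI NodeUnstage error cases" specs in the summary, like the NodeStage spec that passed earlier in this run, drive a mock CSI driver that deliberately returns gRPC errors (the "fake error" / code 4 entries in the gRPCCall logs) and then asserts which calls kubelet makes next. The sketch below shows error injection of that general shape; it is not the actual csi-mock plugin code, and the struct, counter, and threshold are hypothetical.

package e2esketch

import (
	"context"
	"sync/atomic"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// flakyNodeServer is a hypothetical CSI node service that fails the first few
// NodeStageVolume calls, mimicking the "fake error" entries in this log.
// The remaining csi.NodeServer methods are omitted for brevity.
type flakyNodeServer struct {
	stageAttempts int64 // NodeStageVolume calls seen so far
	failFirstN    int64 // how many calls to fail before succeeding
}

// NodeStageVolume returns DeadlineExceeded (gRPC code 4) for the first
// failFirstN calls, then succeeds; the e2e spec checks that kubelet retries
// staging and, once the pod is deleted, still calls NodeUnstageVolume.
func (s *flakyNodeServer) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
	n := atomic.AddInt64(&s.stageAttempts, 1)
	if n <= s.failFirstN {
		// Same shape as the log entries: Response=null, code=4, message="fake error".
		return nil, status.Error(codes.DeadlineExceeded, "fake error")
	}
	// A real driver would mount req.StagingTargetPath here.
	return &csi.NodeStageVolumeResponse{}, nil
}

// NodeUnstageVolume always succeeds, matching the single successful
// NodeUnstageVolume call recorded after the test pod was deleted.
func (s *flakyNodeServer) NodeUnstageVolume(ctx context.Context, req *csi.NodeUnstageVolumeRequest) (*csi.NodeUnstageVolumeResponse, error) {
	return &csi.NodeUnstageVolumeResponse{}, nil
}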